Diffstat (limited to 'tensorflow/g3doc/api_docs/python')
mode        file  lines
-rw-r--r--  tensorflow/g3doc/api_docs/python/array_ops.md  3168
-rw-r--r--  tensorflow/g3doc/api_docs/python/check_ops.md  510
-rw-r--r--  tensorflow/g3doc/api_docs/python/client.md  1199
-rw-r--r--  tensorflow/g3doc/api_docs/python/constant_op.md  775
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.bayesflow.entropy.md  304
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.bayesflow.monte_carlo.md  206
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.bayesflow.stochastic_graph.md  46
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.bayesflow.stochastic_tensor.md  467
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.bayesflow.variational_inference.md  171
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.copy_graph.md  86
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.crf.md  212
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md  4336
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.distributions.md  27438
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md  61
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.framework.md  1205
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.graph_editor.md  2054
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.integrate.md  100
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.layers.md  2340
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.learn.md  5510
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md  2684
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md  587
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.linalg.md  4413
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.losses.md  472
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.metrics.md  1971
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.opt.md  454
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.rnn.md  2203
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.training.md  1057
-rw-r--r--  tensorflow/g3doc/api_docs/python/contrib.util.md  157
-rw-r--r--  tensorflow/g3doc/api_docs/python/control_flow_ops.md  808
-rw-r--r--  tensorflow/g3doc/api_docs/python/framework.md  3969
-rw-r--r--  tensorflow/g3doc/api_docs/python/functional_ops.md  299
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.PriorityQueue.from_list.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.ReaderBase.md  183
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseFeature.md  78
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseTensorValue.md  50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md  281
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md  397
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.VarLenFeature.__new__.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.VariableScope.md  159
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_check_numerics_ops.md  13
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_n.md  20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_to_collection.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_greater_equal.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_non_positive.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.case.md  75
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cholesky.md  20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cond.md  54
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.md  89
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Bernoulli.md  593
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Chi2WithAbsDf.md  572
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Dirichlet.md  682
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Distribution.md  690
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.GammaWithSoftplusConcentrationRate.md  565
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.md  578
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.RegisterKL.md  43
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.RelaxedOneHotCategorical.md  650
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.bijector.CholeskyOuterProduct.md  301
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.bijector.SigmoidCentered.md  276
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.framework.assign_from_checkpoint.md  27
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.framework.deprecated_args.md  41
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.add_control_inputs.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.detach_inputs.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.filter_ops.md  20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.make_placeholder_from_tensor.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.placeholder_name.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.select_ops.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.swap_inputs.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.swap_ios.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.convolution2d_in_plane.md  49
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.l2_regularizer.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.real_valued_column.md  44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.sparse_column_with_keys.md  27
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.weighted_sparse_column.md  43
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.LinearRegressor.md  412
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.ModelFnOps.md  135
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_pandas_matrix.md  13
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.make_export_strategy.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.monitors.replace_monitors_with_hooks.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.read_batch_record_features.md  32
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.attention_decoder.md  60
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.embedding_rnn_decoder.md  48
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq.md  46
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.sequence_loss.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.linalg.LinearOperatorDiag.md  532
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.losses.get_losses.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.set_size.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_mean_tensor.md  48
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.AttentionCellWrapper.md  77
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.MultiRNNCell.md  66
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.OutputProjectionWrapper.md  68
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.stack_bidirectional_dynamic_rnn.md  50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.training.weighted_resample.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.util.ops_used_by_graph_def.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.convert_to_tensor_or_sparse_tensor.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cumprod.md  44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.delete_session_tensor.md  20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.depth_to_space.md  95
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.device.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.CancelledError.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.DataLossError.md  13
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.DeadlineExceededError.md  11
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fake_quant_with_min_max_vars_gradient.md  27
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fake_quant_with_min_max_vars_per_channel_gradient.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fft.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fft2d.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.floormod.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.get_seed.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.get_session_handle.md  40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.global_norm.md  27
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.ifft3d.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.adjust_brightness.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.adjust_gamma.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.decode_jpeg.md  48
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.grayscale_to_rgb.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_brightness.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.rgb_to_grayscale.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_finite.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_nan.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_non_decreasing.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_numeric_tensor.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.mod.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.name_scope.md  39
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.avg_pool3d.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.conv2d_backprop_filter.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.ctc_loss.md  103
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.l2_normalize.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.norm.md  66
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.not_equal.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.orthogonal_initializer.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.py_func.md  51
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.quantize_v2.md  74
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_sum.md  42
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reshape.md  73
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reverse_sequence.md  76
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.segment_min.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_tensor_to_dense.md  43
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sqrt.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.FileWriterCache.clear.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.TaggedRunMetadata.md  252
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.merge_all.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.ExponentialMovingAverage.md  232
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.GlobalStepWaiterHook.md  87
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.SessionRunArgs.__new__.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.SessionRunHook.md  97
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.add_queue_runner.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.limit_epochs.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.tuple.md  36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.zeros_initializer.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf_debug.DumpingDebugWrapperSession.md  140
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf_debug.watch_graph_with_blacklists.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.AggregationMethod.md  10
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.ConditionalAccumulatorBase.md  79
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md  299
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.IdentityReader.md  175
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.NoGradient.md  32
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Print.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md  940
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Variable.from_proto.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.accumulate_n.md  40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.all_variables.md  8
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_less_equal.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_rank_at_least.md  33
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assign.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_to_space.md  100
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant_initializer.md  86
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.md  58
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.bayesflow.variational_inference.elbo.md  71
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.crf.crf_log_likelihood.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormalDiag.md  771
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.md  619
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.QuantizedDistribution.md  740
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.StudentT.md  683
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.md  583
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.TransformedDistribution.md  710
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_graph_from_inputs.md  32
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_local_variables.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_variables_by_name.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.init_from_checkpoint.md  72
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.compute_boundary_ts.md  32
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.get_name_scope_ops.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.make_list_of_t.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.reroute_ios.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.transform_op_if_inside_handler.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.integrate.odeint.md  90
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.embedding_column.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.flatten.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.make_place_holder_tensors_for_base_features.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_tensors.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.LinearClassifier.md  467
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.PredictionKey.md  1
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_data.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.infer_real_valued_columns_from_input_fn.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.monitors.SummaryWriterCache.get.md  13
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.monitors.ValidationMonitor.md  242
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.train.md  75
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.linalg.LinearOperatorComposition.md  536
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.linalg.LinearOperatorIdentity.md  562
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.absolute_difference.md  35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.compute_weighted_loss.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.hinge_loss.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_squared_error.md  51
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_sparse_recall_at_k.md  74
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.opt.VariableClippingOptimizer.md  66
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.BasicLSTMCell.md  72
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.GRUCell.md  51
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.GridLSTMCell.md  134
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.TimeReversedFusedRNN.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.static_state_saving_rnn.md  33
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.training.resample_at_rate.md  36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.util.constant_value.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.convert_to_tensor.md  50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.diag.md  33
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.divide.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.einsum.md  74
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.erf.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.AlreadyExistsError.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.fake_quant_with_min_max_args_gradient.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.get_default_session.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.gradients.md  48
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.greater_equal.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.igammac.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.adjust_hue.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.central_crop.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_hue.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_bicubic.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.invert_permutation.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.is_strictly_increasing.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.linspace.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.log1p.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.map_fn.md  96
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.matching_files.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.negative.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.compute_accidental_hits.md  45
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.conv1d.md  50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.embedding_lookup_sparse.md  76
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.erosion2d.md  52
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.moments.md  35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.normalize_moments.md  20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sampled_softmax_loss.md  49
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.parse_single_example.md  38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.qr.md  36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.random_uniform_initializer.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.read_file.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_any.md  40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_join.md  46
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_logsumexp.md  42
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.sparse_minimum.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.string_to_hash_bucket_strong.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.Benchmark.md  61
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.get_temp_dir.md  10
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md  176
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.ClusterSpec.md  210
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.GradientDescentOptimizer.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LoggingTensorHook.md  85
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LooperThread.md  222
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md  128
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md  44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.NanTensorHook.md  80
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.Server.create_local_server.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.generate_checkpoint_state_proto.md  20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md  41
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.shuffle_batch_join.md  77
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unsorted_segment_max.md  38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unstack.md  42
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.zeros.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.zeros_like.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf_debug.LocalCLIDebugHook.md  256
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md  250
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.FIFOQueue.from_list.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Graph.md  885
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md  346
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.SparseFeature.__new__.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.SparseTensorValue.__new__.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TFRecordReader.md  173
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TextLineReader.md  175
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.WholeFileReader.md  175
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.assert_non_negative.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.betainc.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cholesky_solve.md  35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.constant.md  53
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.bayesflow.monte_carlo.expectation.md  56
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.copy_graph.get_copied_op.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.crf.CrfForwardRnnCell.md  73
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.crf.crf_binary_score.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Categorical.md  629
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Chi2.md  612
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.ConditionalDistribution.md  476
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.ReparameterizationType.md  47
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Uniform.md  625
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.WishartCholesky.md  673
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.bijector.Bijector.md  509
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.arg_scope.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.assert_scalar_int.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.deprecated_arg_values.md  35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.get_unique_variable.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.get_variables_to_restore.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.load_checkpoint.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.with_same_shape.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.SubGraphView.md  472
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.TransformerInfo.md  67
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.copy.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.filter_ts_from_regex.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.keep_t_if_possible_handler.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.infer_real_valued_columns.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md  83
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.repeat.md  36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.safe_embedding_lookup_sparse.md  50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.shared_embedding_columns.md  48
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.weighted_sum_from_feature_columns.md  54
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.BaseEstimator.md  305
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ModeKeys.md  7
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ModelFnOps.__new__.md  54
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ProblemType.md  10
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.CaptureVariable.md  199
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.ExportMonitor.md  248
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.GraphDump.md  163
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.NanLoss.md  184
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.StepCounter.md  171
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.embedding_attention_decoder.md  52
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.embedding_attention_seq2seq.md  50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.rnn_decoder.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.set_difference.md  64
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_auc.md  64
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_covariance.md  55
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_false_negatives_at_thresholds.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean.md  44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean_relative_error.md  52
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_sparse_precision_at_k.md  77
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.opt.ScipyOptimizerInterface.md  87
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.rnn.GRUBlockCell.md  84
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.rnn.static_bidirectional_rnn.md  48
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.digamma.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.edit_distance.md  65
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.encode_base64.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.ResourceExhaustedError.md  12
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.expand_dims.md  54
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floor_div.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.gather_nd.md  110
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_variable_scope.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.global_variables_initializer.md  10
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.identity.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.imag.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.crop_and_resize.md  50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.encode_jpeg.md  51
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_all_variables.md  8
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_local_variables.md  8
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.is_variable_initialized.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.local_variables_initializer.md  10
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matmul.md  90
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.minimum.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.conv3d_backprop_filter_v2.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.depthwise_conv2d_native_backprop_input.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.embedding_lookup.md  56
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.log_uniform_candidate_sampler.md  56
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.relu.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.parallel_stack.md  41
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_uniform.md  41
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.real.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.report_uninitialized_variables.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.required_space_to_batch_paddings.md  35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scatter_nd_sub.md  61
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scatter_update.md  46
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.setdiff1d.md  41
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.shape_n.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sin.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.space_to_batch_nd.md  137
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sparse_placeholder.md  43
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.split.md  53
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.squeeze.md  44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.string_split.md  43
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.SummaryDescription.md  245
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.audio.md  35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.tensor_summary.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.tables_initializer.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.TestCase.md  875
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.assert_equal_graph_def.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.compute_gradient.md  42
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.is_built_with_cuda.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_int32.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.CheckpointSaverHook.md  77
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.from_proto.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.SessionCreator.md  8
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.basic_train_loop.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.global_step.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.latest_checkpoint.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md  42
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.natural_exp_decay.md  56
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md  86
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.start_queue_runners.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.uniform_unit_scaling_initializer.md  38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.variables_initializer.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.write_file.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md  305
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RegisterGradient.md  45
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md  248
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_rank.md  32
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_type.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ceil.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.check_numerics.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.bayesflow.stochastic_tensor.SampleValue.md  80
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.bayesflow.variational_inference.ELBOForms.check_form.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.copy_graph.copy_op_to_graph.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Binomial.md  687
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md  726
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.ExpRelaxedOneHotCategorical.md  688
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Exponential.md  608
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Gamma.md  639
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.InverseGamma.md  653
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Multinomial.md  697
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.NormalWithSoftplusScale.md  559
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.OneHotCategorical.md  637
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.ffmpeg.encode_audio.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.add_model_variable.md  9
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.get_model_variables.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.local_variable.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.copy_op_handler.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.detach_outputs.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.filter_ops_from_regex.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.filter_ts.md  20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.op_type.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.reroute_inputs.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.select_ts.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.swap_outputs.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.crossed_column.md  42
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.separable_convolution2d.md  52
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sparse_column_with_integerized_feature.md  43
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_activation.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.variance_scaling_initializer.md  54
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.Estimator.md  397
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.extract_dask_data.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.StopAtStep.md  154
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.SummarySaver.md  175
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.SummaryWriterCache.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.legacy_seq2seq.basic_rnn_seq2seq.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.legacy_seq2seq.tied_rnn_seq2seq.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.linalg.LinearOperatorScaledIdentity.md  543
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.linalg.LinearOperatorUDVHUpdate.md  600
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.add_loss.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.cosine_distance.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.get_regularization_losses.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.auc_using_histogram.md  38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_concat.md  44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_true_negatives_at_thresholds.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.CoupledInputForgetGateLSTMCell.md  127
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.DropoutWrapper.md  69
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.InputProjectionWrapper.md  67
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.LSTMBlockWrapper.md  49
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.bucket.md  86
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.rejection_sample.md  57
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.stratified_sample.md  58
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.control_dependencies.md  20
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.convert_to_tensor_or_indexed_slices.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_csv.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_raw.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.OutOfRangeError.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.UnauthenticatedError.md  11
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.exp.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.fake_quant_with_min_max_vars.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.fake_quant_with_min_max_vars_per_channel.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.foldl.md  44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.global_variables.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.group.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ifft2d.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_contrast.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_saturation.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.convert_image_dtype.md  32
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.decode_png.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_contrast.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_saturation.md  27
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.local_variables.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.logical_xor.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.multinomial.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.dropout.md  38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fractional_max_pool.md  81
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_softmax.md  27
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.max_pool.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.quantized_max_pool.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.quantized_relu_x.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.softsign.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.parse_tensor.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.placeholder.md  34
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.random_gamma.md  65
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.random_shuffle.md  29
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_min.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.register_tensor_conversion_function.md  44
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_mul.md  42
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_nd_add.md  61
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_mean.md  32
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.shape.md  25
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_concat.md  102
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_softmax.md  52
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_split.md  43
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.stop_gradient.md  34
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.substr.md  92
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.svd.md  47
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.compute_gradient_error.md  38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.gpu_device_name.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.test_src_dir_path.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_double.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.trace.md  41
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md  181
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.QueueRunner.md  175
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Server.md  129
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.SyncReplicasOptimizer.md  268
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.assert_global_step.md  9
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.batch_join.md  88
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.get_checkpoint_mtimes.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.range_input_producer.md  28
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.truncatediv.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unique_with_counts.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.DebugTensorDatum.md  146
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.LocalCLIDebugWrapperSession.md  207
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.add_debug_tensor_watch.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.load_tensor_from_event_file.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.Assert.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ConditionalAccumulator.md  136
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.FixedLengthRecordReader.md  175
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.argmin.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.assert_less.md  30
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.broadcast_static_shape.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.clip_by_value.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.complex.md  31
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.md  111
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.bayesflow.variational_inference.ELBOForms.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.crf.viterbi_decode.md  19
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.bijector.Invert.md  307
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.bijector.Softplus.md  295
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.normal_conjugates_known_scale_predictive.md  55
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.VariableDeviceChooser.md  36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.add_arg_scope.md  13
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.load_variable.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.OpMatcher.md  36
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.get_consuming_ops.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.graph_replace.md  26
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.make_list_of_op.md  24
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.reroute_outputs.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.sgv_scope.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.avg_pool2d.md  32
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md  83
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.check_feature_columns.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.one_hot_column.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.regression_target.md  22
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.unit_norm.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.DNNClassifier.md  467
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.DNNLinearCombinedClassifier.md  493
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.Evaluable.md  77
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.ExportStrategy.md  89
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.RunConfig.md  163
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.evaluate.md  57
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.infer_real_valued_columns_from_input.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.monitors.SummaryWriterCache.clear.md  4
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.monitors.get_default_monitors.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.read_batch_features.md  46
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.run_feeds.md  8
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.legacy_seq2seq.one2many_rnn_seq2seq.md  51
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.losses.mean_squared_error.md  35
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.set_union.md  63
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_precision_at_thresholds.md  50
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_recall_at_k.md  55
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_sparse_precision_at_top_k.md  75
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_true_positives.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.DeviceWrapper.md  62
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md  67
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.TimeFreqLSTMCell.md  100
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.training.NextQueuedSequenceBatch.md  265
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.training.batch_sequences_with_states.md  167
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.stripped_op_list_for_graph.md  23
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.decode_base64.md  17
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.FailedPreconditionError.md  13
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.extract_image_patches.md  40
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.fixed_size_partitioner.md  15
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.floor.md  14
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.greater.md  18
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.histogram_fixed_width.md  38
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.hsv_to_rgb.md  21
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.initialize_variables.md  8
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.log.md  16
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv2d_backprop_input.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv2d_transpose.md  37
-rw-r--r--  tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.ctc_beam_search_decoder.md  42
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md45
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d_native.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.dilation2d.md50
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.l2_loss.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.log_poisson_loss.md44
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool3d.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.nce_loss.md58
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.softplus.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sparse_softmax_cross_entropy_with_logits.md50
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.placeholder_with_default.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.python_io.TFRecordCompressionType.md1
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reduce_all.md40
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reduce_mean.md40
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.segment_max.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.self_adjoint_eigvals.md16
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_add.md55
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_to_indicator.md52
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.stack.md44
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.string_to_hash_bucket_fast.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.summary.SummaryDescription.RegisterExtension.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.tile.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md131
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.WorkerSessionCreator.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.match_filenames_once.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md42
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.update_checkpoint_state.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.write_graph.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.unsorted_segment_sum.md38
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.while_loop.md117
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.DebugDumpDir.md548
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.DumpingDebugHook.md185
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.has_inf_or_nan.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md66
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md312
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.add.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.asin.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_greater.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_integer.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.boolean_mask.md43
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.broadcast_dynamic_shape.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.cast.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_global_norm.md50
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_norm.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.container.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.bayesflow.stochastic_graph.surrogate_loss.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.crf.crf_log_norm.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Affine.md399
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Chain.md324
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Exp.md305
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.arg_scoped_arguments.md13
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.get_variables.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.list_variables.md13
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.variable.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.bypass.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.can_be_regex.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.detach_control_outputs.md11
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.get_walks_intersection_ops.md39
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.make_regex.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.remove_control_inputs.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.convolution2d.md75
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.fully_connected.md50
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.legacy_fully_connected.md73
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.multi_class_target.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.one_hot_encoding.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.scattered_embedding_column.md51
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.sum_regularizer.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.summarize_collection.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.BaseMonitor.md187
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.CheckpointSaver.md146
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.RunHookAdapterForMonitors.md57
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.run_n.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.legacy_seq2seq.sequence_loss_by_example.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.losses.sigmoid_cross_entropy.md39
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.accuracy.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_false_positives.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_percentage_less.md43
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_recall_at_thresholds.md48
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.rnn.EmbeddingWrapper.md71
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.rnn.RNNCell.md85
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.cos.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.count_nonzero.md43
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.diag_part.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.div.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.PermissionDeniedError.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.UnavailableError.md11
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.raise_exception_on_not_ok_status.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.floordiv.md32
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.decode_image.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.flip_left_right.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.flip_up_down.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.pad_to_bounding_box.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_image_with_crop_or_pad.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_nearest_neighbor.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.rot90.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.sample_distorted_bounding_box.md89
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.total_variation.md40
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.load_file_system_library.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_and.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_not.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.make_template.md111
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.model_variables.md8
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.atrous_conv2d_transpose.md43
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv3d.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.sigmoid_cross_entropy_with_logits.md49
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.top_k.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.no_op.md13
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.range.md52
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reverse_v2.md64
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.scatter_nd.md94
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sign.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_maximum.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.FileWriterCache.get.md13
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.SummaryDescription.FromString.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.image.md47
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.tan.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.test.main.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.to_int64.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.ChiefSessionCreator.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md185
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.LooperThread.loop.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.NewCheckpointReader.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.Optimizer.md265
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.Saver.md372
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.SessionManager.md209
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.checkpoint_exists.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md41
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.piecewise_constant.md41
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.polynomial_decay.md78
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.replica_device_setter.md63
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.transpose.md49
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truncated_normal_initializer.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_op_scope.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.verify_tensor_all_finite.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Dimension.md361
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.FixedLenSequenceFeature.md59
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md313
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md305
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.SparseConditionalAccumulator.md209
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.abs.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.as_string.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_positive.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.bitcast.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.concat.md58
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.conj.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.entropy.elbo_ratio.md68
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.entropy.renyi_alpha.md38
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler.md43
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.stochastic_tensor.value_type.md35
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_variable_to_graph.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.crf.crf_sequence_score.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.crf.crf_unary_score.md16
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.BernoulliWithSigmoidProbs.md563
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Beta.md689
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Laplace.md607
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.LaplaceWithSoftplusScale.md559
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Logistic.md637
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.bijector.AffineLinearOperator.md358
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.bijector.Identity.md283
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.assert_scalar.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.assign_from_values.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.create_global_step.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.deprecated.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.reduce_sum_n.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.assign_renamed_collections_handler.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.detach_control_inputs.md10
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.get_forward_walk_ops.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.get_tensors.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.replace_t_with_placeholder_handler.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.select_ops_and_ts.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.swap_ts.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.bucketized_column.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.embed_sequence.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.summarize_activations.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.ExportStrategy.__new__.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.MetricSpec.md181
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.NotFittedError.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.RunConfig.get_task_id.md12
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.extract_dask_labels.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.monitors.PrintTensor.md187
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.read_batch_examples.md46
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.linalg.LinearOperatorMatrix.md519
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.losses.log_loss.md36
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.set_intersection.md63
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_false_positives_at_thresholds.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_mean_iou.md51
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_recall.md47
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_root_mean_squared_error.md51
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.rnn.LSTMStateTuple.md54
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.rnn.LayerNormBasicLSTMCell.md84
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_tensor_proto.md46
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.cumsum.md42
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.decode_json_example.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.dequantize.md52
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.dynamic_partition.md54
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.erfc.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.AbortedError.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.InternalError.md12
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.NotFoundError.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.UnimplementedError.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fake_quant_with_min_max_args.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.igamma.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.decode_gif.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.extract_glimpse.md56
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.rgb_to_hsv.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.import_graph_def.md50
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.initialize_all_tables.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.load_op_library.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.maximum.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.min_max_variable_partitioner.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.moving_average_variables.md13
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.multiply.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.batch_norm_with_global_normalization.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.ctc_greedy_decoder.md40
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.depthwise_conv2d_native_backprop_filter.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.elu.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.separable_conv2d.md54
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.softmax.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.with_space_to_batch.md133
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.one_hot.md131
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.op_scope.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.parse_example.md197
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.pow.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.python_io.tf_record_iterator.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_crop.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_normal_initializer.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.rank.md32
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reciprocal.md16
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.self_adjoint_eig.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sigmoid.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.slice.md47
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.space_to_depth.md87
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reduce_sum_sparse.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reset_shape.md60
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_mean.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_sum.md50
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_transpose.md40
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.string_join.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.TaggedRunMetadata.RegisterExtension.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.get_summary_description.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.scalar.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.to_float.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md206
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md266
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.QueueRunner.from_proto.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.RMSPropOptimizer.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Scaffold.md144
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.SessionRunContext.md57
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.StopAtStepHook.md85
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Supervisor.md859
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.exponential_decay.md60
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.slice_input_producer.md35
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.trainable_variables.md13
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.truncated_normal.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf_debug.watch_graph.md32
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.DeviceSpec.from_string.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.FixedLenFeature.md59
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md234
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.PaddingFIFOQueue.from_list.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.QueueBase.from_list.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.as_dtype.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_equal.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_variables_initialized.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.bayesflow.entropy.renyi_ratio.md103
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ConditionalTransformedDistribution.md489
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ExponentialWithSoftplusRate.md565
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.md792
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.Normal.md639
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.RelaxedBernoulli.md706
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.bijector.Inline.md308
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.bijector.PowerTransform.md301
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.kl.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.assert_or_get_global_step.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.assign_from_values_fn.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.filter_variables.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.get_variables_by_suffix.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.has_arg_scope.md13
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.with_shape.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.make_view.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.ph.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.sgv.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.convolution2d_transpose.md52
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.input_from_feature_columns.md58
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.layer_norm.md39
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.sequence_input_from_feature_columns.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.sparse_column_with_hash_bucket.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.InputFnOps.md64
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.build_parsing_serving_input_fn.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.monitors.EveryN.md232
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.legacy_seq2seq.model_with_buckets.md43
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.losses.softmax_cross_entropy.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_mean_absolute_error.md51
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_precision.md49
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sensitivity_at_specificity.md56
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_specificity_at_sensitivity.md56
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_true_positives_at_thresholds.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.opt.ExternalOptimizerInterface.md56
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.BasicRNNCell.md51
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.FusedRNNCellAdaptor.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.LSTMBlockFusedCell.md72
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.LSTMCell.md124
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.ResidualWrapper.md73
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.util.make_ndarray.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.count_up_to.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.dynamic_stitch.md59
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.error_code_from_exception_type.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.exception_type_from_error_code.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.fft3d.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_collection.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_collection_ref.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_session_tensor.md36
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ifft.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.non_max_suppression.md45
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.random_flip_left_right.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.resize_area.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.less.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.lgamma.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.logical_or.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_band_part.md61
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_diag.md42
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_diag_part.md45
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve_ls.md56
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_transpose.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.atrous_conv2d.md115
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.avg_pool.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.conv3d_transpose.md35
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.fixed_unigram_candidate_sampler.md75
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.fractional_avg_pool.md57
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.in_top_k.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.local_response_normalization.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.quantized_avg_pool.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softmax_cross_entropy_with_logits.md41
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.weighted_moments.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ones_like.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.polygamma.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md55
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_normal.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reduce_max.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.rint.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.scatter_nd_update.md60
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.scatter_sub.md44
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.segment_prod.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.segment_sum.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_reduce_sum.md43
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_tensor_dense_matmul.md165
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.square.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.string_to_hash_bucket.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md209
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriterCache.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.MomentumOptimizer.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.ProximalGradientDescentOptimizer.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SessionRunValues.md66
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.export_meta_graph.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.get_checkpoint_state.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.get_global_step.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.inverse_time_decay.md56
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.truediv.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.unique.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.variable_scope.md100
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.where.md47
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.FixedLenSequenceFeature.__new__.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.GraphKeys.md44
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md102
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.RandomShuffleQueue.from_list.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md416
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.VarLenFeature.md39
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Variable.md1156
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.acos.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmax.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_negative.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_proper_iterable.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assign_sub.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.atan.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_to_space_nd.md136
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.bayesflow.stochastic_tensor.MeanValue.md36
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.bayesflow.variational_inference.elbo_with_log_joint.md35
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Mixture.md659
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.bijector.SoftmaxCentered.md298
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.matrix_diag_transform.md53
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.normal_conjugates_known_scale_posterior.md48
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.assert_same_float_dtype.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.get_variable_full_name.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.zero_initializer.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.ControlOutputs.md52
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.Transformer.md64
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.check_cios.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.copy_with_input_replacements.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.detach.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.get_backward_walk_ops.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.get_ops_ios.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.apply_regularization.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.conv2d_in_plane.md49
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.max_pool2d.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.parse_feature_columns_from_examples.md52
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.parse_feature_columns_from_sequence_examples.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.summarize_tensor.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.xavier_initializer_conv2d.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.Experiment.md229
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.KMeansClustering.md413
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.TaskType.md1
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.extract_pandas_labels.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.linalg.LinearOperatorTriL.md521
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.losses.sparse_softmax_cross_entropy.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.confusion_matrix.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_accuracy.md50
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_mean_cosine_distance.md46
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_pearson_correlation.md52
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_sparse_average_precision_at_k.md57
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_true_negatives.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.opt.MovingAverageOptimizer.md217
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.FusedRNNCell.md47
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.LSTMStateTuple.__new__.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.static_rnn.md65
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.training.bucket_by_sequence_length.md55
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cross.md22
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.equal.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.hessians.md36
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.crop_to_bounding_box.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.draw_bounding_boxes.md32
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.per_image_standardization.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bilinear.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_images.md41
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.transpose_image.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.is_inf.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.lbeta.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less_equal.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.matrix_inverse.md32
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.matrix_set_diag.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.batch_normalization.md47
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md84
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.conv2d.md49
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.convolution.md116
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md102
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.learned_unigram_candidate_sampler.md53
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.pool.md80
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.quantized_conv2d.md39
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.relu6.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sufficient_statistics.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.weighted_cross_entropy_with_logits.md52
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.no_regularizer.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.ones_initializer.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.python_io.TFRecordOptions.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.quantized_concat.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_prod.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reset_default_graph.md10
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reverse.md64
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.round.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.rsqrt.md16
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_add.md46
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_div.md42
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sequence_mask.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.set_random_seed.md98
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_fill_empty_rows.md54
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_reorder.md41
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_retain.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_segment_sqrt_n.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_to_dense.md45
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.strided_slice.md86
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.subtract.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.summary.histogram.md25
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.summary.merge.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.tensordot.md54
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md194
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Scaffold.get_or_default.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.SessionRunArgs.md64
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.SummarySaverHook.md79
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.batch.md81
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.do_quantize_training_on_graphdef.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.import_meta_graph.md70
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.truncatemod.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.variable_axis_size_partitioner.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.zeta.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.DeviceSpec.md147
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.FixedLenFeature.__new__.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.NotDifferentiable.md32
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Session.reset.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.assign_add.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.clip_by_average_norm.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.entropy.entropy_shannon.md38
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace.md45
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.stochastic_tensor.get_current_value_type.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.variational_inference.register_prior.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.BetaWithSoftplusConcentration.md597
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.MultivariateNormalTriL.md750
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.Poisson.md613
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.WishartFull.md669
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.softplus_inverse.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.ffmpeg.decode_audio.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.assign_from_checkpoint_fn.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.get_or_create_global_step.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.is_tensor.md16
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.model_variable.md34
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.remove_squeezable_dimensions.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.connect.md27
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_generating_ops.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_walks_union_ops.md38
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_within_boundary_ops.md32
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.make_view_from_scope.md14
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.reroute_ts.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.create_feature_spec_for_parsing.md42
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.joint_weighted_sum_from_feature_columns.md35
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.l1_regularizer.md21
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.xavier_initializer.md29
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.DNNLinearCombinedRegressor.md408
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.DNNRegressor.md393
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.InputFnOps.__new__.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.LogisticRegressor.md45
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.Trainable.md45
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.infer.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.monitors.LoggingTrainable.md184
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.legacy_seq2seq.embedding_tied_rnn_seq2seq.md53
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.linalg.LinearOperator.md553
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.losses.get_total_loss.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.losses.mean_pairwise_squared_error.md49
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.aggregate_metric_map.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.aggregate_metrics.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_false_negatives.md36
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.rnn.CompiledWrapper.md58
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.training.SequenceQueueingStateSaver.md270
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnknownError.md15
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.expm1.md16
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.eye.md36
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.fill.md31
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.foldr.md44
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.gather.md37
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_default_graph.md17
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_local_variable.md87
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_variable.md86
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.encode_png.md28
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.random_flip_up_down.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matrix_determinant.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matrix_triangular_solve.md44
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.meshgrid.md45
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.bias_add.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.crelu.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.fused_batch_norm.md33
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.max_pool_with_argmax.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md170
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.uniform_candidate_sampler.md49
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.zero_fraction.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ones.md24
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.pad.md57
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_poisson.md38
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.realdiv.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.saturate_cast.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scalar_mul.md23
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scan.md92
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.size.md26
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.space_to_batch.md110
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_mask.md40
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_merge.md98
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_reshape.md51
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.squared_difference.md18
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.string_to_number.md20
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.summary.TaggedRunMetadata.FromString.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.tanh.md16
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.test.is_gpu_available.md13
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.to_bfloat16.md19
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.FeedFnHook.md88
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.FinalOpsHook.md111
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.NanLossDuringTrainingError.md8
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ProximalAdagradOptimizer.md30
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.SessionRunValues.__new__.md4
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.StepCounterHook.md65
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.input_producer.md42
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.string_input_producer.md36
-rw-r--r--tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.summary_iterator.md42
-rw-r--r--tensorflow/g3doc/api_docs/python/histogram_ops.md48
-rw-r--r--tensorflow/g3doc/api_docs/python/image.md1415
-rw-r--r--tensorflow/g3doc/api_docs/python/index.md1204
-rw-r--r--tensorflow/g3doc/api_docs/python/io_ops.md4575
-rw-r--r--tensorflow/g3doc/api_docs/python/math_ops.md3672
-rw-r--r--tensorflow/g3doc/api_docs/python/nn.md3634
-rw-r--r--tensorflow/g3doc/api_docs/python/python_io.md117
-rw-r--r--tensorflow/g3doc/api_docs/python/script_ops.md65
-rw-r--r--tensorflow/g3doc/api_docs/python/session_ops.md116
-rw-r--r--tensorflow/g3doc/api_docs/python/sparse_ops.md1439
-rw-r--r--tensorflow/g3doc/api_docs/python/state_ops.md3657
-rw-r--r--tensorflow/g3doc/api_docs/python/string_ops.md392
-rw-r--r--tensorflow/g3doc/api_docs/python/summary.md1004
-rw-r--r--tensorflow/g3doc/api_docs/python/tensor_array_ops.md297
-rw-r--r--tensorflow/g3doc/api_docs/python/test.md1133
-rw-r--r--tensorflow/g3doc/api_docs/python/tf_debug.md1659
-rw-r--r--tensorflow/g3doc/api_docs/python/train.md6664
1166 files changed, 0 insertions, 194876 deletions
diff --git a/tensorflow/g3doc/api_docs/python/array_ops.md b/tensorflow/g3doc/api_docs/python/array_ops.md
deleted file mode 100644
index 1c7749edfd..0000000000
--- a/tensorflow/g3doc/api_docs/python/array_ops.md
+++ /dev/null
@@ -1,3168 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Tensor Transformations
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-## Casting
-
-TensorFlow provides several operations that you can use to cast tensor data
-types in your graph.
-
-- - -
-
-### `tf.string_to_number(string_tensor, out_type=None, name=None)` {#string_to_number}
-
-Converts each string in the input Tensor to the specified numeric type.
-
-(Note that int32 overflow results in an error while float overflow
-results in a rounded value.)
-
-##### Args:
-
-
-* <b>`string_tensor`</b>: A `Tensor` of type `string`.
-* <b>`out_type`</b>: An optional `tf.DType` from: `tf.float32, tf.int32`. Defaults to `tf.float32`.
- The numeric type to interpret each string in `string_tensor` as.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
- A Tensor of the same shape as the input `string_tensor`.
-
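-For illustration, a minimal graph-mode sketch (assuming the TF 1.x
-`Session` API, not part of the generated reference):
-
-```python
-import tensorflow as tf
-
-s = tf.constant(["1.5", "42"])
-f = tf.string_to_number(s)  # out_type defaults to tf.float32
-i = tf.string_to_number(tf.constant(["3", "7"]), out_type=tf.int32)
-with tf.Session() as sess:
-    print(sess.run(f))  # [  1.5  42. ]
-    print(sess.run(i))  # [3 7]
-```
-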
-
-- - -
-
-### `tf.to_double(x, name='ToDouble')` {#to_double}
-
-Casts a tensor to type `float64`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `float64`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `float64`.
-
-
-- - -
-
-### `tf.to_float(x, name='ToFloat')` {#to_float}
-
-Casts a tensor to type `float32`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `float32`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `float32`.
-
-
-- - -
-
-### `tf.to_bfloat16(x, name='ToBFloat16')` {#to_bfloat16}
-
-Casts a tensor to type `bfloat16`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `bfloat16`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `bfloat16`.
-
-
-- - -
-
-### `tf.to_int32(x, name='ToInt32')` {#to_int32}
-
-Casts a tensor to type `int32`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `int32`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `int32`.
-
-
-- - -
-
-### `tf.to_int64(x, name='ToInt64')` {#to_int64}
-
-Casts a tensor to type `int64`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `int64`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `int64`.
-
-
-- - -
-
-### `tf.cast(x, dtype, name=None)` {#cast}
-
-Casts a tensor to a new type.
-
-The operation casts `x` (in case of `Tensor`) or `x.values`
-(in case of `SparseTensor`) to `dtype`.
-
-For example:
-
-```python
-# tensor `a` is [1.8, 2.2], dtype=tf.float32
-tf.cast(a, tf.int32) ==> [1, 2] # dtype=tf.int32
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`dtype`</b>: The destination type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to the `dtype`.
-
-
-- - -
-
-### `tf.bitcast(input, type, name=None)` {#bitcast}
-
-Bitcasts a tensor from one type to another without copying data.
-
-Given a tensor `input`, this operation returns a tensor that has the same buffer
-data as `input` with datatype `type`.
-
-If the input datatype `T` is larger than the output datatype `type` then the
-shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].
-
-If `T` is smaller than `type`, the operator requires that the rightmost
-dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from
-[..., sizeof(`type`)/sizeof(`T`)] to [...].
-
-*NOTE*: Bitcast is implemented as a low-level cast, so machines with different
-endian orderings will give different results.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`type`</b>: A `tf.DType` from: `tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.int16, tf.int8, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint32, tf.half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `type`.
-
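-A hedged sketch of the shape rule above (TF 1.x style): bitcasting
-`float32` (4 bytes) to `uint8` (1 byte) appends a dimension of size 4,
-and the reverse cast removes it.
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1.0, 2.0], dtype=tf.float32)  # shape [2]
-y = tf.bitcast(x, tf.uint8)     # shape [2, 4]: sizeof(float32)/sizeof(uint8) == 4
-z = tf.bitcast(y, tf.float32)   # rightmost dimension of 4 is absorbed; shape [2]
-with tf.Session() as sess:
-    print(sess.run(y).shape)  # (2, 4)
-    print(sess.run(z))        # [ 1.  2.]
-```
-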
-
-- - -
-
-### `tf.saturate_cast(value, dtype, name=None)` {#saturate_cast}
-
-Performs a safe saturating cast of `value` to `dtype`.
-
-This function casts the input to `dtype` without applying any scaling. If
-there is a danger that values would over or underflow in the cast, this op
-applies the appropriate clamping before the cast.
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`.
-* <b>`dtype`</b>: The desired output `DType`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `value` safely cast to `dtype`.
-
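-For example, values outside the `uint8` range are clamped before the
-cast (a minimal TF 1.x sketch):
-
-```python
-import tensorflow as tf
-
-v = tf.constant([-5.0, 100.0, 300.0])
-c = tf.saturate_cast(v, tf.uint8)  # clamps to [0, 255], then casts
-with tf.Session() as sess:
-    print(sess.run(c))  # [  0 100 255]
-```
-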
-
-
-## Shapes and Shaping
-
-TensorFlow provides several operations that you can use to determine the shape
-of a tensor and change the shape of a tensor.
-
-- - -
-
-### `tf.broadcast_dynamic_shape(shape_x, shape_y)` {#broadcast_dynamic_shape}
-
-Returns the broadcasted dynamic shape between `shape_x` and `shape_y`.
-
-##### Args:
-
-
-* <b>`shape_x`</b>: A rank 1 integer `Tensor`, representing the shape of x.
-* <b>`shape_y`</b>: A rank 1 integer `Tensor`, representing the shape of y.
-
-##### Returns:
-
- A rank 1 integer `Tensor` representing the broadcasted shape.
-
-
-- - -
-
-### `tf.broadcast_static_shape(shape_x, shape_y)` {#broadcast_static_shape}
-
-Returns the broadcasted static shape between `shape_x` and `shape_y`.
-
-##### Args:
-
-
-* <b>`shape_x`</b>: A `TensorShape`
-* <b>`shape_y`</b>: A `TensorShape`
-
-##### Returns:
-
- A `TensorShape` representing the broadcasted shape.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the two shapes can not be broadcasted.
-
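-A short sketch contrasting the two variants (the static form works on
-`TensorShape`s at graph construction time; the dynamic form returns a
-`Tensor`):
-
-```python
-import tensorflow as tf
-
-print(tf.broadcast_static_shape(tf.TensorShape([1, 3]),
-                                tf.TensorShape([2, 1])))  # (2, 3)
-
-dyn = tf.broadcast_dynamic_shape(tf.constant([1, 3]), tf.constant([2, 1]))
-with tf.Session() as sess:
-    print(sess.run(dyn))  # [2 3]
-```
-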
-
-- - -
-
-### `tf.shape(input, name=None, out_type=tf.int32)` {#shape}
-
-Returns the shape of a tensor.
-
-This operation returns a 1-D integer tensor representing the shape of `input`.
-
-For example:
-
-```python
-# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
-shape(t) ==> [2, 2, 3]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`out_type`</b>: (Optional) The specified output type of the operation
- (`int32` or `int64`). Defaults to `tf.int32`.
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
-
-
-- - -
-
-### `tf.shape_n(input, out_type=None, name=None)` {#shape_n}
-
-Returns shape of tensors.
-
-This operation returns N 1-D integer tensors representing the shapes of the
-tensors in `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A list of at least 1 `Tensor` objects of the same type.
-* <b>`out_type`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A list of `Tensor` objects of type `out_type`, one for each tensor in `input`.
-
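-For example (a minimal TF 1.x sketch):
-
-```python
-import tensorflow as tf
-
-a = tf.zeros([2, 3])
-b = tf.zeros([4])
-sa, sb = tf.shape_n([a, b])  # one shape tensor per input
-with tf.Session() as sess:
-    print(sess.run([sa, sb]))  # [array([2, 3], dtype=int32), array([4], dtype=int32)]
-```
-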
-
-- - -
-
-### `tf.size(input, name=None, out_type=tf.int32)` {#size}
-
-Returns the size of a tensor.
-
-This operation returns an integer representing the number of elements in
-`input`.
-
-For example:
-
-```python
-# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
-size(t) ==> 12
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`out_type`</b>: (Optional) The specified output type of the operation
- (`int32` or `int64`). Defaults to tf.int32.
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
-
-
-- - -
-
-### `tf.rank(input, name=None)` {#rank}
-
-Returns the rank of a tensor.
-
-This operation returns an integer representing the rank of `input`.
-
-For example:
-
-```python
-# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
-# shape of tensor 't' is [2, 2, 3]
-rank(t) ==> 3
-```
-
-**Note**: The rank of a tensor is not the same as the rank of a matrix. The
-rank of a tensor is the number of indices required to uniquely select each
-element of the tensor. Rank is also known as "order", "degree", or "ndims."
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int32`.
-
-@compatibility(numpy)
-Equivalent to np.ndim
-@end_compatibility
-
-
-- - -
-
-### `tf.reshape(tensor, shape, name=None)` {#reshape}
-
-Reshapes a tensor.
-
-Given `tensor`, this operation returns a tensor that has the same values
-as `tensor` with shape `shape`.
-
-If one component of `shape` is the special value -1, the size of that dimension
-is computed so that the total size remains constant. In particular, a `shape`
-of `[-1]` flattens into 1-D. At most one component of `shape` can be -1.
-
-If `shape` is 1-D or higher, then the operation returns a tensor with shape
-`shape` filled with the values of `tensor`. In this case, the number of elements
-implied by `shape` must be the same as the number of elements in `tensor`.
-
-For example:
-
-```prettyprint
-# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
-# tensor 't' has shape [9]
-reshape(t, [3, 3]) ==> [[1, 2, 3],
- [4, 5, 6],
- [7, 8, 9]]
-
-# tensor 't' is [[[1, 1], [2, 2]],
-# [[3, 3], [4, 4]]]
-# tensor 't' has shape [2, 2, 2]
-reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
- [3, 3, 4, 4]]
-
-# tensor 't' is [[[1, 1, 1],
-# [2, 2, 2]],
-# [[3, 3, 3],
-# [4, 4, 4]],
-# [[5, 5, 5],
-# [6, 6, 6]]]
-# tensor 't' has shape [3, 2, 3]
-# pass '[-1]' to flatten 't'
-reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
-
-# -1 can also be used to infer the shape
-
-# -1 is inferred to be 9:
-reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
- [4, 4, 4, 5, 5, 5, 6, 6, 6]]
-# -1 is inferred to be 2:
-reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
- [4, 4, 4, 5, 5, 5, 6, 6, 6]]
-# -1 is inferred to be 3:
-reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1],
- [2, 2, 2],
- [3, 3, 3]],
- [[4, 4, 4],
- [5, 5, 5],
- [6, 6, 6]]]
-
-# tensor 't' is [7]
-# shape `[]` reshapes to a scalar
-reshape(t, []) ==> 7
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`.
-* <b>`shape`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- Defines the shape of the output tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`.
-
-
-- - -
-
-### `tf.squeeze(input, axis=None, name=None, squeeze_dims=None)` {#squeeze}
-
-Removes dimensions of size 1 from the shape of a tensor.
-
-Given a tensor `input`, this operation returns a tensor of the same type with
-all dimensions of size 1 removed. If you don't want to remove all size 1
-dimensions, you can remove specific size 1 dimensions by specifying
-`axis`.
-
-For example:
-
-```prettyprint
-# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
-shape(squeeze(t)) ==> [2, 3]
-```
-
-Or, to remove specific size 1 dimensions:
-
-```prettyprint
-# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
-shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. The `input` to squeeze.
-* <b>`axis`</b>: An optional list of `ints`. Defaults to `[]`.
- If specified, only squeezes the dimensions listed. The dimension
- index starts at 0. It is an error to squeeze a dimension that is not 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`squeeze_dims`</b>: Deprecated keyword argument that is now axis.
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- Contains the same data as `input`, but has one or more dimensions of
- size 1 removed.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When both `squeeze_dims` and `axis` are specified.
-
-
-- - -
-
-### `tf.expand_dims(input, axis=None, name=None, dim=None)` {#expand_dims}
-
-Inserts a dimension of 1 into a tensor's shape.
-
-Given a tensor `input`, this operation inserts a dimension of 1 at the
-dimension index `axis` of `input`'s shape. The dimension index `axis` starts
-at zero; if you specify a negative number for `axis` it is counted backward
-from the end.
-
-This operation is useful if you want to add a batch dimension to a single
-element. For example, if you have a single image of shape `[height, width,
-channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`,
-which will make the shape `[1, height, width, channels]`.
-
-Other examples:
-
-```python
-# 't' is a tensor of shape [2]
-shape(expand_dims(t, 0)) ==> [1, 2]
-shape(expand_dims(t, 1)) ==> [2, 1]
-shape(expand_dims(t, -1)) ==> [2, 1]
-
-# 't2' is a tensor of shape [2, 3, 5]
-shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
-shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
-shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
-```
-
-This operation requires that:
-
-`-1-input.dims() <= dim <= input.dims()`
-
-This operation is related to `squeeze()`, which removes dimensions of
-size 1.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`axis`</b>: 0-D (scalar). Specifies the dimension index at which to
- expand the shape of `input`.
-* <b>`name`</b>: The name of the output `Tensor`.
-* <b>`dim`</b>: 0-D (scalar). Equivalent to `axis`, to be deprecated.
-
-##### Returns:
-
- A `Tensor` with the same data as `input`, but its shape has an additional
- dimension of size 1 added.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if both `dim` and `axis` are specified.
-
-
-- - -
-
-### `tf.meshgrid(*args, **kwargs)` {#meshgrid}
-
-Broadcasts parameters for evaluation on an N-D grid.
-
-Given N one-dimensional coordinate arrays `*args`, returns a list `outputs`
-of N-D coordinate arrays for evaluating expressions on an N-D grid.
-
-Notes:
-
-`meshgrid` supports Cartesian ('xy') and matrix ('ij') indexing conventions.
-When the `indexing` argument is set to 'xy' (the default), the broadcasting
-instructions for the first two dimensions are swapped.
-
-Examples:
-
-Calling `X, Y = meshgrid(x, y)` with the tensors
-
-```prettyprint
- x = [1, 2, 3]
- y = [4, 5, 6]
-```
-
-results in
-
-```prettyprint
- X = [[1, 1, 1],
- [2, 2, 2],
- [3, 3, 3]]
- Y = [[4, 5, 6],
- [4, 5, 6],
- [4, 5, 6]]
-```
-
-##### Args:
-
-
-* <b>`*args`</b>: `Tensor`s with rank 1
-* <b>`indexing`</b>: Either 'xy' or 'ij' (optional, default: 'xy')
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`outputs`</b>: A list of N `Tensor`s with rank N
-
-
-
-## Slicing and Joining
-
-TensorFlow provides several operations to slice or extract parts of a tensor,
-or join multiple tensors together.
-
-- - -
-
-### `tf.slice(input_, begin, size, name=None)` {#slice}
-
-Extracts a slice from a tensor.
-
-This operation extracts a slice of size `size` from a tensor `input` starting
-at the location specified by `begin`. The slice `size` is represented as a
-tensor shape, where `size[i]` is the number of elements of the 'i'th dimension
-of `input` that you want to slice. The starting location (`begin`) for the
-slice is represented as an offset in each dimension of `input`. In other
-words, `begin[i]` is the offset into the 'i'th dimension of `input` that you
-want to slice from.
-
-`begin` is zero-based; `size` is one-based. If `size[i]` is -1,
-all remaining elements in dimension i are included in the
-slice. In other words, this is equivalent to setting:
-
-`size[i] = input.dim_size(i) - begin[i]`
-
-This operation requires that:
-
-`0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]`
-
-For example:
-
-```python
-# 'input' is [[[1, 1, 1], [2, 2, 2]],
-# [[3, 3, 3], [4, 4, 4]],
-# [[5, 5, 5], [6, 6, 6]]]
-tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
-tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
- [4, 4, 4]]]
-tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
- [[5, 5, 5]]]
-```
-
-##### Args:
-
-
-* <b>`input_`</b>: A `Tensor`.
-* <b>`begin`</b>: An `int32` or `int64` `Tensor`.
-* <b>`size`</b>: An `int32` or `int64` `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` the same type as `input`.
-
-
-- - -
-
-### `tf.strided_slice(input_, begin, end, strides=None, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, var=None, name=None)` {#strided_slice}
-
-Extracts a strided slice from a tensor.
-
-To a first order, this operation extracts a slice of size `end - begin`
-from a tensor `input`
-starting at the location specified by `begin`. The slice continues by adding
-`stride` to the `begin` index until all dimensions are not less than `end`.
-Note that components of stride can be negative, which causes a reverse
-slice.
-
-This operation can be thought of as an encoding of a numpy-style sliced
-range. Given a python slice `input[<spec0>, <spec1>, ..., <specn>]`,
-this function will be called as follows.
-
-`begin`, `end`, and `strides` will all be of length n. In general, n is
-not the same as the dimensionality of `input`.
-
-For the ith spec,
-`begin_mask`, `end_mask`, `ellipsis_mask`, `new_axis_mask`,
-and `shrink_axis_mask` will have the ith bit corresponding to
-the ith spec.
-
-If the ith bit of `begin_mask` is non-zero, `begin[i]` is ignored and
-the fullest possible range in that dimension is used instead.
-`end_mask` works analogously, except with the end range.
-
-`foo[5:,:,:3]` on a 7x8x9 tensor is equivalent to `foo[5:7,0:8,0:3]`.
-`foo[::-1]` reverses a tensor of shape `[8]`.
-
-
-If the ith bit of `ellipsis_mask` is set, as many unspecified dimensions
-as needed will be inserted between the other dimensions. Only one
-non-zero bit is allowed in `ellipsis_mask`.
-
-For example `foo[3:5,...,4:5]` on a shape 10x3x3x10 tensor is
-equivalent to `foo[3:5,:,:,4:5]` and
-`foo[3:5,...]` is equivalent to `foo[3:5,:,:,:]`.
-
-If the ith bit of `new_axis_mask` is one, then `begin[i]`,
-`end[i]`, and `strides[i]` are ignored and a new length-1 dimension is
-added at this point in the output tensor.
-
-For example, `foo[3:5,4]` on a 10x8 tensor produces a shape-2 tensor
-(with `shrink_axis_mask` being `1<<1 == 2`), whereas `foo[3:5,4:5]`
-produces a shape 2x1 tensor.
-
-If the ith bit of `shrink_axis_mask` is one, then `begin[i]`,
-`end[i]`, and `strides[i]` are used to do a slice in the appropriate
-dimension, but the output tensor will be reduced in dimensionality
-by one. This is only valid if the slice in that dimension selects
-exactly one element.
-
-NOTE: `begin` and `end` are zero-indexed.
-`strides` entries must be non-zero.
-
-
-```python
-# 'input' is [[[1, 1, 1], [2, 2, 2]],
-# [[3, 3, 3], [4, 4, 4]],
-# [[5, 5, 5], [6, 6, 6]]]
-tf.strided_slice(input, [1, 0, 0], [2, 1, 3], [1, 1, 1]) ==> [[[3, 3, 3]]]
-tf.strided_slice(input, [1, 0, 0], [2, 2, 3], [1, 1, 1]) ==> [[[3, 3, 3],
- [4, 4, 4]]]
-tf.strided_slice(input, [1, 1, 0], [2, -1, 3], [1, -1, 1]) ==>[[[4, 4, 4],
- [3, 3, 3]]]
-```
-
-##### Args:
-
-
-* <b>`input_`</b>: A `Tensor`.
-* <b>`begin`</b>: An `int32` or `int64` `Tensor`.
-* <b>`end`</b>: An `int32` or `int64` `Tensor`.
-* <b>`strides`</b>: An `int32` or `int64` `Tensor`.
-* <b>`begin_mask`</b>: An `int32` mask.
-* <b>`end_mask`</b>: An `int32` mask.
-* <b>`ellipsis_mask`</b>: An `int32` mask.
-* <b>`new_axis_mask`</b>: An `int32` mask.
-* <b>`shrink_axis_mask`</b>: An `int32` mask.
-* <b>`var`</b>: The variable corresponding to `input_` or None
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` the same type as `input`.
-
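-The masks are easiest to see through Python slicing, which TensorFlow
-lowers to `strided_slice`; a hedged sketch of the `foo[::-1]` case
-described above:
-
-```python
-import tensorflow as tf
-
-t = tf.constant([[1, 2, 3],
-                 [4, 5, 6]])
-# t[::-1] sets bit 0 of both begin_mask and end_mask, so begin/end are
-# ignored and the full range of axis 0 is traversed with stride -1.
-rev = tf.strided_slice(t, [0], [0], [-1], begin_mask=1, end_mask=1)
-with tf.Session() as sess:
-    print(sess.run(rev))  # [[4 5 6]
-                          #  [1 2 3]]
-```
-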
-
-- - -
-
-### `tf.split(value, num_or_size_splits, axis=0, num=None, name='split')` {#split}
-
-Splits a tensor into sub tensors.
-
-If `num_or_size_splits` is a scalar, `num_split`, then splits `value` along
-dimension `axis` into `num_split` smaller tensors.
-Requires that `num_split` evenly divides `value.shape[axis]`.
-
-If `num_or_size_splits` is a tensor, `size_splits`, then splits `value` into
-`len(size_splits)` pieces. The shape of the `i`-th piece has the same size as
-the `value` except along dimension `axis` where the size is `size_splits[i]`.
-
-For example:
-
-```python
-# 'value' is a tensor with shape [5, 30]
-# Split 'value' into 3 tensors with sizes [4, 15, 11] along dimension 1
-split0, split1, split2 = tf.split(value, [4, 15, 11], 1)
-tf.shape(split0) ==> [5, 4]
-tf.shape(split1) ==> [5, 15]
-tf.shape(split2) ==> [5, 11]
-# Split 'value' into 3 tensors along dimension 1
-split0, split1, split2 = tf.split(value, num_or_size_splits=3, axis=1)
-tf.shape(split0) ==> [5, 10]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: The `Tensor` to split.
-* <b>`num_or_size_splits`</b>: Either an integer indicating the number of splits along
- split_dim or a 1-D Tensor containing the sizes of each output tensor
- along split_dim. If an integer then it must evenly divide
- `value.shape[axis]`; otherwise the sum of sizes along the split
- dimension must match that of the `value`.
-* <b>`axis`</b>: A 0-D `int32` `Tensor`. The dimension along which to split.
- Must be in the range `[0, rank(value))`. Defaults to 0.
-* <b>`num`</b>: Optional, used to specify the number of outputs when it cannot be
- inferred from the shape of `size_splits`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- if `num_or_size_splits` is a scalar returns `num_or_size_splits` `Tensor`
- objects; if `num_or_size_splits` is a 1-D Tensor returns
- `num_or_size_splits.get_shape[0]` `Tensor` objects resulting from splitting
- `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `num` is unspecified and cannot be inferred.
-
-
-- - -
-
-### `tf.tile(input, multiples, name=None)` {#tile}
-
-Constructs a tensor by tiling a given tensor.
-
-This operation creates a new tensor by replicating `input` `multiples` times.
-The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements,
-and the values of `input` are replicated `multiples[i]` times along the 'i'th
-dimension. For example, tiling `[a b c d]` by `[2]` produces
-`[a b c d a b c d]`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. 1-D or higher.
-* <b>`multiples`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D. Length must be the same as the number of dimensions in `input`
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
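-For instance, tiling a 2x2 tensor by `[2, 3]` yields a 4x6 tensor
-(a minimal TF 1.x sketch):
-
-```python
-import tensorflow as tf
-
-t = tf.constant([[1, 2],
-                 [3, 4]])    # shape [2, 2]
-tiled = tf.tile(t, [2, 3])   # shape [2*2, 2*3] == [4, 6]
-with tf.Session() as sess:
-    print(sess.run(tiled))
-    # [[1 2 1 2 1 2]
-    #  [3 4 3 4 3 4]
-    #  [1 2 1 2 1 2]
-    #  [3 4 3 4 3 4]]
-```
-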
-
-- - -
-
-### `tf.pad(tensor, paddings, mode='CONSTANT', name=None)` {#pad}
-
-Pads a tensor.
-
-This operation pads a `tensor` according to the `paddings` you specify.
-`paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of
-`tensor`. For each dimension D of `input`, `paddings[D, 0]` indicates how
-many values to add before the contents of `tensor` in that dimension, and
-`paddings[D, 1]` indicates how many values to add after the contents of
-`tensor` in that dimension. If `mode` is "REFLECT" then both `paddings[D, 0]`
-and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If
-`mode` is "SYMMETRIC" then both `paddings[D, 0]` and `paddings[D, 1]` must be
-no greater than `tensor.dim_size(D)`.
-
-The padded size of each dimension D of the output is:
-
-`paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]`
-
-For example:
-
-```python
-# 't' is [[1, 2, 3], [4, 5, 6]].
-# 'paddings' is [[1, 1,], [2, 2]].
-# rank of 't' is 2.
-pad(t, paddings, "CONSTANT") ==> [[0, 0, 0, 0, 0, 0, 0],
- [0, 0, 1, 2, 3, 0, 0],
- [0, 0, 4, 5, 6, 0, 0],
- [0, 0, 0, 0, 0, 0, 0]]
-
-pad(t, paddings, "REFLECT") ==> [[6, 5, 4, 5, 6, 5, 4],
- [3, 2, 1, 2, 3, 2, 1],
- [6, 5, 4, 5, 6, 5, 4],
- [3, 2, 1, 2, 3, 2, 1]]
-
-pad(t, paddings, "SYMMETRIC") ==> [[2, 1, 1, 2, 3, 3, 2],
- [2, 1, 1, 2, 3, 3, 2],
- [5, 4, 4, 5, 6, 6, 5],
- [5, 4, 4, 5, 6, 6, 5]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`.
-* <b>`paddings`</b>: A `Tensor` of type `int32`.
-* <b>`mode`</b>: One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive)
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC".
-
-
-- - -
-
-### `tf.concat(values, axis, name='concat')` {#concat}
-
-Concatenates tensors along one dimension.
-
-Concatenates the list of tensors `values` along dimension `axis`. If
-`values[i].shape = [D0, D1, ... Daxis(i), ...Dn]`, the concatenated
-result has shape
-
- [D0, D1, ... Raxis, ...Dn]
-
-where
-
- Raxis = sum(Daxis(i))
-
-That is, the data from the input tensors is joined along the `axis`
-dimension.
-
-The number of dimensions of the input tensors must match, and all dimensions
-except `axis` must be equal.
-
-For example:
-
-```python
-t1 = [[1, 2, 3], [4, 5, 6]]
-t2 = [[7, 8, 9], [10, 11, 12]]
-tf.concat([t1, t2], 0) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
-tf.concat([t1, t2], 1) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
-
-# tensor t3 with shape [2, 3]
-# tensor t4 with shape [2, 3]
-tf.shape(tf.concat([t3, t4], 0)) ==> [4, 3]
-tf.shape(tf.concat([t3, t4], 1)) ==> [2, 6]
-```
-
-Note: If you are concatenating along a new axis, consider using `tf.stack`.
-E.g.
-
-```python
-tf.concat([tf.expand_dims(t, axis) for t in tensors], axis)
-```
-
-can be rewritten as
-
-```python
-tf.stack(tensors, axis=axis)
-```
-
-##### Args:
-
-
-* <b>`values`</b>: A list of `Tensor` objects or a single `Tensor`.
-* <b>`axis`</b>: 0-D `int32` `Tensor`. Dimension along which to concatenate.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` resulting from concatenation of the input tensors.
-
-
-- - -
-
-### `tf.stack(values, axis=0, name='stack')` {#stack}
-
-Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor.
-
-Packs the list of tensors in `values` into a tensor with rank one higher than
-each tensor in `values`, by packing them along the `axis` dimension.
-Given a list of length `N` of tensors of shape `(A, B, C)`;
-
-if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`.
-if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`.
-Etc.
-
-For example:
-
-```prettyprint
-# 'x' is [1, 4]
-# 'y' is [2, 5]
-# 'z' is [3, 6]
-stack([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim.
-stack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
-```
-
-This is the opposite of unstack. The numpy equivalent is
-
- tf.stack([x, y, z]) = np.asarray([x, y, z])
-
-##### Args:
-
-
-* <b>`values`</b>: A list of `Tensor` objects with the same shape and type.
-* <b>`axis`</b>: An `int`. The axis to stack along. Defaults to the first dimension.
- Supports negative indexes.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`output`</b>: A stacked `Tensor` with the same type as `values`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `axis` is out of the range [-(R+1), R+1).
-
-
-- - -
-
-### `tf.parallel_stack(values, name='parallel_stack')` {#parallel_stack}
-
-Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel.
-
-Requires that the shape of inputs be known at graph construction time.
-
-Packs the list of tensors in `values` into a tensor with rank one higher than
-each tensor in `values`, by packing them along the first dimension.
-Given a list of length `N` of tensors of shape `(A, B, C)`; the `output`
-tensor will have the shape `(N, A, B, C)`.
-
-For example:
-
-```prettyprint
-# 'x' is [1, 4]
-# 'y' is [2, 5]
-# 'z' is [3, 6]
-parallel_stack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]
-```
-
-The difference between `stack` and `parallel_stack` is that `stack` requires
-all of the inputs to be computed before the operation begins, but doesn't
-require that the input shapes be known during graph construction.
-`parallel_stack` copies pieces of the input into the output as they become
-available; in some situations this can provide a performance benefit.
-
-This is the opposite of unstack. The numpy equivalent is
-
- tf.parallel_stack([x, y, z]) = np.asarray([x, y, z])
-
-##### Args:
-
-
-* <b>`values`</b>: A list of `Tensor` objects with the same shape and type.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`output`</b>: A stacked `Tensor` with the same type as `values`.
-
-
-- - -
-
-### `tf.unstack(value, num=None, axis=0, name='unstack')` {#unstack}
-
-Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors.
-
-Unpacks `num` tensors from `value` by chipping it along the `axis` dimension.
-If `num` is not specified (the default), it is inferred from `value`'s shape.
-If `value.shape[axis]` is not known, `ValueError` is raised.
-
-For example, given a tensor of shape `(A, B, C, D)`;
-
-If `axis == 0` then the i'th tensor in `output` is the slice
- `value[i, :, :, :]` and each tensor in `output` will have shape `(B, C, D)`.
- (Note that the dimension unpacked along is gone, unlike `split`).
-
-If `axis == 1` then the i'th tensor in `output` is the slice
- `value[:, i, :, :]` and each tensor in `output` will have shape `(A, C, D)`.
-Etc.
-
-This is the opposite of `stack`. The numpy equivalent is
-
- tf.unstack(x, n) = list(x)
-
-##### Args:
-
-
-* <b>`value`</b>: A rank `R > 0` `Tensor` to be unstacked.
-* <b>`num`</b>: An `int`. The length of the dimension `axis`. Automatically inferred
- if `None` (the default).
-* <b>`axis`</b>: An `int`. The axis to unstack along. Defaults to the first
- dimension. Supports negative indexes.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The list of `Tensor` objects unstacked from `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `num` is unspecified and cannot be inferred.
-* <b>`ValueError`</b>: If `axis` is out of the range [-R, R).
-
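-A minimal TF 1.x sketch, unstacking along each axis:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[1, 4], [2, 5], [3, 6]])  # shape [3, 2]
-rows = tf.unstack(x)          # 3 tensors, each of shape [2]
-cols = tf.unstack(x, axis=1)  # 2 tensors, each of shape [3]
-with tf.Session() as sess:
-    print(sess.run(rows[0]))  # [1 4]
-    print(sess.run(cols[0]))  # [1 2 3]
-```
-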
-
-- - -
-
-### `tf.reverse_sequence(input, seq_lengths, seq_axis=None, batch_axis=None, name=None, seq_dim=None, batch_dim=None)` {#reverse_sequence}
-
-Reverses variable length slices.
-
-This op first slices `input` along the dimension `batch_axis`, and for each
-slice `i`, reverses the first `seq_lengths[i]` elements along
-the dimension `seq_axis`.
-
-The elements of `seq_lengths` must obey `seq_lengths[i] <= input.dims[seq_dim]`,
-and `seq_lengths` must be a vector of length `input.dims[batch_dim]`.
-
-The output slice `i` along dimension `batch_axis` is then given by input
-slice `i`, with the first `seq_lengths[i]` slices along dimension
-`seq_axis` reversed.
-
-For example:
-
-```prettyprint
-# Given this:
-batch_dim = 0
-seq_dim = 1
-input.dims = (4, 8, ...)
-seq_lengths = [7, 2, 3, 5]
-
-# then slices of input are reversed on seq_dim, but only up to seq_lengths:
-output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
-output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
-output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
-output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]
-
-# while entries past seq_lens are copied through:
-output[0, 7:, :, ...] = input[0, 7:, :, ...]
-output[1, 2:, :, ...] = input[1, 2:, :, ...]
-output[2, 3:, :, ...] = input[2, 3:, :, ...]
-output[3, 2:, :, ...] = input[3, 2:, :, ...]
-```
-
-In contrast, if:
-
-```prettyprint
-# Given this:
-batch_dim = 2
-seq_dim = 0
-input.dims = (8, ?, 4, ...)
-seq_lengths = [7, 2, 3, 5]
-
-# then slices of input are reversed on seq_dim, but only up to seq_lengths:
-output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
-output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
-output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
-output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]
-
-# while entries past seq_lens are copied through:
-output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
-output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
-output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
-output[2:, :, 3, :, ...] = input[2:, :, 3, :, ...]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. The input to reverse.
-* <b>`seq_lengths`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D with length `input.dims(batch_dim)` and
- `max(seq_lengths) <= input.dims(seq_dim)`
-* <b>`seq_axis`</b>: An `int`. The dimension which is partially reversed.
-* <b>`batch_axis`</b>: An optional `int`. Defaults to `0`.
- The dimension along which reversal is performed.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- The partially reversed input. It has the same shape as `input`.
-
-
-- - -
-
-### `tf.reverse(tensor, axis, name=None)` {#reverse}
-
-Reverses specific dimensions of a tensor.
-
-NOTE `tf.reverse` has now changed behavior in preparation for 1.0.
-`tf.reverse_v2` is currently an alias that will be deprecated before TF 1.0.
-
-Given a `tensor` and an `int32` tensor `axis` representing the set of
-dimensions of `tensor` to reverse, this operation reverses each dimension
-`i` for which there exists `j` s.t. `axis[j] == i`.
-
-`tensor` can have up to 8 dimensions. `axis` may contain zero or more
-dimension indices. If an index is specified more than once, an
-InvalidArgument error is raised.
-
-For example:
-
-```prettyprint
-# tensor 't' is [[[[ 0, 1, 2, 3],
-# [ 4, 5, 6, 7],
-# [ 8, 9, 10, 11]],
-# [[12, 13, 14, 15],
-# [16, 17, 18, 19],
-# [20, 21, 22, 23]]]]
-# tensor 't' shape is [1, 2, 3, 4]
-
-# 'dims' is [3] or 'dims' is -1
-reverse(t, dims) ==> [[[[ 3, 2, 1, 0],
- [ 7, 6, 5, 4],
- [ 11, 10, 9, 8]],
- [[15, 14, 13, 12],
- [19, 18, 17, 16],
- [23, 22, 21, 20]]]]
-
-# 'dims' is '[1]' (or 'dims' is '[-3]')
-reverse(t, dims) ==> [[[[12, 13, 14, 15],
- [16, 17, 18, 19],
-                        [20, 21, 22, 23]],
- [[ 0, 1, 2, 3],
- [ 4, 5, 6, 7],
- [ 8, 9, 10, 11]]]]
-
-# 'dims' is '[2]' (or 'dims' is '[-2]')
-reverse(t, dims) ==> [[[[8, 9, 10, 11],
- [4, 5, 6, 7],
-                        [0, 1, 2, 3]],
- [[20, 21, 22, 23],
- [16, 17, 18, 19],
- [12, 13, 14, 15]]]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `int64`, `bool`, `half`, `float32`, `float64`, `complex64`, `complex128`.
- Up to 8-D.
-* <b>`axis`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D. The indices of the dimensions to reverse.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
-
-
-- - -
-
-### `tf.reverse_v2(tensor, axis, name=None)` {#reverse_v2}
-
-Reverses specific dimensions of a tensor.
-
-NOTE `tf.reverse` has now changed behavior in preparation for 1.0.
-`tf.reverse_v2` is currently an alias that will be deprecated before TF 1.0.
-
-Given a `tensor` and an `int32` tensor `axis` representing the set of
-dimensions of `tensor` to reverse, this operation reverses each dimension
-`i` for which there exists `j` s.t. `axis[j] == i`.
-
-`tensor` can have up to 8 dimensions. `axis` may contain zero or more
-dimension indices. If an index is specified more than once, an
-InvalidArgument error is raised.
-
-For example:
-
-```prettyprint
-# tensor 't' is [[[[ 0, 1, 2, 3],
-# [ 4, 5, 6, 7],
-# [ 8, 9, 10, 11]],
-# [[12, 13, 14, 15],
-# [16, 17, 18, 19],
-# [20, 21, 22, 23]]]]
-# tensor 't' shape is [1, 2, 3, 4]
-
-# 'dims' is [3] or 'dims' is -1
-reverse(t, dims) ==> [[[[ 3, 2, 1, 0],
- [ 7, 6, 5, 4],
- [ 11, 10, 9, 8]],
- [[15, 14, 13, 12],
- [19, 18, 17, 16],
- [23, 22, 21, 20]]]]
-
-# 'dims' is '[1]' (or 'dims' is '[-3]')
-reverse(t, dims) ==> [[[[12, 13, 14, 15],
- [16, 17, 18, 19],
-                        [20, 21, 22, 23]],
- [[ 0, 1, 2, 3],
- [ 4, 5, 6, 7],
- [ 8, 9, 10, 11]]]]
-
-# 'dims' is '[2]' (or 'dims' is '[-2]')
-reverse(t, dims) ==> [[[[8, 9, 10, 11],
- [4, 5, 6, 7],
-                        [0, 1, 2, 3]],
- [[20, 21, 22, 23],
- [16, 17, 18, 19],
- [12, 13, 14, 15]]]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `int64`, `bool`, `half`, `float32`, `float64`, `complex64`, `complex128`.
- Up to 8-D.
-* <b>`axis`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D. The indices of the dimensions to reverse.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
-
-
-- - -
-
-### `tf.transpose(a, perm=None, name='transpose')` {#transpose}
-
-Transposes `a`. Permutes the dimensions according to `perm`.
-
-The returned tensor's dimension i will correspond to the input dimension
-`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is
-the rank of the input tensor. Hence by default, this operation performs a
-regular matrix transpose on 2-D input Tensors.
-
-For example:
-
-```python
-# 'x' is [[1 2 3]
-# [4 5 6]]
-tf.transpose(x) ==> [[1 4]
- [2 5]
- [3 6]]
-
-# Equivalently
-tf.transpose(x, perm=[1, 0]) ==> [[1 4]
- [2 5]
- [3 6]]
-
-# 'perm' is more useful for n-dimensional tensors, for n > 2
-# 'x' is [[[1 2 3]
-# [4 5 6]]
-# [[7 8 9]
-# [10 11 12]]]
-# Take the transpose of the matrices in dimension-0
-tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4]
- [2 5]
- [3 6]]
-
- [[7 10]
- [8 11]
- [9 12]]]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`.
-* <b>`perm`</b>: A permutation of the dimensions of `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A transposed `Tensor`.
-
-
-- - -
-
-### `tf.extract_image_patches(images, ksizes, strides, rates, padding, name=None)` {#extract_image_patches}
-
-Extract `patches` from `images` and put them in the "depth" output dimension.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
- 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
-* <b>`ksizes`</b>: A list of `ints` that has length `>= 4`.
- The size of the sliding window for each dimension of `images`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 4`.
- 1-D of length 4. How far the centers of two consecutive patches are in
- the images. Must be: `[1, stride_rows, stride_cols, 1]`.
-* <b>`rates`</b>: A list of `ints` that has length `>= 4`.
- 1-D of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the
- input stride, specifying how far two consecutive patch samples are in the
- input. Equivalent to extracting patches with
- `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by
- subsampling them spatially by a factor of `rates`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-
- We specify the size-related attributes as:
-
- ```python
- ksizes = [1, ksize_rows, ksize_cols, 1]
- strides = [1, strides_rows, strides_cols, 1]
- rates = [1, rates_rows, rates_cols, 1]
- ```
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`.
- 4-D Tensor with shape `[batch, out_rows, out_cols, ksize_rows *
- ksize_cols * depth]` containing image patches with size
- `ksize_rows x ksize_cols x depth` vectorized in the "depth" dimension.
-
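-For example, extracting non-overlapping 2x2 patches from a 4x4
-single-channel image flattens each patch into the depth dimension
-(a hedged TF 1.x sketch):
-
-```python
-import tensorflow as tf
-
-images = tf.reshape(tf.range(1, 17), [1, 4, 4, 1])  # one 4x4 "image"
-patches = tf.extract_image_patches(images,
-                                   ksizes=[1, 2, 2, 1],
-                                   strides=[1, 2, 2, 1],
-                                   rates=[1, 1, 1, 1],
-                                   padding='VALID')
-with tf.Session() as sess:
-    print(sess.run(patches).shape)  # (1, 2, 2, 4): a 2x2 grid of flattened 2x2 patches
-```
-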
-
-- - -
-
-### `tf.space_to_batch_nd(input, block_shape, paddings, name=None)` {#space_to_batch_nd}
-
-SpaceToBatch for N-D tensors of type T.
-
-This operation divides "spatial" dimensions `[1, ..., M]` of the input into a
-grid of blocks of shape `block_shape`, and interleaves these blocks with the
-"batch" dimension (0) such that in the output, the spatial dimensions
-`[1, ..., M]` correspond to the position within the grid, and the batch
-dimension combines both the position within a spatial block and the original
-batch position. Prior to division into blocks, the spatial dimensions of the
-input are optionally zero padded according to `paddings`. See below for a
-precise description.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
- N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,
- where spatial_shape has `M` dimensions.
-* <b>`block_shape`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D with shape `[M]`, all values must be >= 1.
-* <b>`paddings`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 2-D with shape `[M, 2]`, all values must be >= 0.
- `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension
- `i + 1`, which corresponds to spatial dimension `i`. It is required that
- `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.
-
- This operation is equivalent to the following steps:
-
- 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the
- input according to `paddings` to produce `padded` of shape `padded_shape`.
-
- 2. Reshape `padded` to `reshaped_padded` of shape:
-
- [batch] +
- [padded_shape[1] / block_shape[0],
- block_shape[0],
- ...,
- padded_shape[M] / block_shape[M-1],
- block_shape[M-1]] +
- remaining_shape
-
- 3. Permute dimensions of `reshaped_padded` to produce
- `permuted_reshaped_padded` of shape:
-
- block_shape +
- [batch] +
- [padded_shape[1] / block_shape[0],
- ...,
- padded_shape[M] / block_shape[M-1]] +
- remaining_shape
-
- 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch
- dimension, producing an output tensor of shape:
-
- [batch * prod(block_shape)] +
- [padded_shape[1] / block_shape[0],
- ...,
- padded_shape[M] / block_shape[M-1]] +
- remaining_shape
-
- Some examples:
-
- (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and
- `paddings = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- x = [[[[1], [2]], [[3], [4]]]]
- ```
-
- The output tensor has shape `[4, 1, 1, 1]` and value:
-
- ```prettyprint
- [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
- ```
-
- (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and
- `paddings = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
- ```
-
- The output tensor has shape `[4, 1, 1, 3]` and value:
-
- ```prettyprint
- [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
- ```
-
- (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and
- `paddings = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]],
- [[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
- The output tensor has shape `[4, 2, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [3]], [[9], [11]]],
- [[[2], [4]], [[10], [12]]],
- [[[5], [7]], [[13], [15]]],
- [[[6], [8]], [[14], [16]]]]
- ```
-
- (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and
- paddings = `[[0, 0], [2, 0]]`:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]]],
- [[[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
- The output tensor has shape `[8, 1, 3, 1]` and value:
-
- ```prettyprint
- x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
- [[[0], [2], [4]]], [[[0], [10], [12]]],
- [[[0], [5], [7]]], [[[0], [13], [15]]],
- [[[0], [6], [8]]], [[[0], [14], [16]]]]
- ```
-
- Among others, this operation is useful for reducing atrous convolution into
- regular convolution.
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
-
-- - -
-
-### `tf.space_to_batch(input, paddings, block_size, name=None)` {#space_to_batch}
-
-SpaceToBatch for 4-D tensors of type T.
-
-This is a legacy version of the more general SpaceToBatchND.
-
-Zero-pads and then rearranges (permutes) blocks of spatial data into batch.
-More specifically, this op outputs a copy of the input tensor where values from
-the `height` and `width` dimensions are moved to the `batch` dimension. After
-the zero-padding, both `height` and `width` of the input must be divisible by the
-block size.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. 4-D with shape `[batch, height, width, depth]`.
-* <b>`paddings`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies
- the padding of the input with zeros across the spatial dimensions as follows:
-
- paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]
-
- The effective spatial dimensions of the zero-padded input tensor will be:
-
- height_pad = pad_top + height + pad_bottom
- width_pad = pad_left + width + pad_right
-
- The attr `block_size` must be greater than one. It indicates the block size.
-
-    * Non-overlapping blocks of size `block_size x block_size` in the height and
- width dimensions are rearranged into the batch dimension at each location.
- * The batch of the output tensor is `batch * block_size * block_size`.
- * Both height_pad and width_pad must be divisible by block_size.
-
- The shape of the output will be:
-
- [batch*block_size*block_size, height_pad/block_size, width_pad/block_size,
- depth]
-
- Some examples:
-
- (1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [2]], [[3], [4]]]]
- ```
-
- The output tensor has shape `[4, 1, 1, 1]` and value:
-
- ```prettyprint
- [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
- ```
-
- (2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
- ```
-
- The output tensor has shape `[4, 1, 1, 3]` and value:
-
- ```prettyprint
- [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
- ```
-
- (3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]],
- [[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
- The output tensor has shape `[4, 2, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [3]], [[9], [11]]],
- [[[2], [4]], [[10], [12]]],
- [[[5], [7]], [[13], [15]]],
- [[[6], [8]], [[14], [16]]]]
- ```
-
- (4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]]],
- [[[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
- The output tensor has shape `[8, 1, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
- [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
- ```
-
- Among others, this operation is useful for reducing atrous convolution into
- regular convolution.
-
-* <b>`block_size`</b>: An `int` that is `>= 2`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
-
-- - -
-
-### `tf.required_space_to_batch_paddings(input_shape, block_shape, base_paddings=None, name=None)` {#required_space_to_batch_paddings}
-
-Calculate padding required to make block_shape divide input_shape.
-
-This function can be used to calculate a suitable paddings argument for use
-with space_to_batch_nd and batch_to_space_nd.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: int32 Tensor of shape [N].
-* <b>`block_shape`</b>: int32 Tensor of shape [N].
-* <b>`base_paddings`</b>: Optional int32 Tensor of shape [N, 2]. Specifies the minimum
- amount of padding to use. All elements must be >= 0. If not specified,
- defaults to 0.
-* <b>`name`</b>: string. Optional name prefix.
-
-##### Returns:
-
- (paddings, crops), where:
-
-  `paddings` and `crops` are int32 Tensors of rank 2 and shape [N, 2],
-  satisfying:
-
-      paddings[i, 0] = base_paddings[i, 0]
-      0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i]
-      (input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0
-
-      crops[i, 0] = 0
-      crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If called with incompatible shapes.
-
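-For example, a shape `[5, 7]` input with `block_shape = [2, 3]` needs one
-extra row and two extra columns of padding (a minimal TF 1.x sketch):
-
-```python
-import tensorflow as tf
-
-paddings, crops = tf.required_space_to_batch_paddings(
-    input_shape=tf.constant([5, 7]),
-    block_shape=tf.constant([2, 3]))
-with tf.Session() as sess:
-    p, c = sess.run([paddings, crops])
-    print(p)  # [[0 1]   (5 + 0 + 1) % 2 == 0
-              #  [0 2]]  (7 + 0 + 2) % 3 == 0
-    print(c)  # [[0 1]
-              #  [0 2]]
-```
-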
-
-- - -
-
-### `tf.batch_to_space_nd(input, block_shape, crops, name=None)` {#batch_to_space_nd}
-
-BatchToSpace for N-D tensors of type T.
-
-This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape
-`block_shape + [batch]`, interleaves these blocks back into the grid defined by
-the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as
-the input. The spatial dimensions of this intermediate result are then
-optionally cropped according to `crops` to produce the output. This is the
-reverse of SpaceToBatch. See below for a precise description.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
- N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,
- where spatial_shape has M dimensions.
-* <b>`block_shape`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D with shape `[M]`, all values must be >= 1.
-* <b>`crops`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 2-D with shape `[M, 2]`, all values must be >= 0.
- `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input
- dimension `i + 1`, which corresponds to spatial dimension `i`. It is
- required that
- `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.
-
- This operation is equivalent to the following steps:
-
- 1. Reshape `input` to `reshaped` of shape:
- [block_shape[0], ..., block_shape[M-1],
- batch / prod(block_shape),
- input_shape[1], ..., input_shape[N-1]]
-
- 2. Permute dimensions of `reshaped` to produce `permuted` of shape
- [batch / prod(block_shape),
-
- input_shape[1], block_shape[0],
- ...,
- input_shape[M], block_shape[M-1],
-
- input_shape[M+1], ..., input_shape[N-1]]
-
- 3. Reshape `permuted` to produce `reshaped_permuted` of shape
- [batch / prod(block_shape),
-
- input_shape[1] * block_shape[0],
- ...,
- input_shape[M] * block_shape[M-1],
-
- input_shape[M+1],
- ...,
- input_shape[N-1]]
-
- 4. Crop the start and end of dimensions `[1, ..., M]` of
- `reshaped_permuted` according to `crops` to produce the output of shape:
- [batch / prod(block_shape),
-
- input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],
- ...,
- input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],
-
- input_shape[M+1], ..., input_shape[N-1]]
-
- Some examples:
-
- (1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and
- `crops = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
- ```
-
- The output tensor has shape `[1, 2, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [2]], [[3], [4]]]]
- ```
-
- (2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and
- `crops = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
- ```
-
- The output tensor has shape `[1, 2, 2, 3]` and value:
-
- ```prettyprint
- x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
- ```
-
- (3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and
- `crops = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- x = [[[[1], [3]], [[9], [11]]],
- [[[2], [4]], [[10], [12]]],
- [[[5], [7]], [[13], [15]]],
- [[[6], [8]], [[14], [16]]]]
- ```
-
- The output tensor has shape `[1, 4, 4, 1]` and value:
-
- ```prettyprint
- x = [[[1], [2], [3], [4]],
- [[5], [6], [7], [8]],
- [[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]
- ```
-
- (4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and
- `crops = [[0, 0], [2, 0]]`:
-
- ```prettyprint
- x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
- [[[0], [2], [4]]], [[[0], [10], [12]]],
- [[[0], [5], [7]]], [[[0], [13], [15]]],
- [[[0], [6], [8]]], [[[0], [14], [16]]]]
- ```
-
- The output tensor has shape `[2, 2, 4, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]]],
- [[[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
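-As a runnable sanity check, a minimal sketch based on example (1) above:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])  # shape [4, 1, 1, 1]
-y = tf.batch_to_space_nd(x, block_shape=[2, 2], crops=[[0, 0], [0, 0]])
-
-with tf.Session() as sess:
-  print(sess.run(y))  # [[[[1], [2]], [[3], [4]]]] with shape [1, 2, 2, 1]
-```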
-
-- - -
-
-### `tf.batch_to_space(input, crops, block_size, name=None)` {#batch_to_space}
-
-BatchToSpace for 4-D tensors of type T.
-
-This is a legacy version of the more general BatchToSpaceND.
-
-Rearranges (permutes) data from batch into blocks of spatial data, followed by
-cropping. This is the reverse transformation of SpaceToBatch. More specifically,
-this op outputs a copy of the input tensor where values from the `batch`
-dimension are moved in spatial blocks to the `height` and `width` dimensions,
-followed by cropping along the `height` and `width` dimensions.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. 4-D tensor with shape
- `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size,
- depth]`. Note that the batch size of the input tensor must be divisible by
- `block_size * block_size`.
-* <b>`crops`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies
- how many elements to crop from the intermediate result across the spatial
- dimensions as follows:
-
- crops = [[crop_top, crop_bottom], [crop_left, crop_right]]
-
-* <b>`block_size`</b>: An `int` that is `>= 2`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- 4-D with shape `[batch, height, width, depth]`, where:
-
- height = height_pad - crop_top - crop_bottom
- width = width_pad - crop_left - crop_right
-
- The attr `block_size` must be greater than one. It indicates the block size.
-
- Some examples:
-
- (1) For the following input of shape `[4, 1, 1, 1]` and block_size of 2:
-
- ```prettyprint
- [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
- ```
-
- The output tensor has shape `[1, 2, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [2]], [[3], [4]]]]
- ```
-
- (2) For the following input of shape `[4, 1, 1, 3]` and block_size of 2:
-
- ```prettyprint
- [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
- ```
-
- The output tensor has shape `[1, 2, 2, 3]` and value:
-
- ```prettyprint
- x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
- ```
-
- (3) For the following input of shape `[4, 2, 2, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [3]], [[9], [11]]],
- [[[2], [4]], [[10], [12]]],
- [[[5], [7]], [[13], [15]]],
- [[[6], [8]], [[14], [16]]]]
- ```
-
- The output tensor has shape `[1, 4, 4, 1]` and value:
-
- ```prettyprint
- x = [[[1], [2], [3], [4]],
- [[5], [6], [7], [8]],
- [[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]
- ```
-
- (4) For the following input of shape `[8, 1, 2, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
- [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
- ```
-
- The output tensor has shape `[2, 2, 4, 1]` and value:
-
- ```prettyprint
-  x = [[[[1], [2], [3], [4]],
-        [[5], [6], [7], [8]]],
-       [[[9], [10], [11], [12]],
-        [[13], [14], [15], [16]]]]
- ```
-
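-The same computation with the legacy 4-D op, as a minimal sketch mirroring
-example (1) above:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[[[1]]], [[[2]]], [[[3]]], [[[4]]]])  # shape [4, 1, 1, 1]
-y = tf.batch_to_space(x, crops=[[0, 0], [0, 0]], block_size=2)
-
-with tf.Session() as sess:
-  print(sess.run(y))  # [[[[1], [2]], [[3], [4]]]], shape [1, 2, 2, 1]
-```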
-
-- - -
-
-### `tf.space_to_depth(input, block_size, name=None)` {#space_to_depth}
-
-SpaceToDepth for tensors of type T.
-
-Rearranges blocks of spatial data into depth. More specifically,
-this op outputs a copy of the input tensor where values from the `height`
-and `width` dimensions are moved to the `depth` dimension.
-The attr `block_size` indicates the input block size and how the data is moved.
-
-  * Non-overlapping blocks of size `block_size x block_size` are rearranged
-    into depth at each location.
-  * The depth of the output tensor is `input_depth * block_size * block_size`.
-  * The input tensor's height and width must be divisible by `block_size`.
-
-That is, assuming the input is in the shape:
-`[batch, height, width, depth]`,
-the shape of the output will be:
-`[batch, height/block_size, width/block_size, depth*block_size*block_size]`
-
-This operation requires that the input tensor be of rank 4, and that
-`block_size` be `>= 2` and a divisor of both the input `height` and `width`.
-
-This operation is useful for resizing the activations between convolutions
-(but keeping all data), e.g. instead of pooling. It is also useful for training
-purely convolutional models.
-
-For example, given this input of shape `[1, 2, 2, 1]`, and block_size of 2:
-
-```prettyprint
-x = [[[[1], [2]],
- [[3], [4]]]]
-```
-
-This operation will output a tensor of shape `[1, 1, 1, 4]`:
-
-```prettyprint
-[[[[1, 2, 3, 4]]]]
-```
-
-Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`;
-the corresponding output will have a single element (i.e. width and height are
-both 1) and a depth of 4 channels (1 * block_size * block_size).
-The output element shape is `[1, 1, 4]`.
-
-For an input tensor with larger depth, e.g. of shape `[1, 2, 2, 3]`:
-
-```prettyprint
-x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
-```
-
-This operation, for a block_size of 2, will return the following tensor of
-shape `[1, 1, 1, 12]`:
-
-```prettyprint
-[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
-```
-
-Similarly, for the following input of shape `[1, 4, 4, 1]`, and a block size of 2:
-
-```prettyprint
-x = [[[[1], [2], [5], [6]],
- [[3], [4], [7], [8]],
- [[9], [10], [13], [14]],
- [[11], [12], [15], [16]]]]
-```
-
-the operator will return the following tensor of shape `[1, 2, 2, 4]`:
-
-```prettyprint
-x = [[[[1, 2, 3, 4],
- [5, 6, 7, 8]],
- [[9, 10, 11, 12],
- [13, 14, 15, 16]]]]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`block_size`</b>: An `int` that is `>= 2`. The size of the spatial block.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
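-A runnable sketch of the first example above (values are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[[[1], [2]],
-                  [[3], [4]]]])          # shape [1, 2, 2, 1]
-y = tf.space_to_depth(x, block_size=2)   # shape [1, 1, 1, 4]
-
-with tf.Session() as sess:
-  print(sess.run(y))  # [[[[1 2 3 4]]]]
-```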
-
-- - -
-
-### `tf.depth_to_space(input, block_size, name=None)` {#depth_to_space}
-
-DepthToSpace for tensors of type T.
-
-Rearranges data from depth into blocks of spatial data.
-This is the reverse transformation of SpaceToDepth. More specifically,
-this op outputs a copy of the input tensor where values from the `depth`
-dimension are moved in spatial blocks to the `height` and `width` dimensions.
-The attr `block_size` indicates the input block size and how the data is moved.
-
-  * Chunks of data of size `block_size * block_size` from depth are rearranged
-    into non-overlapping blocks of size `block_size x block_size`.
-  * The width of the output tensor is `input_width * block_size`, whereas the
-    height is `input_height * block_size`.
-  * The depth of the input tensor must be divisible by
-    `block_size * block_size`.
-
-That is, assuming the input is in the shape:
-`[batch, height, width, depth]`,
-the shape of the output will be:
-`[batch, height*block_size, width*block_size, depth/(block_size*block_size)]`
-
-This operation requires that the input tensor be of rank 4, and that
-`block_size` be `>= 2` and that `block_size * block_size` be a divisor of the
-input depth.
-
-This operation is useful for resizing the activations between convolutions
-(but keeping all data), e.g. instead of pooling. It is also useful for training
-purely convolutional models.
-
-For example, given this input of shape `[1, 1, 1, 4]`, and a block size of 2:
-
-```prettyprint
-x = [[[[1, 2, 3, 4]]]]
-```
-
-This operation will output a tensor of shape `[1, 2, 2, 1]`:
-
-```prettyprint
- [[[[1], [2]],
- [[3], [4]]]]
-```
-
-Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`;
-the corresponding output will have 2x2 elements and a depth of
-1 channel (1 = `4 / (block_size * block_size)`).
-The output element shape is `[2, 2, 1]`.
-
-For an input tensor with larger depth, e.g. of shape `[1, 1, 1, 12]`:
-
-```prettyprint
-x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
-```
-
-This operation, for a block size of 2, will return the following tensor of
-shape `[1, 2, 2, 3]`:
-
-```prettyprint
-   [[[[1, 2, 3], [4, 5, 6]],
-     [[7, 8, 9], [10, 11, 12]]]]
-```
-
-Similarly, for the following input of shape `[1, 2, 2, 4]`, and a block size of 2:
-
-```prettyprint
-x = [[[[1, 2, 3, 4],
- [5, 6, 7, 8]],
- [[9, 10, 11, 12],
- [13, 14, 15, 16]]]]
-```
-
-the operator will return the following tensor of shape `[1, 4, 4, 1]`:
-
-```prettyprint
-x = [[[[1],  [2],  [5],  [6]],
-      [[3],  [4],  [7],  [8]],
-      [[9],  [10], [13], [14]],
-      [[11], [12], [15], [16]]]]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`block_size`</b>: An `int` that is `>= 2`.
- The size of the spatial block, same as in Space2Depth.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
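-A runnable sketch of the first example above (values are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[[[1, 2, 3, 4]]]])      # shape [1, 1, 1, 4]
-y = tf.depth_to_space(x, block_size=2)   # shape [1, 2, 2, 1]
-
-with tf.Session() as sess:
-  print(sess.run(y))  # [[[[1], [2]], [[3], [4]]]]
-```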
-
-- - -
-
-### `tf.gather(params, indices, validate_indices=None, name=None)` {#gather}
-
-Gather slices from `params` according to `indices`.
-
-`indices` must be an integer tensor of any dimension (usually 0-D or 1-D).
-Produces an output tensor with shape `indices.shape + params.shape[1:]` where:
-
-```python
-    # Scalar indices
-    output[:, ..., :] = params[indices, :, ..., :]
-
-    # Vector indices
-    output[i, :, ..., :] = params[indices[i], :, ..., :]
-
-    # Higher rank indices
-    output[i, ..., j, :, ..., :] = params[indices[i, ..., j], :, ..., :]
-```
-
-If `indices` is a permutation and `len(indices) == params.shape[0]` then
-this operation will permute `params` accordingly.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/Gather.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`params`</b>: A `Tensor`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-* <b>`validate_indices`</b>: An optional `bool`. Defaults to `True`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `params`.
-
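-A minimal sketch (the values are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-params = tf.constant([10, 20, 30, 40])
-indices = tf.constant([3, 0, 1])
-
-with tf.Session() as sess:
-  # output.shape == indices.shape + params.shape[1:] == [3]
-  print(sess.run(tf.gather(params, indices)))  # [40 10 20]
-```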
-
-- - -
-
-### `tf.gather_nd(params, indices, name=None)` {#gather_nd}
-
-Gather values or slices from `params` according to `indices`.
-
-`params` is a Tensor of rank `P` and `indices` is a Tensor of rank `Q`.
-
-`indices` must be an integer tensor containing indices into `params`.
-It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the first `K`
-dimensions of `params`.
-
-Produces an output tensor with shape
-
-```
-[d_0, ..., d_{Q-2}, params.shape[K], ..., params.shape[P-1]].
-```
-
-Some examples follow.
-
-Simple indexing into a matrix:
-
-```python
- indices = [[0, 0], [1, 1]]
- params = [['a', 'b'], ['c', 'd']]
- output = ['a', 'd']
-```
-
-Slice indexing into a matrix:
-
-```python
- indices = [[1], [0]]
- params = [['a', 'b'], ['c', 'd']]
- output = [['c', 'd'], ['a', 'b']]
-```
-
-Indexing into a 3-tensor:
-
-```python
- indices = [[1]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [[['a1', 'b1'], ['c1', 'd1']]]
-
-
- indices = [[0, 1], [1, 0]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [['c0', 'd0'], ['a1', 'b1']]
-
-
- indices = [[0, 0, 1], [1, 0, 1]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = ['b0', 'b1']
-```
-
-Batched indexing into a matrix:
-
-```python
- indices = [[[0, 0]], [[0, 1]]]
- params = [['a', 'b'], ['c', 'd']]
- output = [['a'], ['b']]
-```
-
-Batched slice indexing into a matrix:
-
-```python
- indices = [[[1]], [[0]]]
- params = [['a', 'b'], ['c', 'd']]
- output = [[['c', 'd']], [['a', 'b']]]
-```
-
-Batched indexing into a 3-tensor:
-
-```python
- indices = [[[1]], [[0]]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [[[['a1', 'b1'], ['c1', 'd1']]],
- [[['a0', 'b0'], ['c0', 'd0']]]]
-
- indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [[['c0', 'd0'], ['a1', 'b1']],
- [['a0', 'b0'], ['c1', 'd1']]]
-
-
- indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [['b0', 'b1'], ['d0', 'c1']]
-```
-
-##### Args:
-
-
-* <b>`params`</b>: A `Tensor`. `P-D`. The tensor from which to gather values.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- `Q-D`. Index tensor having shape `[d_0, ..., d_{Q-2}, K]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `params`.
- `(P+Q-K-1)-D`. Values from `params` gathered from indices given by
- `indices`.
-
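-A runnable version of the "simple indexing" example above (a sketch; note
-that string values come back as `bytes` in Python):
-
-```python
-import tensorflow as tf
-
-params = tf.constant([['a', 'b'], ['c', 'd']])
-indices = tf.constant([[0, 0], [1, 1]])
-
-with tf.Session() as sess:
-  print(sess.run(tf.gather_nd(params, indices)))  # [b'a' b'd']
-```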
-
-- - -
-
-### `tf.unique_with_counts(x, out_idx=None, name=None)` {#unique_with_counts}
-
-Finds unique elements in a 1-D tensor.
-
-This operation returns a tensor `y` containing all of the unique elements of `x`
-sorted in the same order that they occur in `x`. This operation also returns a
-tensor `idx` the same size as `x` that contains the index of each value of `x`
-in the unique output `y`. Finally, it returns a third tensor `count` that
-contains the count of each element of `y` in `x`. In other words:
-
-`y[idx[i]] = x[i] for i in [0, 1, ..., len(x) - 1]`
-
-For example:
-
-```prettyprint
-# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
-y, idx, count = unique_with_counts(x)
-y ==> [1, 2, 4, 7, 8]
-idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
-count ==> [2, 1, 3, 1, 2]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. 1-D.
-* <b>`out_idx`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (y, idx, count).
-
-* <b>`y`</b>: A `Tensor`. Has the same type as `x`. 1-D.
-* <b>`idx`</b>: A `Tensor` of type `out_idx`. 1-D.
-* <b>`count`</b>: A `Tensor` of type `out_idx`. 1-D.
-
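-The example above is directly runnable:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
-y, idx, count = tf.unique_with_counts(x)
-
-with tf.Session() as sess:
-  # y == [1 2 4 7 8], idx == [0 0 1 2 2 2 3 4 4], count == [2 1 3 1 2]
-  print(sess.run([y, idx, count]))
-```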
-
-- - -
-
-### `tf.scatter_nd(indices, updates, shape, name=None)` {#scatter_nd}
-
-Creates a new tensor by applying sparse `updates` to individual values or
-slices within a zero tensor of the given `shape` according to `indices`.
-This operator is the inverse of the [tf.gather_nd](#gather_nd) operator,
-which extracts values or slices from a given tensor.
-
-`shape` is a `TensorShape` with rank `P` and `indices` is a `Tensor` of rank
-`Q`.
-
-`indices` must be an integer tensor containing indices into `shape`.
-It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the first `K`
-dimensions of `shape`.
-
-`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
-
-```
-[d_0, ..., d_{Q-2}, shape[K], ..., shape[P-1]].
-```
-
-The simplest form of scatter is to insert individual elements in a tensor by
-index. For example, say we want to insert 4 scattered elements in a rank-1
-tensor with 8 elements.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterNd1.png" alt>
-</div>
-
-In Python, this scatter operation would look like this:
-
- indices = tf.constant([[4], [3], [1], [7]])
- updates = tf.constant([9, 10, 11, 12])
- shape = tf.constant([8])
- scatter = tf.scatter_nd(indices, updates, shape)
- with tf.Session() as sess:
-      print(sess.run(scatter))
-
-The resulting tensor would look like this:
-
- [0, 11, 0, 10, 9, 0, 0, 12]
-
-We can also insert entire slices of a higher rank tensor all at once. For
-example, we can insert two slices into the first dimension of a rank-3 tensor
-with two matrices of new values.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterNd2.png" alt>
-</div>
-
-In Python, this scatter operation would look like this:
-
- indices = tf.constant([[0], [2]])
- updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
- [7, 7, 7, 7], [8, 8, 8, 8]],
- [[5, 5, 5, 5], [6, 6, 6, 6],
- [7, 7, 7, 7], [8, 8, 8, 8]]])
- shape = tf.constant([4, 4, 4])
- scatter = tf.scatter_nd(indices, updates, shape)
- with tf.Session() as sess:
-      print(sess.run(scatter))
-
-The resulting tensor would look like this:
-
- [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
- [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
- [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
- [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
-
-##### Args:
-
-
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-    A tensor of indices into the output tensor.
-* <b>`updates`</b>: A `Tensor`.
-    A tensor of updated values to store at the given indices.
-* <b>`shape`</b>: A `Tensor`. Must have the same type as `indices`.
- A vector. The shape of the resulting tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `updates`.
- A new tensor with the given shape and updates applied according
- to the indices.
-
-
-- - -
-
-### `tf.dynamic_partition(data, partitions, num_partitions, name=None)` {#dynamic_partition}
-
-Partitions `data` into `num_partitions` tensors using indices from `partitions`.
-
-For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]`
-becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i`
-are placed in `outputs[i]` in lexicographic order of `js`, and the first
-dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`.
-In detail,
-
-```python
- outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]
-
- outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
-```
-
-`data.shape` must start with `partitions.shape`.
-
-For example:
-
-```python
- # Scalar partitions.
- partitions = 1
- num_partitions = 2
- data = [10, 20]
- outputs[0] = [] # Empty with shape [0, 2]
- outputs[1] = [[10, 20]]
-
- # Vector partitions.
- partitions = [0, 0, 1, 1, 0]
- num_partitions = 2
- data = [10, 20, 30, 40, 50]
- outputs[0] = [10, 20, 50]
- outputs[1] = [30, 40]
-```
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/DynamicPartition.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`.
-* <b>`partitions`</b>: A `Tensor` of type `int32`.
- Any shape. Indices in the range `[0, num_partitions)`.
-* <b>`num_partitions`</b>: An `int` that is `>= 1`.
- The number of partitions to output.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A list of `num_partitions` `Tensor` objects of the same type as data.
-
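-A runnable sketch of the vector-partition example above:
-
-```python
-import tensorflow as tf
-
-partitions = tf.constant([0, 0, 1, 1, 0])
-data = tf.constant([10, 20, 30, 40, 50])
-outputs = tf.dynamic_partition(data, partitions, num_partitions=2)
-
-with tf.Session() as sess:
-  # outputs[0] == [10 20 50], outputs[1] == [30 40]
-  print(sess.run(outputs))
-```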
-
-- - -
-
-### `tf.dynamic_stitch(indices, data, name=None)` {#dynamic_stitch}
-
-Interleave the values from the `data` tensors into a single tensor.
-
-Builds a merged tensor such that
-
-```python
- merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
-```
-
-For example, if each `indices[m]` is scalar or vector, we have
-
-```python
- # Scalar indices:
- merged[indices[m], ...] = data[m][...]
-
- # Vector indices:
- merged[indices[m][i], ...] = data[m][i, ...]
-```
-
-Each `data[i].shape` must start with the corresponding `indices[i].shape`,
-and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we
-must have `data[i].shape = indices[i].shape + constant`. In terms of this
-`constant`, the output shape is
-
-    merged.shape = [max(indices) + 1] + constant
-
-Values are merged in order, so if an index appears in both `indices[m][i]` and
-`indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the
-merged result.
-
-For example:
-
-```python
- indices[0] = 6
- indices[1] = [4, 1]
- indices[2] = [[5, 2], [0, 3]]
- data[0] = [61, 62]
- data[1] = [[41, 42], [11, 12]]
- data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
- merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
- [51, 52], [61, 62]]
-```
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/DynamicStitch.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`indices`</b>: A list of at least 1 `Tensor` objects of type `int32`.
-* <b>`data`</b>: A list with the same number of `Tensor` objects as `indices` of `Tensor` objects of the same type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
-
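-A minimal sketch (the indices and values are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-indices = [tf.constant([0, 2]), tf.constant([1, 3])]
-data = [tf.constant([10, 30]), tf.constant([20, 40])]
-merged = tf.dynamic_stitch(indices, data)
-
-with tf.Session() as sess:
-  print(sess.run(merged))  # [10 20 30 40]
-```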
-
-- - -
-
-### `tf.boolean_mask(tensor, mask, name='boolean_mask')` {#boolean_mask}
-
-Apply boolean mask to tensor. Numpy equivalent is `tensor[mask]`.
-
-```python
-# 1-D example
-tensor = [0, 1, 2, 3]
-mask = np.array([True, False, True, False])
-boolean_mask(tensor, mask) ==> [0, 2]
-```
-
-In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match
-the first K dimensions of `tensor`'s shape. We then have:
- `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`
-where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order).
-
-##### Args:
-
-
-* <b>`tensor`</b>: N-D tensor.
-* <b>`mask`</b>: K-D boolean tensor, K <= N and K must be known statically.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- (N-K+1)-dimensional tensor populated by entries in `tensor` corresponding
- to `True` values in `mask`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If shapes do not conform.
-
-
-##### Examples:
-
-```python
-# 2-D example
-tensor = [[1, 2], [3, 4], [5, 6]]
-mask = np.array([True, False, True])
-boolean_mask(tensor, mask) ==> [[1, 2], [5, 6]]
-```
-
-
-- - -
-
-### `tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)` {#one_hot}
-
-Returns a one-hot tensor.
-
-The locations represented by indices in `indices` take value `on_value`,
-while all other locations take value `off_value`.
-
-`on_value` and `off_value` must have matching data types. If `dtype` is also
-provided, they must be the same data type as specified by `dtype`.
-
-If `on_value` is not provided, it will default to the value `1` with type
-`dtype`.
-
-If `off_value` is not provided, it will default to the value `0` with type
-`dtype`.
-
-If the input `indices` is rank `N`, the output will have rank `N+1`. The
-new axis is created at dimension `axis` (default: the new axis is appended
-at the end).
-
-If `indices` is a scalar, the output shape will be a vector of length `depth`.
-
-If `indices` is a vector of length `features`, the output shape will be:
-
-```
- features x depth if axis == -1
- depth x features if axis == 0
-```
-
-If `indices` is a matrix (batch) with shape `[batch, features]`, the output
-shape will be:
-
-```
- batch x features x depth if axis == -1
- batch x depth x features if axis == 1
- depth x batch x features if axis == 0
-```
-
-If `dtype` is not provided, it will attempt to assume the data type of
-`on_value` or `off_value`, if one or both are passed in. If none of
-`on_value`, `off_value`, or `dtype` are provided, `dtype` will default to the
-value `tf.float32`.
-
-Note: If a non-numeric data type output is desired (`tf.string`, `tf.bool`,
-etc.), both `on_value` and `off_value` _must_ be provided to `one_hot`.
-
-Examples
-=========
-
-Suppose that
-
-```python
- indices = [0, 2, -1, 1]
- depth = 3
- on_value = 5.0
- off_value = 0.0
- axis = -1
-```
-
-Then output is `[4 x 3]`:
-
-```python
- output =
- [5.0 0.0 0.0] // one_hot(0)
- [0.0 0.0 5.0] // one_hot(2)
- [0.0 0.0 0.0] // one_hot(-1)
- [0.0 5.0 0.0] // one_hot(1)
-```
-
-Suppose that
-
-```python
- indices = [[0, 2], [1, -1]]
- depth = 3
- on_value = 1.0
- off_value = 0.0
- axis = -1
-```
-
-Then output is `[2 x 2 x 3]`:
-
-```python
- output =
- [
- [1.0, 0.0, 0.0] // one_hot(0)
- [0.0, 0.0, 1.0] // one_hot(2)
- ][
- [0.0, 1.0, 0.0] // one_hot(1)
- [0.0, 0.0, 0.0] // one_hot(-1)
- ]
-```
-
-Using default values for `on_value` and `off_value`:
-
-```python
- indices = [0, 1, 2]
- depth = 3
-```
-
-The output will be
-
-```python
- output =
- [[1., 0., 0.],
- [0., 1., 0.],
- [0., 0., 1.]]
-```
-
-##### Args:
-
-
-* <b>`indices`</b>: A `Tensor` of indices.
-* <b>`depth`</b>: A scalar defining the depth of the one hot dimension.
-* <b>`on_value`</b>: A scalar defining the value to fill in output when `indices[j]
- = i`. (default: 1)
-* <b>`off_value`</b>: A scalar defining the value to fill in output when `indices[j]
- != i`. (default: 0)
-* <b>`axis`</b>: The axis to fill (default: -1, a new inner-most axis).
-* <b>`dtype`</b>: The data type of the output tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`output`</b>: The one-hot tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the dtype of either `on_value` or `off_value` doesn't match `dtype`
-* <b>`TypeError`</b>: If the dtypes of `on_value` and `off_value` don't match one another
-
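-The first example above, as a runnable sketch:
-
-```python
-import tensorflow as tf
-
-output = tf.one_hot([0, 2, -1, 1], depth=3, on_value=5.0, off_value=0.0,
-                    axis=-1)
-
-with tf.Session() as sess:
-  print(sess.run(output))
-  # [[ 5.  0.  0.]
-  #  [ 0.  0.  5.]
-  #  [ 0.  0.  0.]
-  #  [ 0.  5.  0.]]
-```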
-
-- - -
-
-### `tf.sequence_mask(lengths, maxlen=None, dtype=tf.bool, name=None)` {#sequence_mask}
-
-Return a mask tensor representing the first N positions of each row.
-
-Example:
-
-```python
-tf.sequence_mask([1, 3, 2], 5) =
- [[True, False, False, False, False],
- [True, True, True, False, False],
- [True, True, False, False, False]]
-```
-
-##### Args:
-
-
-* <b>`lengths`</b>: 1D integer tensor, all its values < maxlen.
-* <b>`maxlen`</b>: scalar integer tensor, maximum length of each row. Default: use
- maximum over lengths.
-* <b>`dtype`</b>: output type of the resulting tensor.
-* <b>`name`</b>: name of the op.
-
-##### Returns:
-
- A 2D mask tensor, as shown in the example above, cast to specified dtype.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the arguments have invalid rank.
-
-
-- - -
-
-### `tf.dequantize(input, min_range, max_range, mode=None, name=None)` {#dequantize}
-
-Dequantize the 'input' tensor into a float Tensor.
-
-[min_range, max_range] are scalar floats that specify the range for
-the 'input' data. The 'mode' attribute controls exactly which calculations are
-used to convert the quantized values to their float equivalents.
-
-In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
-
-```
-if T == qint8, in[i] += (range(T) + 1) / 2.0
-out[i] = min_range + (in[i] * (max_range - min_range) / range(T))
-```
-here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`
-
-*MIN_COMBINED Mode Example*
-
-If the input comes from a QuantizedRelu6, the output type is
-quint8 (range of 0-255) but the possible range of QuantizedRelu6 is
-0-6. The min_range and max_range values are therefore 0.0 and 6.0.
-Dequantize on quint8 will take each value, cast to float, and multiply
-by 6 / 255.
-Note that if the quantized type is qint8, the operation will additionally add
-128 to each value prior to casting.
-
-If the mode is 'MIN_FIRST', then this approach is used:
-
-```
-number_of_steps = 1 << (# of bits in T)
-range_adjust = number_of_steps / (number_of_steps - 1)
-range = (range_max - range_min) * range_adjust
-range_scale = range / number_of_steps
-result = range_min + ((input - numeric_limits<T>::min()) * range_scale)
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
-* <b>`min_range`</b>: A `Tensor` of type `float32`.
- The minimum scalar value possibly produced for the input.
-* <b>`max_range`</b>: A `Tensor` of type `float32`.
- The maximum scalar value possibly produced for the input.
-* <b>`mode`</b>: An optional `string` from: `"MIN_COMBINED", "MIN_FIRST"`. Defaults to `"MIN_COMBINED"`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
-
-
-- - -
-
-### `tf.quantize_v2(input, min_range, max_range, T, mode=None, name=None)` {#quantize_v2}
-
-Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.
-
-[min_range, max_range] are scalar floats that specify the range for
-the 'input' data. The 'mode' attribute controls exactly which calculations are
-used to convert the float values to their quantized equivalents.
-
-In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
-
-```
-out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
-if T == qint8, out[i] -= (range(T) + 1) / 2.0
-```
-here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`
-
-*MIN_COMBINED Mode Example*
-
-Assume the input is type float and has a possible range of [0.0, 6.0] and the
-output type is quint8 ([0, 255]). The min_range and max_range values should be
-specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each
-value of the input by 255/6 and cast to quint8.
-
-If the output type was qint8 ([-128, 127]), the operation will additionally
-subtract 128 from each value prior to casting, so that the range of values
-aligns with the range of qint8.
-
-If the mode is 'MIN_FIRST', then this approach is used:
-
-```
-number_of_steps = 1 << (# of bits in T)
-range_adjust = number_of_steps / (number_of_steps - 1)
-range = (range_max - range_min) * range_adjust
-range_scale = number_of_steps / range
-quantized = round(input * range_scale) - round(range_min * range_scale) +
- numeric_limits<T>::min()
-quantized = max(quantized, numeric_limits<T>::min())
-quantized = min(quantized, numeric_limits<T>::max())
-```
-
-The biggest difference between this and MIN_COMBINED is that the minimum range
-is rounded first, before it's subtracted from the rounded value. With
-MIN_COMBINED, a small bias is introduced on each pass, so repeated iterations
-of quantizing and dequantizing accumulate a larger and larger error.
-
-One thing to watch out for is that the operator may choose to adjust the
-requested minimum and maximum values slightly during the quantization process,
-so you should always use the output ports as the range for further calculations.
-For example, if the requested minimum and maximum values are close to equal,
-they will be separated by a small epsilon value to prevent ill-formed quantized
-buffers from being created. Otherwise, you can end up with buffers where all the
-quantized values map to the same float value, which causes problems for
-operations that have to perform further calculations on them.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `float32`.
-* <b>`min_range`</b>: A `Tensor` of type `float32`.
- The minimum scalar value possibly produced for the input.
-* <b>`max_range`</b>: A `Tensor` of type `float32`.
- The maximum scalar value possibly produced for the input.
-* <b>`T`</b>: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.
-* <b>`mode`</b>: An optional `string` from: `"MIN_COMBINED", "MIN_FIRST"`. Defaults to `"MIN_COMBINED"`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, output_min, output_max).
-
-* <b>`output`</b>: A `Tensor` of type `T`. The quantized data produced from the float input.
-* <b>`output_min`</b>: A `Tensor` of type `float32`. The actual minimum scalar value used for the output.
-* <b>`output_max`</b>: A `Tensor` of type `float32`. The actual maximum scalar value used for the output.
-
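-A hedged round-trip sketch combining `tf.quantize_v2` and `tf.dequantize`
-(the values are illustrative; results are approximate due to quantization
-error):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([0.0, 3.0, 6.0])
-q, qmin, qmax = tf.quantize_v2(x, min_range=0.0, max_range=6.0, T=tf.quint8)
-# Use the actual output range for dequantization, as advised above.
-deq = tf.dequantize(q, qmin, qmax)
-
-with tf.Session() as sess:
-  print(sess.run(deq))  # approximately [0.0, 3.0, 6.0]
-```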
-
-- - -
-
-### `tf.quantized_concat(concat_dim, values, input_mins, input_maxes, name=None)` {#quantized_concat}
-
-Concatenates quantized tensors along one dimension.
-
-##### Args:
-
-
-* <b>`concat_dim`</b>: A `Tensor` of type `int32`.
- 0-D. The dimension along which to concatenate. Must be in the
- range [0, rank(values)).
-* <b>`values`</b>: A list of at least 2 `Tensor` objects of the same type.
- The `N` Tensors to concatenate. Their ranks and types must match,
- and their sizes must match in all dimensions except `concat_dim`.
-* <b>`input_mins`</b>: A list with the same number of `Tensor` objects as `values` of `Tensor` objects of type `float32`.
- The minimum scalar values for each of the input tensors.
-* <b>`input_maxes`</b>: A list with the same number of `Tensor` objects as `values` of `Tensor` objects of type `float32`.
- The maximum scalar values for each of the input tensors.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, output_min, output_max).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `values`. A `Tensor` with the concatenation of values stacked along the
- `concat_dim` dimension. This tensor's shape matches that of `values` except
- in `concat_dim` where it has the sum of the sizes.
-* <b>`output_min`</b>: A `Tensor` of type `float32`. The float value that the minimum quantized output value represents.
-* <b>`output_max`</b>: A `Tensor` of type `float32`. The float value that the maximum quantized output value represents.
-
-
-- - -
-
-### `tf.setdiff1d(x, y, index_dtype=tf.int32, name=None)` {#setdiff1d}
-
-Computes the difference between two lists of numbers or strings.
-
-Given a list `x` and a list `y`, this operation returns a list `out` that
-represents all values that are in `x` but not in `y`. The returned list `out`
-is sorted in the same order that the numbers appear in `x` (duplicates are
-preserved). This operation also returns a list `idx` that represents the
-position of each `out` element in `x`. In other words:
-
-`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`
-
-For example, given this input:
-
-```prettyprint
-x = [1, 2, 3, 4, 5, 6]
-y = [1, 3, 5]
-```
-
-This operation would return:
-
-```prettyprint
-out ==> [2, 4, 6]
-idx ==> [1, 3, 5]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. 1-D. Values to keep.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.
-* <b>`index_dtype`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (out, idx).
-
-* <b>`out`</b>: A `Tensor`. Has the same type as `x`. 1-D. Values present in `x` but not in `y`.
-* <b>`idx`</b>: A `Tensor` of type `index_dtype`. 1-D. Positions of `x` values preserved in `out`.
-
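-The example above is directly runnable:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1, 2, 3, 4, 5, 6])
-y = tf.constant([1, 3, 5])
-out, idx = tf.setdiff1d(x, y)
-
-with tf.Session() as sess:
-  print(sess.run([out, idx]))  # [2 4 6], [1 3 5]
-```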
-
-
-## Fake quantization
-Operations used to help train for better quantization accuracy.
-
-- - -
-
-### `tf.fake_quant_with_min_max_args(inputs, min=None, max=None, name=None)` {#fake_quant_with_min_max_args}
-
-Fake-quantize the 'inputs' tensor of type float to an 'outputs' tensor of the same type.
-
-Attributes [min; max] define the clamping range for the 'inputs' data. Op
-divides this range into 255 steps (total of 256 values), then replaces each
-'inputs' value with the closest of the quantized step values.
-
-Quantization is called fake since the output is still in floating point.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
-* <b>`min`</b>: An optional `float`. Defaults to `-6`.
-* <b>`max`</b>: An optional `float`. Defaults to `6`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
-
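-A short sketch (the input values are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([-10.0, 0.3, 8.0])
-y = tf.fake_quant_with_min_max_args(x, min=-6.0, max=6.0)
-
-with tf.Session() as sess:
-  # Values are clamped to [-6, 6] and snapped to the nearest of 256 steps;
-  # the result is still float32.
-  print(sess.run(y))
-```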
-
-- - -
-
-### `tf.fake_quant_with_min_max_args_gradient(gradients, inputs, min=None, max=None, name=None)` {#fake_quant_with_min_max_args_gradient}
-
-Compute gradients for a FakeQuantWithMinMaxArgs operation.
-
-##### Args:
-
-
-* <b>`gradients`</b>: A `Tensor` of type `float32`.
- Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
- Values passed as inputs to the FakeQuantWithMinMaxArgs operation.
-* <b>`min`</b>: An optional `float`. Defaults to `-6`.
-* <b>`max`</b>: An optional `float`. Defaults to `6`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
- Backpropagated gradients below the FakeQuantWithMinMaxArgs operation:
- `gradients * (inputs >= min && inputs <= max)`.
-
-
-- - -
-
-### `tf.fake_quant_with_min_max_vars(inputs, min, max, name=None)` {#fake_quant_with_min_max_vars}
-
-Fake-quantize the 'inputs' tensor of type float via global float scalars
-`min` and `max`, producing an 'outputs' tensor of the same shape as `inputs`.
-
-[min; max] is the clamping range for the 'inputs' data. Op divides this range
-into 255 steps (total of 256 values), then replaces each 'inputs' value with the
-closest of the quantized step values.
-
-This operation has a gradient and thus allows for training `min` and `max` values.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
-* <b>`min`</b>: A `Tensor` of type `float32`.
-* <b>`max`</b>: A `Tensor` of type `float32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
-
-
-- - -
-
-### `tf.fake_quant_with_min_max_vars_gradient(gradients, inputs, min, max, name=None)` {#fake_quant_with_min_max_vars_gradient}
-
-Compute gradients for a FakeQuantWithMinMaxVars operation.
-
-##### Args:
-
-
-* <b>`gradients`</b>: A `Tensor` of type `float32`.
- Backpropagated gradients above the FakeQuantWithMinMaxVars operation.
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
- Values passed as inputs to the FakeQuantWithMinMaxVars operation.
- min, max: Quantization interval, scalar floats.
-* <b>`min`</b>: A `Tensor` of type `float32`.
-* <b>`max`</b>: A `Tensor` of type `float32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).
-
-* <b>`backprops_wrt_input`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. inputs:
- `gradients * (inputs >= min && inputs <= max)`.
-* <b>`backprop_wrt_min`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. min parameter:
- `sum(gradients * (inputs < min))`.
-* <b>`backprop_wrt_max`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. max parameter:
- `sum(gradients * (inputs > max))`.
-
-
-- - -
-
-### `tf.fake_quant_with_min_max_vars_per_channel(inputs, min, max, name=None)` {#fake_quant_with_min_max_vars_per_channel}
-
-Fake-quantize the 'inputs' tensor of type float, with one of the shapes `[d]`,
-`[b, d]`, or `[b, h, w, d]`, via per-channel floats `min` and `max` of shape
-`[d]`, producing an 'outputs' tensor of the same shape as `inputs`.
-
-[min; max] is the clamping range for the 'inputs' data in the corresponding
-depth channel. Op divides this range into 255 steps (total of 256 values), then
-replaces each 'inputs' value with the closest of the quantized step values.
-
-This operation has a gradient and thus allows for training `min` and `max` values.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
-* <b>`min`</b>: A `Tensor` of type `float32`.
-* <b>`max`</b>: A `Tensor` of type `float32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
-
-
-- - -
-
-### `tf.fake_quant_with_min_max_vars_per_channel_gradient(gradients, inputs, min, max, name=None)` {#fake_quant_with_min_max_vars_per_channel_gradient}
-
-Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.
-
-##### Args:
-
-
-* <b>`gradients`</b>: A `Tensor` of type `float32`.
- Backpropagated gradients above the FakeQuantWithMinMaxVars operation,
- shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`.
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
- Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape
- same as `gradients`.
- min, max: Quantization interval, floats of shape `[d]`.
-* <b>`min`</b>: A `Tensor` of type `float32`.
-* <b>`max`</b>: A `Tensor` of type `float32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).
-
-* <b>`backprops_wrt_input`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. inputs, shape same as
- `inputs`:
- `gradients * (inputs >= min && inputs <= max)`.
-* <b>`backprop_wrt_min`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. min parameter, shape `[d]`:
- `sum_per_d(gradients * (inputs < min))`.
-* <b>`backprop_wrt_max`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. max parameter, shape `[d]`:
- `sum_per_d(gradients * (inputs > max))`.
-
-
-
-## Other Functions and Classes
-- - -
-
-### `tf.contrib.graph_editor.copy(sgv, dst_graph=None, dst_scope='', src_scope='', reuse_dst_scope=False)` {#copy}
-
-Copy a subgraph.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the source subgraph-view. This argument is converted to a subgraph
-    using the same rules as the function subgraph.make_view.
-* <b>`dst_graph`</b>: the destination graph.
-* <b>`dst_scope`</b>: the destination scope.
-* <b>`src_scope`</b>: the source scope.
-* <b>`reuse_dst_scope`</b>: if True, `dst_scope` is re-used if it already exists.
-    Otherwise, the scope is given a unique name based on the one given,
-    by appending an underscore followed by a digit (the default).
-
-##### Returns:
-
- A tuple `(sgv, info)` where:
- `sgv` is the transformed subgraph view;
- `info` is an instance of TransformerInfo containing
- information about the transform, including mapping between
- original and transformed tensors and operations.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `dst_graph` is not a `tf.Graph`.
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-    the same rules as the function subgraph.make_view.
-
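-A hedged usage sketch (the graph contents are illustrative; `ge` is assumed
-to be `tf.contrib.graph_editor`):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-with tf.Graph().as_default() as src_graph:
-  a = tf.constant(1.0, name="a")
-  b = tf.add(a, a, name="b")
-
-# Copy every op of the source graph into a fresh destination graph.
-sgv = ge.sgv(src_graph.get_operations())
-copied_sgv, info = ge.copy(sgv, dst_graph=tf.Graph(), dst_scope="copied")
-```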
-
diff --git a/tensorflow/g3doc/api_docs/python/check_ops.md b/tensorflow/g3doc/api_docs/python/check_ops.md
deleted file mode 100644
index 9eec5e20ad..0000000000
--- a/tensorflow/g3doc/api_docs/python/check_ops.md
+++ /dev/null
@@ -1,510 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Asserts and boolean checks.
-[TOC]
-
-## Asserts and Boolean Checks
-
-- - -
-
-### `tf.assert_negative(x, data=None, summarize=None, message=None, name=None)` {#assert_negative}
-
-Assert the condition `x < 0` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_negative(x)]):
- output = tf.reduce_sum(x)
-```
-
-Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`.
-If `x` is empty this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_negative".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` is all negative.
-
-
-- - -
-
-### `tf.assert_positive(x, data=None, summarize=None, message=None, name=None)` {#assert_positive}
-
-Assert the condition `x > 0` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_positive(x)]):
- output = tf.reduce_sum(x)
-```
-
-Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`.
-If `x` is empty this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_positive".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` is all positive.
-
-
-- - -
-
-### `tf.assert_proper_iterable(values)` {#assert_proper_iterable}
-
-Static assert that `values` is a "proper" iterable.
-
-`Ops` that expect iterables of `Tensor` can call this to validate input.
-Useful since `Tensor`, `ndarray`, and byte/text types are all iterables
-themselves.
-
-##### Args:
-
-
-* <b>`values`</b>: Object to be checked.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `values` is not iterable or is one of
- `Tensor`, `SparseTensor`, `np.array`, `tf.compat.bytes_or_text_types`.
-
-
-- - -
-
-### `tf.assert_non_negative(x, data=None, summarize=None, message=None, name=None)` {#assert_non_negative}
-
-Assert the condition `x >= 0` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_non_negative(x)]):
- output = tf.reduce_sum(x)
-```
-
-Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`.
-If `x` is empty this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional).
- Defaults to "assert_non_negative".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` is all non-negative.
-
-
-- - -
-
-### `tf.assert_non_positive(x, data=None, summarize=None, message=None, name=None)` {#assert_non_positive}
-
-Assert the condition `x <= 0` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_non_positive(x)]):
- output = tf.reduce_sum(x)
-```
-
-Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`.
-If `x` is empty this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional).
- Defaults to "assert_non_positive".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` is all non-positive.
-
-
-- - -
-
-### `tf.assert_equal(x, y, data=None, summarize=None, message=None, name=None)` {#assert_equal}
-
-Assert the condition `x == y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_equal(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] == y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_equal".
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x == y` is False.
-
-
-- - -
-
-### `tf.assert_integer(x, message=None, name=None)` {#assert_integer}
-
-Assert that `x` is of integer dtype.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_integer(x)]):
- output = tf.reduce_sum(x)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` whose basetype is integer and is not quantized.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_integer".
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x.dtype` is anything other than a non-quantized integer.
-
-##### Returns:
-
- A `no_op` that does nothing. Type can be determined statically.
-
-
-- - -
-
-### `tf.assert_less(x, y, data=None, summarize=None, message=None, name=None)` {#assert_less}
-
-Assert the condition `x < y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_less(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] < y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_less".
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x < y` is False.
-
-
-- - -
-
-### `tf.assert_less_equal(x, y, data=None, summarize=None, message=None, name=None)` {#assert_less_equal}
-
-Assert the condition `x <= y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_less_equal(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] <= y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_less_equal"
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x <= y` is False.
-
-
-- - -
-
-### `tf.assert_greater(x, y, data=None, summarize=None, message=None, name=None)` {#assert_greater}
-
-Assert the condition `x > y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_greater(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] > y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_greater".
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x > y` is False.
-
-
-- - -
-
-### `tf.assert_greater_equal(x, y, data=None, summarize=None, message=None, name=None)` {#assert_greater_equal}
-
-Assert the condition `x >= y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_greater_equal(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] >= y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to
- "assert_greater_equal"
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x >= y` is False.
-
-
-- - -
-
-### `tf.assert_rank(x, rank, data=None, summarize=None, message=None, name=None)` {#assert_rank}
-
-Assert `x` has rank equal to `rank`.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_rank(x, 2)]):
- output = tf.reduce_sum(x)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`rank`</b>: Scalar integer `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_rank".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` has specified rank.
- If static checks determine `x` has correct rank, a `no_op` is returned.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If static checks determine `x` has wrong rank.
-
-
-- - -
-
-### `tf.assert_rank_at_least(x, rank, data=None, summarize=None, message=None, name=None)` {#assert_rank_at_least}
-
-Assert `x` has rank equal to `rank` or higher.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_rank_at_least(x, 2)]):
- output = tf.reduce_sum(x)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`rank`</b>: Scalar `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional).
- Defaults to "assert_rank_at_least".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` has specified rank or higher.
- If static checks determine `x` has correct rank, a `no_op` is returned.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If static checks determine `x` has wrong rank.
-
-
-- - -
-
-### `tf.assert_type(tensor, tf_type, message=None, name=None)` {#assert_type}
-
-Statically asserts that the given `Tensor` is of the specified type.
-
-##### Args:
-
-
-* <b>`tensor`</b>: A tensorflow `Tensor`.
-* <b>`tf_type`</b>: A tensorflow type (`dtypes.float32`, `tf.int64`, `dtypes.bool`,
- etc).
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name to give this `Op`. Defaults to "assert_type".
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the tensor's data type doesn't match `tf_type`.
-
-##### Returns:
-
-  A `no_op` that does nothing, since the type check is performed statically.
-
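-A minimal usage sketch (assuming `x` is a `float32` tensor built earlier; the
-check itself happens at graph-construction time):
-
-```python
-with tf.control_dependencies([tf.assert_type(x, tf.float32)]):
-  output = tf.reduce_sum(x)
-```
-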
-
-- - -
-
-### `tf.is_non_decreasing(x, name=None)` {#is_non_decreasing}
-
-Returns `True` if `x` is non-decreasing.
-
-Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
-is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.
-If `x` has fewer than two elements, it is trivially non-decreasing.
-
-See also: `is_strictly_increasing`
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "is_non_decreasing".
-
-##### Returns:
-
- Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `x` is not a numeric tensor.
-
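-For example, a minimal sketch:
-
-```python
-x = tf.constant([1, 2, 2, 3])
-result = tf.is_non_decreasing(x)  # True: every adjacent pair satisfies x[i] <= x[i+1]
-```
-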
-
-- - -
-
-### `tf.is_numeric_tensor(tensor)` {#is_numeric_tensor}
-
-
-
-
-- - -
-
-### `tf.is_strictly_increasing(x, name=None)` {#is_strictly_increasing}
-
-Returns `True` if `x` is strictly increasing.
-
-Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
-is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.
-If `x` has fewer than two elements, it is trivially strictly increasing.
-
-See also: `is_non_decreasing`
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`name`</b>: A name for this operation (optional).
-  Defaults to "is_strictly_increasing".
-
-##### Returns:
-
- Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `x` is not a numeric tensor.
-
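-For example, a minimal sketch:
-
-```python
-x = tf.constant([1, 2, 2, 3])
-result = tf.is_strictly_increasing(x)  # False: the adjacent pair (2, 2) fails x[i] < x[i+1]
-```
-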
-
diff --git a/tensorflow/g3doc/api_docs/python/client.md b/tensorflow/g3doc/api_docs/python/client.md
deleted file mode 100644
index 19c5b269d5..0000000000
--- a/tensorflow/g3doc/api_docs/python/client.md
+++ /dev/null
@@ -1,1199 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Running Graphs
-[TOC]
-
-This library contains classes for launching graphs and executing operations.
-
-The [basic usage](../../get_started/index.md#basic-usage) guide has
-examples of how a graph is launched in a [`tf.Session`](#Session).
-
-## Session management
-
-- - -
-
-### `class tf.Session` {#Session}
-
-A class for running TensorFlow operations.
-
-A `Session` object encapsulates the environment in which `Operation`
-objects are executed, and `Tensor` objects are evaluated. For
-example:
-
-```python
-# Build a graph.
-a = tf.constant(5.0)
-b = tf.constant(6.0)
-c = a * b
-
-# Launch the graph in a session.
-sess = tf.Session()
-
-# Evaluate the tensor `c`.
-print(sess.run(c))
-```
-
-A session may own resources, such as
-[variables](../../api_docs/python/state_ops.md#Variable), [queues](../../api_docs/python/io_ops.md#QueueBase),
-and [readers](../../api_docs/python/io_ops.md#ReaderBase). It is important to release
-these resources when they are no longer required. To do this, either
-invoke the [`close()`](#Session.close) method on the session, or use
-the session as a context manager. The following two examples are
-equivalent:
-
-```python
-# Using the `close()` method.
-sess = tf.Session()
-sess.run(...)
-sess.close()
-
-# Using the context manager.
-with tf.Session() as sess:
- sess.run(...)
-```
-
-The [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
-protocol buffer exposes various configuration options for a
-session. For example, to create a session that uses soft constraints
-for device placement, and log the resulting placement decisions,
-create a session as follows:
-
-```python
-# Launch the graph in a session that allows soft device placement and
-# logs the placement decisions.
-sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
- log_device_placement=True))
-```
-- - -
-
-#### `tf.Session.__del__()` {#Session.__del__}
-
-
-
-
-- - -
-
-#### `tf.Session.__enter__()` {#Session.__enter__}
-
-
-
-
-- - -
-
-#### `tf.Session.__exit__(exec_type, exec_value, exec_tb)` {#Session.__exit__}
-
-
-
-
-- - -
-
-#### `tf.Session.__init__(target='', graph=None, config=None)` {#Session.__init__}
-
-Creates a new TensorFlow session.
-
-If no `graph` argument is specified when constructing the session,
-the default graph will be launched in the session. If you are
-using more than one graph (created with `tf.Graph()`) in the same
-process, you will have to use different sessions for each graph,
-but each graph can be used in multiple sessions. In this case, it
-is often clearer to pass the graph to be launched explicitly to
-the session constructor.
-
-##### Args:
-
-
-* <b>`target`</b>: (Optional.) The execution engine to connect to.
- Defaults to using an in-process engine. See
- [Distributed Tensorflow](https://www.tensorflow.org/how_tos/distributed/index.html)
- for more examples.
-* <b>`graph`</b>: (Optional.) The `Graph` to be launched (described above).
-* <b>`config`</b>: (Optional.) A [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
- protocol buffer with configuration options for the session.
-
-
-- - -
-
-#### `tf.Session.as_default()` {#Session.as_default}
-
-Returns a context manager that makes this object the default session.
-
-Use with the `with` keyword to specify that calls to
-[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
-[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
-executed in this session.
-
-```python
-c = tf.constant(...)
-sess = tf.Session()
-
-with sess.as_default():
- assert tf.get_default_session() is sess
- print(c.eval())
-```
-
-To get the current default session, use
-[`tf.get_default_session()`](#get_default_session).
-
-
-*N.B.* The `as_default` context manager *does not* close the
-session when you exit the context, and you must close the session
-explicitly.
-
-```python
-c = tf.constant(...)
-sess = tf.Session()
-with sess.as_default():
- print(c.eval())
-# ...
-with sess.as_default():
- print(c.eval())
-
-sess.close()
-```
-
-Alternatively, you can use `with tf.Session():` to create a
-session that is automatically closed on exiting the context,
-including when an uncaught exception is raised.
-
-*N.B.* The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default session in that
-thread, you must explicitly add a `with sess.as_default():` in that
-thread's function.
-
-##### Returns:
-
- A context manager using this session as the default session.
-
-
-- - -
-
-#### `tf.Session.close()` {#Session.close}
-
-Closes this session.
-
-Calling this method frees all resources associated with the session.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- closing the TensorFlow session.
-
-
-- - -
-
-#### `tf.Session.graph` {#Session.graph}
-
-The graph that was launched in this session.
-
-
-- - -
-
-#### `tf.Session.graph_def` {#Session.graph_def}
-
-A serializable version of the underlying TensorFlow graph.
-
-##### Returns:
-
- A graph_pb2.GraphDef proto containing nodes for all of the Operations in
- the underlying TensorFlow graph.
-
-
-- - -
-
-#### `tf.Session.partial_run(handle, fetches, feed_dict=None)` {#Session.partial_run}
-
-Continues the execution with more feeds and fetches.
-
-This is EXPERIMENTAL and subject to change.
-
-To use partial execution, a user first calls `partial_run_setup()` and
-then a sequence of `partial_run()`. `partial_run_setup` specifies the
-list of feeds and fetches that will be used in the subsequent
-`partial_run` calls.
-
-The optional `feed_dict` argument allows the caller to override
-the value of tensors in the graph. See run() for more information.
-
-Below is a simple example:
-
-```python
-a = array_ops.placeholder(dtypes.float32, shape=[])
-b = array_ops.placeholder(dtypes.float32, shape=[])
-c = array_ops.placeholder(dtypes.float32, shape=[])
-r1 = math_ops.add(a, b)
-r2 = math_ops.multiply(r1, c)
-
-h = sess.partial_run_setup([r1, r2], [a, b, c])
-res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
-res = sess.partial_run(h, r2, feed_dict={c: res})
-```
-
-##### Args:
-
-
-* <b>`handle`</b>: A handle for a sequence of partial runs.
-* <b>`fetches`</b>: A single graph element, a list of graph elements,
- or a dictionary whose values are graph elements or lists of graph
- elements (see documentation for `run`).
-* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
- (described above).
-
-##### Returns:
-
- Either a single value if `fetches` is a single graph element, or
- a list of values if `fetches` is a list, or a dictionary with the
- same keys as `fetches` if that is a dictionary
- (see documentation for `run`).
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses on error.
-
-
-- - -
-
-#### `tf.Session.partial_run_setup(fetches, feeds=None)` {#Session.partial_run_setup}
-
-Sets up a graph with feeds and fetches for partial run.
-
-This is EXPERIMENTAL and subject to change.
-
-Note that, in contrast to `run`, `feeds` only specifies the graph elements;
-their values will be supplied by the subsequent `partial_run` calls.
-
-##### Args:
-
-
-* <b>`fetches`</b>: A single graph element, or a list of graph elements.
-* <b>`feeds`</b>: A single graph element, or a list of graph elements.
-
-##### Returns:
-
- A handle for partial run.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
- closed).
-* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
- tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
-
-
-- - -
-
-#### `tf.Session.reset(target, containers=None, config=None)` {#Session.reset}
-
-Resets resource containers on `target`, and closes all connected sessions.
-
-A resource container is distributed across all workers in the
-same cluster as `target`. When a resource container on `target`
-is reset, resources associated with that container will be cleared.
-In particular, all Variables in the container will become undefined:
-they lose their values and shapes.
-
-NOTE:
-(i) reset() is currently only implemented for distributed sessions.
-(ii) Any sessions on the master named by `target` will be closed.
-
-If no resource containers are provided, all containers are reset.
-
-##### Args:
-
-
-* <b>`target`</b>: The execution engine to connect to.
-* <b>`containers`</b>: A list of resource container name strings, or `None`
-  if all the containers are to be reset.
-* <b>`config`</b>: (Optional.) Protocol buffer with configuration options.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- resetting containers.
-
-
-- - -
-
-#### `tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#Session.run}
-
-Runs operations and evaluates tensors in `fetches`.
-
-This method runs one "step" of TensorFlow computation, by
-running the necessary graph fragment to execute every `Operation`
-and evaluate every `Tensor` in `fetches`, substituting the values in
-`feed_dict` for the corresponding input values.
-
-The `fetches` argument may be a single graph element, or an arbitrarily
-nested list, tuple, namedtuple, dict, or OrderedDict containing graph
-elements at its leaves. A graph element can be one of the following types:
-
-* An [`Operation`](../../api_docs/python/framework.md#Operation).
- The corresponding fetched value will be `None`.
-* A [`Tensor`](../../api_docs/python/framework.md#Tensor).
- The corresponding fetched value will be a numpy ndarray containing the
- value of that tensor.
-* A [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor).
- The corresponding fetched value will be a
- [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue)
- containing the value of that sparse tensor.
-* A `get_tensor_handle` op. The corresponding fetched value will be a
- numpy ndarray containing the handle of that tensor.
-* A `string` which is the name of a tensor or operation in the graph.
-
-The value returned by `run()` has the same shape as the `fetches` argument,
-where the leaves are replaced by the corresponding values returned by
-TensorFlow.
-
-Example:
-
-```python
- a = tf.constant([10, 20])
- b = tf.constant([1.0, 2.0])
- # 'fetches' can be a singleton
- v = session.run(a)
- # v is the numpy array [10, 20]
- # 'fetches' can be a list.
- v = session.run([a, b])
- # v is a Python list with 2 numpy arrays: the numpy array [10, 20] and the
- # 1-D array [1.0, 2.0]
- # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
- MyData = collections.namedtuple('MyData', ['a', 'b'])
- v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
- # v is a dict where:
- # v['k1'] is a MyData namedtuple with 'a' set to the numpy array [10, 20]
- # and 'b' set to the numpy array [1.0, 2.0];
- # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
- # [10, 20].
-```
-
-The optional `feed_dict` argument allows the caller to override
-the value of tensors in the graph. Each key in `feed_dict` can be
-one of the following types:
-
-* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the
- value may be a Python scalar, string, list, or numpy ndarray
- that can be converted to the same `dtype` as that
- tensor. Additionally, if the key is a
- [placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of
- the value will be checked for compatibility with the placeholder.
-* If the key is a
- [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
- the value should be a
- [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue).
-* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value
- should be a nested tuple with the same structure that maps to their
- corresponding values as above.
-
-Each value in `feed_dict` must be convertible to a numpy array of the dtype
-of the corresponding key.
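-
-For example, a minimal sketch of feeding a placeholder at run time:
-
-```python
-x = tf.placeholder(tf.float32, shape=[2])
-y = x * 2.0
-with tf.Session() as sess:
-  print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # prints [2. 4.]
-```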
-
-The optional `options` argument expects a [`RunOptions`] proto. The options
-allow controlling the behavior of this particular step (e.g. turning tracing
-on).
-
-The optional `run_metadata` argument expects a [`RunMetadata`] proto. When
-appropriate, the non-Tensor output of this step will be collected there. For
-example, when users turn on tracing in `options`, the profiled info will be
-collected into this argument and passed back.
-
-##### Args:
-
-
-* <b>`fetches`</b>: A single graph element, a list of graph elements,
- or a dictionary whose values are graph elements or lists of graph
- elements (described above).
-* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
- (described above).
-* <b>`options`</b>: A [`RunOptions`] protocol buffer
-* <b>`run_metadata`</b>: A [`RunMetadata`] protocol buffer
-
-##### Returns:
-
- Either a single value if `fetches` is a single graph element, or
- a list of values if `fetches` is a list, or a dictionary with the
- same keys as `fetches` if that is a dictionary (described above).
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
- closed).
-* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
-* <b>`ValueError`</b>: If `fetches` or `feed_dict` keys are invalid or refer to a
- `Tensor` that doesn't exist.
-
-
-- - -
-
-#### `tf.Session.sess_str` {#Session.sess_str}
-
-
-
-
-
-- - -
-
-### `class tf.InteractiveSession` {#InteractiveSession}
-
-A TensorFlow `Session` for use in interactive contexts, such as a shell.
-
-The only difference from a regular `Session` is that an `InteractiveSession`
-installs itself as the default session on construction.
-The methods [`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval)
-and [`Operation.run()`](../../api_docs/python/framework.md#Operation.run)
-will use that session to run ops.
-
-This is convenient in interactive shells and [IPython
-notebooks](http://ipython.org), as it avoids having to pass an explicit
-`Session` object to run ops.
-
-For example:
-
-```python
-sess = tf.InteractiveSession()
-a = tf.constant(5.0)
-b = tf.constant(6.0)
-c = a * b
-# We can just use 'c.eval()' without passing 'sess'
-print(c.eval())
-sess.close()
-```
-
-Note that a regular session installs itself as the default session when it
-is created in a `with` statement. The common usage in non-interactive
-programs is to follow that pattern:
-
-```python
-a = tf.constant(5.0)
-b = tf.constant(6.0)
-c = a * b
-with tf.Session():
- # We can also use 'c.eval()' here.
- print(c.eval())
-```
-- - -
-
-#### `tf.InteractiveSession.__del__()` {#InteractiveSession.__del__}
-
-
-
-
-- - -
-
-#### `tf.InteractiveSession.__init__(target='', graph=None, config=None)` {#InteractiveSession.__init__}
-
-Creates a new interactive TensorFlow session.
-
-If no `graph` argument is specified when constructing the session,
-the default graph will be launched in the session. If you are
-using more than one graph (created with `tf.Graph()`) in the same
-process, you will have to use different sessions for each graph,
-but each graph can be used in multiple sessions. In this case, it
-is often clearer to pass the graph to be launched explicitly to
-the session constructor.
-
-##### Args:
-
-
-* <b>`target`</b>: (Optional.) The execution engine to connect to.
- Defaults to using an in-process engine.
-* <b>`graph`</b>: (Optional.) The `Graph` to be launched (described above).
-* <b>`config`</b>: (Optional) `ConfigProto` proto used to configure the session.
-
-
-- - -
-
-#### `tf.InteractiveSession.as_default()` {#InteractiveSession.as_default}
-
-Returns a context manager that makes this object the default session.
-
-Use with the `with` keyword to specify that calls to
-[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
-[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
-executed in this session.
-
-```python
-c = tf.constant(...)
-sess = tf.Session()
-
-with sess.as_default():
- assert tf.get_default_session() is sess
- print(c.eval())
-```
-
-To get the current default session, use
-[`tf.get_default_session()`](#get_default_session).
-
-
-*N.B.* The `as_default` context manager *does not* close the
-session when you exit the context, and you must close the session
-explicitly.
-
-```python
-c = tf.constant(...)
-sess = tf.Session()
-with sess.as_default():
- print(c.eval())
-# ...
-with sess.as_default():
- print(c.eval())
-
-sess.close()
-```
-
-Alternatively, you can use `with tf.Session():` to create a
-session that is automatically closed on exiting the context,
-including when an uncaught exception is raised.
-
-*N.B.* The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default session in that
-thread, you must explicitly add a `with sess.as_default():` in that
-thread's function.
-
-##### Returns:
-
- A context manager using this session as the default session.
-
-
-- - -
-
-#### `tf.InteractiveSession.close()` {#InteractiveSession.close}
-
-Closes an `InteractiveSession`.
-
-
-- - -
-
-#### `tf.InteractiveSession.graph` {#InteractiveSession.graph}
-
-The graph that was launched in this session.
-
-
-- - -
-
-#### `tf.InteractiveSession.graph_def` {#InteractiveSession.graph_def}
-
-A serializable version of the underlying TensorFlow graph.
-
-##### Returns:
-
- A graph_pb2.GraphDef proto containing nodes for all of the Operations in
- the underlying TensorFlow graph.
-
-
-- - -
-
-#### `tf.InteractiveSession.partial_run(handle, fetches, feed_dict=None)` {#InteractiveSession.partial_run}
-
-Continues the execution with more feeds and fetches.
-
-This is EXPERIMENTAL and subject to change.
-
-To use partial execution, a user first calls `partial_run_setup()` and
-then a sequence of `partial_run()`. `partial_run_setup` specifies the
-list of feeds and fetches that will be used in the subsequent
-`partial_run` calls.
-
-The optional `feed_dict` argument allows the caller to override
-the value of tensors in the graph. See run() for more information.
-
-Below is a simple example:
-
-```python
-a = array_ops.placeholder(dtypes.float32, shape=[])
-b = array_ops.placeholder(dtypes.float32, shape=[])
-c = array_ops.placeholder(dtypes.float32, shape=[])
-r1 = math_ops.add(a, b)
-r2 = math_ops.multiply(r1, c)
-
-h = sess.partial_run_setup([r1, r2], [a, b, c])
-res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
-res = sess.partial_run(h, r2, feed_dict={c: res})
-```
-
-##### Args:
-
-
-* <b>`handle`</b>: A handle for a sequence of partial runs.
-* <b>`fetches`</b>: A single graph element, a list of graph elements,
- or a dictionary whose values are graph elements or lists of graph
- elements (see documentation for `run`).
-* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
- (described above).
-
-##### Returns:
-
- Either a single value if `fetches` is a single graph element, or
- a list of values if `fetches` is a list, or a dictionary with the
- same keys as `fetches` if that is a dictionary
- (see documentation for `run`).
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses on error.
-
-
-- - -
-
-#### `tf.InteractiveSession.partial_run_setup(fetches, feeds=None)` {#InteractiveSession.partial_run_setup}
-
-Sets up a graph with feeds and fetches for partial run.
-
-This is EXPERIMENTAL and subject to change.
-
-Note that, in contrast to `run`, `feeds` only specifies the graph elements;
-their values will be supplied by the subsequent `partial_run` calls.
-
-##### Args:
-
-
-* <b>`fetches`</b>: A single graph element, or a list of graph elements.
-* <b>`feeds`</b>: A single graph element, or a list of graph elements.
-
-##### Returns:
-
- A handle for partial run.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
- closed).
-* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
- tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
-
-
-- - -
-
-#### `tf.InteractiveSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#InteractiveSession.run}
-
-Runs operations and evaluates tensors in `fetches`.
-
-This method runs one "step" of TensorFlow computation, by
-running the necessary graph fragment to execute every `Operation`
-and evaluate every `Tensor` in `fetches`, substituting the values in
-`feed_dict` for the corresponding input values.
-
-The `fetches` argument may be a single graph element, or an arbitrarily
-nested list, tuple, namedtuple, dict, or OrderedDict containing graph
-elements at its leaves. A graph element can be one of the following types:
-
-* An [`Operation`](../../api_docs/python/framework.md#Operation).
- The corresponding fetched value will be `None`.
-* A [`Tensor`](../../api_docs/python/framework.md#Tensor).
- The corresponding fetched value will be a numpy ndarray containing the
- value of that tensor.
-* A [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor).
- The corresponding fetched value will be a
- [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue)
- containing the value of that sparse tensor.
-* A `get_tensor_handle` op. The corresponding fetched value will be a
- numpy ndarray containing the handle of that tensor.
-* A `string` which is the name of a tensor or operation in the graph.
-
-The value returned by `run()` has the same shape as the `fetches` argument,
-where the leaves are replaced by the corresponding values returned by
-TensorFlow.
-
-Example:
-
-```python
- a = tf.constant([10, 20])
- b = tf.constant([1.0, 2.0])
- # 'fetches' can be a singleton
- v = session.run(a)
- # v is the numpy array [10, 20]
- # 'fetches' can be a list.
- v = session.run([a, b])
- # v is a Python list with 2 numpy arrays: the numpy array [10, 20] and the
- # 1-D array [1.0, 2.0]
- # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
- MyData = collections.namedtuple('MyData', ['a', 'b'])
- v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
- # v is a dict where:
- # v['k1'] is a MyData namedtuple with 'a' set to the numpy array [10, 20]
- # and 'b' set to the numpy array [1.0, 2.0];
- # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
- # [10, 20].
-```
-
-The optional `feed_dict` argument allows the caller to override
-the value of tensors in the graph. Each key in `feed_dict` can be
-one of the following types:
-
-* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the
- value may be a Python scalar, string, list, or numpy ndarray
- that can be converted to the same `dtype` as that
- tensor. Additionally, if the key is a
- [placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of
- the value will be checked for compatibility with the placeholder.
-* If the key is a
- [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
- the value should be a
- [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue).
-* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value
- should be a nested tuple with the same structure that maps to their
- corresponding values as above.
-
-Each value in `feed_dict` must be convertible to a numpy array of the dtype
-of the corresponding key.
-
-The optional `options` argument expects a [`RunOptions`] proto. The options
-allow controlling the behavior of this particular step (e.g. turning tracing
-on).
-
-The optional `run_metadata` argument expects a [`RunMetadata`] proto. When
-appropriate, the non-Tensor output of this step will be collected there. For
-example, when users turn on tracing in `options`, the profiled info will be
-collected into this argument and passed back.
-
-##### Args:
-
-
-* <b>`fetches`</b>: A single graph element, a list of graph elements,
- or a dictionary whose values are graph elements or lists of graph
- elements (described above).
-* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
- (described above).
-* <b>`options`</b>: A [`RunOptions`] protocol buffer
-* <b>`run_metadata`</b>: A [`RunMetadata`] protocol buffer
-
-##### Returns:
-
- Either a single value if `fetches` is a single graph element, or
- a list of values if `fetches` is a list, or a dictionary with the
- same keys as `fetches` if that is a dictionary (described above).
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
- closed).
-* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
-* <b>`ValueError`</b>: If `fetches` or `feed_dict` keys are invalid or refer to a
- `Tensor` that doesn't exist.
-
-
-- - -
-
-#### `tf.InteractiveSession.sess_str` {#InteractiveSession.sess_str}
-
-
-
-
-
-
-- - -
-
-### `tf.get_default_session()` {#get_default_session}
-
-Returns the default session for the current thread.
-
-The returned `Session` will be the innermost session on which a
-`Session` or `Session.as_default()` context has been entered.
-
-NOTE: The default session is a property of the current thread. If you
-create a new thread, and wish to use the default session in that
-thread, you must explicitly add a `with sess.as_default():` in that
-thread's function.
-
-##### Returns:
-
- The default `Session` being used in the current thread.
-
-
-
-## Error classes and convenience functions
-
-- - -
-
-### `class tf.OpError` {#OpError}
-
-A generic error that is raised when TensorFlow execution fails.
-
-Whenever possible, the session will raise a more specific subclass
-of `OpError` from the `tf.errors` module.
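-
-A minimal sketch of handling such an error (assuming `sess` and `fetch` were
-built earlier):
-
-```python
-try:
-  result = sess.run(fetch)
-except tf.errors.OpError as e:
-  print(e.message)
-```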
-- - -
-
-#### `tf.OpError.__init__(node_def, op, message, error_code)` {#OpError.__init__}
-
-Creates a new `OpError` indicating that a particular op failed.
-
-##### Args:
-
-
-* <b>`node_def`</b>: The `node_def_pb2.NodeDef` proto representing the op that
- failed, if known; otherwise None.
-* <b>`op`</b>: The `ops.Operation` that failed, if known; otherwise None.
-* <b>`message`</b>: The message string describing the failure.
-* <b>`error_code`</b>: The `error_codes_pb2.Code` describing the error.
-
-
-- - -
-
-#### `tf.OpError.__str__()` {#OpError.__str__}
-
-
-
-
-- - -
-
-#### `tf.OpError.error_code` {#OpError.error_code}
-
-The integer error code that describes the error.
-
-
-- - -
-
-#### `tf.OpError.message` {#OpError.message}
-
-The error message that describes the error.
-
-
-- - -
-
-#### `tf.OpError.node_def` {#OpError.node_def}
-
-The `NodeDef` proto representing the op that failed.
-
-
-- - -
-
-#### `tf.OpError.op` {#OpError.op}
-
-The operation that failed, if known.
-
-*N.B.* If the failed op was synthesized at runtime, e.g. a `Send`
-or `Recv` op, there will be no corresponding
-[`Operation`](../../api_docs/python/framework.md#Operation)
-object. In that case, this will return `None`, and you should
-instead use the [`OpError.node_def`](#OpError.node_def) to
-discover information about the op.
-
-##### Returns:
-
- The `Operation` that failed, or None.
-
-
-
-- - -
-
-### `class tf.errors.CancelledError` {#CancelledError}
-
-Raised when an operation or step is cancelled.
-
-For example, a long-running operation (e.g.
-[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue)) may be
-cancelled by running another operation (e.g.
-[`queue.close(cancel_pending_enqueues=True)`](../../api_docs/python/io_ops.md#QueueBase.close)),
-or by [closing the session](../../api_docs/python/client.md#Session.close).
-A step that is running such a long-running operation will fail by raising
-`CancelledError`.
-
-- - -
-
-#### `tf.errors.CancelledError.__init__(node_def, op, message)` {#CancelledError.__init__}
-
-Creates a `CancelledError`.
-
-
-
-- - -
-
-### `class tf.errors.UnknownError` {#UnknownError}
-
-Unknown error.
-
-An example of where this error may be returned is if a Status value
-received from another address space belongs to an error-space that
-is not known to this address space. Also errors raised by APIs that
-do not return enough error information may be converted to this
-error.
-
-- - -
-
-#### `tf.errors.UnknownError.__init__(node_def, op, message, error_code=2)` {#UnknownError.__init__}
-
-Creates an `UnknownError`.
-
-
-
-- - -
-
-### `class tf.errors.InvalidArgumentError` {#InvalidArgumentError}
-
-Raised when an operation receives an invalid argument.
-
-This may occur, for example, if an operation receives an input
-tensor that has an invalid value or shape. For example, the
-[`tf.matmul()`](../../api_docs/python/math_ops.md#matmul) op will raise this
-error if it receives an input that is not a matrix, and the
-[`tf.reshape()`](../../api_docs/python/array_ops.md#reshape) op will raise
-this error if the new shape does not match the number of elements in the input
-tensor.
-
-- - -
-
-#### `tf.errors.InvalidArgumentError.__init__(node_def, op, message)` {#InvalidArgumentError.__init__}
-
-Creates an `InvalidArgumentError`.
-
-
-
-- - -
-
-### `class tf.errors.DeadlineExceededError` {#DeadlineExceededError}
-
-Raised when a deadline expires before an operation could complete.
-
-This exception is not currently used.
-
-- - -
-
-#### `tf.errors.DeadlineExceededError.__init__(node_def, op, message)` {#DeadlineExceededError.__init__}
-
-Creates a `DeadlineExceededError`.
-
-
-
-- - -
-
-### `class tf.errors.NotFoundError` {#NotFoundError}
-
-Raised when a requested entity (e.g., a file or directory) was not found.
-
-For example, running the
-[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader)
-operation could raise `NotFoundError` if it receives the name of a file that
-does not exist.
-
-- - -
-
-#### `tf.errors.NotFoundError.__init__(node_def, op, message)` {#NotFoundError.__init__}
-
-Creates a `NotFoundError`.
-
-
-
-- - -
-
-### `class tf.errors.AlreadyExistsError` {#AlreadyExistsError}
-
-Raised when an entity that we attempted to create already exists.
-
-For example, running an operation that saves a file
-(e.g. [`tf.train.Saver.save()`](../../api_docs/python/train.md#Saver.save))
-could potentially raise this exception if an explicit filename for an
-existing file was passed.
-
-- - -
-
-#### `tf.errors.AlreadyExistsError.__init__(node_def, op, message)` {#AlreadyExistsError.__init__}
-
-Creates an `AlreadyExistsError`.
-
-
-
-- - -
-
-### `class tf.errors.PermissionDeniedError` {#PermissionDeniedError}
-
-Raised when the caller does not have permission to run an operation.
-
-For example, running the
-[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader)
-operation could raise `PermissionDeniedError` if it receives the name of a
-file for which the user does not have read permission.
-
-- - -
-
-#### `tf.errors.PermissionDeniedError.__init__(node_def, op, message)` {#PermissionDeniedError.__init__}
-
-Creates a `PermissionDeniedError`.
-
-
-
-- - -
-
-### `class tf.errors.UnauthenticatedError` {#UnauthenticatedError}
-
-The request does not have valid authentication credentials.
-
-This exception is not currently used.
-
-- - -
-
-#### `tf.errors.UnauthenticatedError.__init__(node_def, op, message)` {#UnauthenticatedError.__init__}
-
-Creates an `UnauthenticatedError`.
-
-
-
-- - -
-
-### `class tf.errors.ResourceExhaustedError` {#ResourceExhaustedError}
-
-Some resource has been exhausted.
-
-For example, this error might be raised if a per-user quota is
-exhausted, or perhaps the entire file system is out of space.
-
-- - -
-
-#### `tf.errors.ResourceExhaustedError.__init__(node_def, op, message)` {#ResourceExhaustedError.__init__}
-
-Creates a `ResourceExhaustedError`.
-
-
-
-- - -
-
-### `class tf.errors.FailedPreconditionError` {#FailedPreconditionError}
-
-Operation was rejected because the system is not in a state to execute it.
-
-This exception is most commonly raised when running an operation
-that reads a [`tf.Variable`](../../api_docs/python/state_ops.md#Variable)
-before it has been initialized.
-
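-A minimal sketch that triggers it (the variable is read before any
-initializer has been run):
-
-```python
-v = tf.Variable([1.0])
-with tf.Session() as sess:
-  sess.run(v)  # raises FailedPreconditionError
-```
-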
-- - -
-
-#### `tf.errors.FailedPreconditionError.__init__(node_def, op, message)` {#FailedPreconditionError.__init__}
-
-Creates a `FailedPreconditionError`.
-
-
-
-- - -
-
-### `class tf.errors.AbortedError` {#AbortedError}
-
-The operation was aborted, typically due to a concurrent action.
-
-For example, running a
-[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue)
-operation may raise `AbortedError` if a
-[`queue.close()`](../../api_docs/python/io_ops.md#QueueBase.close) operation
-previously ran.
-
-- - -
-
-#### `tf.errors.AbortedError.__init__(node_def, op, message)` {#AbortedError.__init__}
-
-Creates an `AbortedError`.
-
-
-
-- - -
-
-### `class tf.errors.OutOfRangeError` {#OutOfRangeError}
-
-Raised when an operation iterates past the valid input range.
-
-This exception is raised in "end-of-file" conditions, such as when a
-[`queue.dequeue()`](../../api_docs/python/io_ops.md#QueueBase.dequeue)
-operation is blocked on an empty queue, and a
-[`queue.close()`](../../api_docs/python/io_ops.md#QueueBase.close)
-operation executes.
-
-- - -
-
-#### `tf.errors.OutOfRangeError.__init__(node_def, op, message)` {#OutOfRangeError.__init__}
-
-Creates an `OutOfRangeError`.
-
-
-
-- - -
-
-### `class tf.errors.UnimplementedError` {#UnimplementedError}
-
-Raised when an operation has not been implemented.
-
-Some operations may raise this error when passed otherwise-valid
-arguments that they do not currently support. For example, running
-the [`tf.nn.max_pool()`](../../api_docs/python/nn.md#max_pool) operation
-would raise this error if pooling was requested on the batch dimension,
-because this is not yet supported.
-
-- - -
-
-#### `tf.errors.UnimplementedError.__init__(node_def, op, message)` {#UnimplementedError.__init__}
-
-Creates an `UnimplementedError`.
-
-
-
-- - -
-
-### `class tf.errors.InternalError` {#InternalError}
-
-Raised when the system experiences an internal error.
-
-This exception is raised when some invariant expected by the runtime
-has been broken. Catching this exception is not recommended.
-
-- - -
-
-#### `tf.errors.InternalError.__init__(node_def, op, message)` {#InternalError.__init__}
-
-Creates an `InternalError`.
-
-
-
-- - -
-
-### `class tf.errors.UnavailableError` {#UnavailableError}
-
-Raised when the runtime is currently unavailable.
-
-This exception is not currently used.
-
-- - -
-
-#### `tf.errors.UnavailableError.__init__(node_def, op, message)` {#UnavailableError.__init__}
-
-Creates an `UnavailableError`.
-
-
-
-- - -
-
-### `class tf.errors.DataLossError` {#DataLossError}
-
-Raised when unrecoverable data loss or corruption is encountered.
-
-For example, this may be raised by running a
-[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader)
-operation, if the file is truncated while it is being read.
-
-- - -
-
-#### `tf.errors.DataLossError.__init__(node_def, op, message)` {#DataLossError.__init__}
-
-Creates a `DataLossError`.
-
-
-
-
-- - -
-
-### `tf.errors.exception_type_from_error_code(error_code)` {#exception_type_from_error_code}
-
-
-
-
-- - -
-
-### `tf.errors.error_code_from_exception_type(cls)` {#error_code_from_exception_type}
-
-
-
-
-- - -
-
-### `tf.errors.raise_exception_on_not_ok_status()` {#raise_exception_on_not_ok_status}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/constant_op.md b/tensorflow/g3doc/api_docs/python/constant_op.md
deleted file mode 100644
index 62654874f4..0000000000
--- a/tensorflow/g3doc/api_docs/python/constant_op.md
+++ /dev/null
@@ -1,775 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Constants, Sequences, and Random Values
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-## Constant Value Tensors
-
-TensorFlow provides several operations that you can use to generate constants.
-
-- - -
-
-### `tf.zeros(shape, dtype=tf.float32, name=None)` {#zeros}
-
-Creates a tensor with all elements set to zero.
-
-This operation returns a tensor of type `dtype` with shape `shape` and
-all elements set to zero.
-
-For example:
-
-```python
-tf.zeros([3, 4], tf.int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
-```
-
-##### Args:
-
-
-* <b>`shape`</b>: Either a list of integers, or a 1-D `Tensor` of type `int32`.
-* <b>`dtype`</b>: The type of an element in the resulting `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with all elements set to zero.
-
-
-- - -
-
-### `tf.zeros_like(tensor, dtype=None, name=None, optimize=True)` {#zeros_like}
-
-Creates a tensor with all elements set to zero.
-
-Given a single tensor (`tensor`), this operation returns a tensor of the
-same type and shape as `tensor` with all elements set to zero. Optionally,
-you can use `dtype` to specify a new type for the returned tensor.
-
-For example:
-
-```python
-# 'tensor' is [[1, 2, 3], [4, 5, 6]]
-tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`.
-* <b>`dtype`</b>: A type for the returned `Tensor`. Must be `float32`, `float64`,
-  `int8`, `int16`, `int32`, `int64`, `uint8`, `complex64`, or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`optimize`</b>: if true, attempt to statically determine the shape of 'tensor'
- and encode it as a constant.
-
-##### Returns:
-
- A `Tensor` with all elements set to zero.
-
-
-
-- - -
-
-### `tf.ones(shape, dtype=tf.float32, name=None)` {#ones}
-
-Creates a tensor with all elements set to 1.
-
-This operation returns a tensor of type `dtype` with shape `shape` and all
-elements set to 1.
-
-For example:
-
-```python
-tf.ones([2, 3], tf.int32) ==> [[1, 1, 1], [1, 1, 1]]
-```
-
-##### Args:
-
-
-* <b>`shape`</b>: Either a list of integers, or a 1-D `Tensor` of type `int32`.
-* <b>`dtype`</b>: The type of an element in the resulting `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with all elements set to 1.
-
-
-- - -
-
-### `tf.ones_like(tensor, dtype=None, name=None, optimize=True)` {#ones_like}
-
-Creates a tensor with all elements set to 1.
-
-Given a single tensor (`tensor`), this operation returns a tensor of the same
-type and shape as `tensor` with all elements set to 1. Optionally, you can
-specify a new type (`dtype`) for the returned tensor.
-
-For example:
-
-```python
-# 'tensor' is [[1, 2, 3], [4, 5, 6]]
-tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`.
-* <b>`dtype`</b>: A type for the returned `Tensor`. Must be `float32`, `float64`,
- `int8`, `int16`, `int32`, `int64`, `uint8`, `complex64`, `complex128` or
- `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`optimize`</b>: if true, attempt to statically determine the shape of 'tensor'
- and encode it as a constant.
-
-##### Returns:
-
- A `Tensor` with all elements set to 1.
-
-
-
-- - -
-
-### `tf.fill(dims, value, name=None)` {#fill}
-
-Creates a tensor filled with a scalar value.
-
-This operation creates a tensor of shape `dims` and fills it with `value`.
-
-For example:
-
-```prettyprint
-# Output tensor has shape [2, 3].
-fill([2, 3], 9) ==> [[9, 9, 9]
- [9, 9, 9]]
-```
-
-##### Args:
-
-
-* <b>`dims`</b>: A `Tensor` of type `int32`.
- 1-D. Represents the shape of the output tensor.
-* <b>`value`</b>: A `Tensor`. 0-D (scalar). Value to fill the returned tensor.
-
- @compatibility(numpy)
- Equivalent to np.full
- @end_compatibility
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `value`.
-
-
-
-- - -
-
-### `tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)` {#constant}
-
-Creates a constant tensor.
-
- The resulting tensor is populated with values of type `dtype`, as
- specified by arguments `value` and (optionally) `shape` (see examples
- below).
-
- The argument `value` can be a constant value, or a list of values of type
- `dtype`. If `value` is a list, then the length of the list must be less
- than or equal to the number of elements implied by the `shape` argument (if
- specified). In the case where the list length is less than the number of
- elements specified by `shape`, the last element in the list will be used
- to fill the remaining entries.
-
- The argument `shape` is optional. If present, it specifies the dimensions of
- the resulting tensor. If not present, the shape of `value` is used.
-
- If the argument `dtype` is not specified, then the type is inferred from
- the type of `value`.
-
- For example:
-
- ```python
- # Constant 1-D Tensor populated with value list.
- tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7]
-
- # Constant 2-D tensor populated with scalar value -1.
- tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.]
- [-1. -1. -1.]]
- ```
-
-##### Args:
-
-
-* <b>`value`</b>: A constant value (or list) of output type `dtype`.
-* <b>`dtype`</b>: The type of the elements of the resulting tensor.
-* <b>`shape`</b>: Optional dimensions of the resulting tensor.
-* <b>`name`</b>: Optional name for the tensor.
-* <b>`verify_shape`</b>: Boolean that enables verification of the shape of values.
-
-##### Returns:
-
- A Constant Tensor.
-
-
-
-## Sequences
-
-- - -
-
-### `tf.linspace(start, stop, num, name=None)` {#linspace}
-
-Generates values in an interval.
-
-A sequence of `num` evenly-spaced values is generated beginning at `start`.
-If `num > 1`, consecutive values in the sequence differ by
-`(stop - start) / (num - 1)`, so that the last one is exactly `stop`.
-
-For example:
-
-```
-tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]
-```
-
-##### Args:
-
-
-* <b>`start`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- First entry in the range.
-* <b>`stop`</b>: A `Tensor`. Must have the same type as `start`.
- Last entry in the range.
-* <b>`num`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- Number of values to generate.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `start`. 1-D. The generated values.
-
-
-
-- - -
-
-### `tf.range(start, limit=None, delta=1, dtype=None, name='range')` {#range}
-
-Creates a sequence of numbers.
-
-Creates a sequence of numbers that begins at `start` and extends by
-increments of `delta` up to but not including `limit`.
-
-The dtype of the resulting tensor is inferred from the inputs unless
-it is provided explicitly.
-
-Like the Python builtin `range`, `start` defaults to 0, so that
-`range(n) = range(0, n)`.
-
-For example:
-
-```python
-# 'start' is 3
-# 'limit' is 18
-# 'delta' is 3
-tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
-
-# 'start' is 3
-# 'limit' is 1
-# 'delta' is -0.5
-tf.range(start, limit, delta) ==> [3, 2.5, 2, 1.5]
-
-# 'limit' is 5
-tf.range(limit) ==> [0, 1, 2, 3, 4]
-```
-
-##### Args:
-
-
-* <b>`start`</b>: A 0-D `Tensor` (scalar). Acts as first entry in the range if
- `limit` is not None; otherwise, acts as range limit and first entry
- defaults to 0.
-* <b>`limit`</b>: A 0-D `Tensor` (scalar). Upper limit of sequence,
- exclusive. If None, defaults to the value of `start` while the first
- entry of the range defaults to 0.
-* <b>`delta`</b>: A 0-D `Tensor` (scalar). Number that increments
- `start`. Defaults to 1.
-* <b>`dtype`</b>: The type of the elements of the resulting tensor.
-* <b>`name`</b>: A name for the operation. Defaults to "range".
-
-##### Returns:
-
- An 1-D `Tensor` of type `dtype`.
-
-@compatibility(numpy)
-Equivalent to np.arange
-@end_compatibility
-
-
-
-## Random Tensors
-
-TensorFlow has several ops that create random tensors with different
-distributions. The random ops are stateful, and create new random values each
-time they are evaluated.
-
-The `seed` keyword argument in these functions acts in conjunction with
-the graph-level random seed. Changing either the graph-level seed using
-[`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) or the
-op-level seed will change the underlying seed of these operations. Setting
-neither the graph-level nor the op-level seed results in a random seed for
-all operations.
-See [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
-for details on the interaction between operation-level and graph-level random
-seeds.
-
-### Examples:
-
-```python
-# Create a tensor of shape [2, 3] consisting of random normal values, with mean
-# -1 and standard deviation 4.
-norm = tf.random_normal([2, 3], mean=-1, stddev=4)
-
-# Shuffle the first dimension of a tensor
-c = tf.constant([[1, 2], [3, 4], [5, 6]])
-shuff = tf.random_shuffle(c)
-
-# Each time we run these ops, different results are generated
-sess = tf.Session()
-print(sess.run(norm))
-print(sess.run(norm))
-
-# Set an op-level seed to generate repeatable sequences across sessions.
-norm = tf.random_normal([2, 3], seed=1234)
-sess = tf.Session()
-print(sess.run(norm))
-print(sess.run(norm))
-sess = tf.Session()
-print(sess.run(norm))
-print(sess.run(norm))
-```
-
-Another common use of random values is the initialization of variables. Also see
-the [Variables How To](../../how_tos/variables/index.md).
-
-```python
-# Use random uniform values in [0, 1) as the initializer for a variable of shape
-# [2, 3]. The default type is float32.
-var = tf.Variable(tf.random_uniform([2, 3]), name="var")
-init = tf.global_variables_initializer()
-
-sess = tf.Session()
-sess.run(init)
-print(sess.run(var))
-```
-
-- - -
-
-### `tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)` {#random_normal}
-
-Outputs random values from a normal distribution.
-
-##### Args:
-
-
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
-* <b>`mean`</b>: A 0-D Tensor or Python value of type `dtype`. The mean of the normal
- distribution.
-* <b>`stddev`</b>: A 0-D Tensor or Python value of type `dtype`. The standard deviation
- of the normal distribution.
-* <b>`dtype`</b>: The type of the output.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tensor of the specified shape filled with random normal values.
-
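-For example, a minimal sketch:
-
-```python
-# A [2, 3] tensor of samples drawn from N(mean=-1, stddev=4).
-norm = tf.random_normal([2, 3], mean=-1.0, stddev=4.0)
-with tf.Session() as sess:
-  print(sess.run(norm))
-```
-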
-
-- - -
-
-### `tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)` {#truncated_normal}
-
-Outputs random values from a truncated normal distribution.
-
-The generated values follow a normal distribution with specified mean and
-standard deviation, except that values whose magnitude is more than 2 standard
-deviations from the mean are dropped and re-picked.
-
-##### Args:
-
-
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
-* <b>`mean`</b>: A 0-D Tensor or Python value of type `dtype`. The mean of the
- truncated normal distribution.
-* <b>`stddev`</b>: A 0-D Tensor or Python value of type `dtype`. The standard deviation
- of the truncated normal distribution.
-* <b>`dtype`</b>: The type of the output.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tensor of the specified shape filled with random truncated normal values.
-
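-For example, a minimal sketch; for a standard normal, roughly 5% of candidate
-draws fall outside two standard deviations and are re-drawn:
-
-```python
-# Samples concentrated in [-2, 2] for mean=0, stddev=1.
-samples = tf.truncated_normal([3, 3], mean=0.0, stddev=1.0)
-```
-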
-
-- - -
-
-### `tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)` {#random_uniform}
-
-Outputs random values from a uniform distribution.
-
-The generated values follow a uniform distribution in the range
-`[minval, maxval)`. The lower bound `minval` is included in the range, while
-the upper bound `maxval` is excluded.
-
-For floats, the default range is `[0, 1)`. For ints, at least `maxval` must
-be specified explicitly.
-
-In the integer case, the random integers are slightly biased unless
-`maxval - minval` is an exact power of two. The bias is small for values of
-`maxval - minval` significantly smaller than the range of the output (either
-`2**32` or `2**64`).
-
-##### Args:
-
-
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
-* <b>`minval`</b>: A 0-D Tensor or Python value of type `dtype`. The lower bound on the
- range of random values to generate. Defaults to 0.
-* <b>`maxval`</b>: A 0-D Tensor or Python value of type `dtype`. The upper bound on
- the range of random values to generate. Defaults to 1 if `dtype` is
- floating point.
-* <b>`dtype`</b>: The type of the output: `float32`, `float64`, `int32`, or `int64`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tensor of the specified shape filled with random uniform values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `dtype` is integral and `maxval` is not specified.
-
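-For example, a minimal sketch; note that the integer case requires an
-explicit `maxval`:
-
-```python
-f = tf.random_uniform([2, 2])  # float32 samples in [0, 1)
-i = tf.random_uniform([2, 2], minval=0, maxval=10, dtype=tf.int32)  # ints in [0, 10)
-```
-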
-
-- - -
-
-### `tf.random_shuffle(value, seed=None, name=None)` {#random_shuffle}
-
-Randomly shuffles a tensor along its first dimension.
-
-The tensor is shuffled along dimension 0, such that each `value[j]` is mapped
-to one and only one `output[i]`. For example, a mapping that might occur for a
-3x2 tensor is:
-
-```python
-[[1, 2], [[5, 6],
- [3, 4], ==> [1, 2],
- [5, 6]] [3, 4]]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: A Tensor to be shuffled.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tensor of same shape and type as `value`, shuffled along its first
- dimension.
-
-
-- - -
-
-### `tf.random_crop(value, size, seed=None, name=None)` {#random_crop}
-
-Randomly crops a tensor to a given size.
-
-Slices a shape `size` portion out of `value` at a uniformly chosen offset.
-Requires `value.shape >= size`.
-
-If a dimension should not be cropped, pass the full size of that dimension.
-For example, RGB images can be cropped with
-`size = [crop_height, crop_width, 3]`.
-
-##### Args:
-
-
-* <b>`value`</b>: Input tensor to crop.
-* <b>`size`</b>: 1-D tensor with size the rank of `value`.
-* <b>`seed`</b>: Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A cropped tensor of the same rank as `value` and shape `size`.
-
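-For example, a minimal sketch (assuming `image` is a `[height, width, 3]` RGB
-tensor built earlier):
-
-```python
-crop = tf.random_crop(image, size=[100, 100, 3])  # a random 100x100 RGB patch
-```
-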
-
-- - -
-
-### `tf.multinomial(logits, num_samples, seed=None, name=None)` {#multinomial}
-
-Draws samples from a multinomial distribution.
-
-Example:
-
-```python
-# samples has shape [1, 5], where each value is either 0 or 1 with equal
-# probability.
-samples = tf.multinomial(tf.log([[10., 10.]]), 5)
-```
-
-##### Args:
-
-
-* <b>`logits`</b>: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice
- `[i, :]` represents the unnormalized log probabilities for all classes.
-* <b>`num_samples`</b>: 0-D. Number of independent samples to draw for each row slice.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- The drawn samples of shape `[batch_size, num_samples]`.
-
-
-- - -
-
-### `tf.random_gamma(shape, alpha, beta=None, dtype=tf.float32, seed=None, name=None)` {#random_gamma}
-
-Draws `shape` samples from each of the given Gamma distribution(s).
-
-`alpha` is the shape parameter describing the distribution(s), and `beta` is
-the inverse scale parameter(s).
-
-Example:
-
- samples = tf.random_gamma([10], [0.5, 1.5])
- # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
- # the samples drawn from each distribution
-
- samples = tf.random_gamma([7, 5], [0.5, 1.5])
- # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
- # represents the 7x5 samples drawn from each of the two distributions
-
- samples = tf.random_gamma([30], [[1.],[3.],[5.]], beta=[[3., 4.]])
- # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.
-
-  Note that for small alpha values, there is a chance you will draw a value
-  of exactly 0, and this chance grows for lower-precision dtypes, even though
-  zero is not in the support of the gamma distribution.
-
-  Relevant cdfs (~chance you will draw an exactly-zero value):
- ```
- stats.gamma(.01).cdf(np.finfo(np.float16).tiny)
- 0.91269738769897879
- stats.gamma(.01).cdf(np.finfo(np.float32).tiny)
- 0.41992668622045726
- stats.gamma(.01).cdf(np.finfo(np.float64).tiny)
- 0.00084322740680686662
- stats.gamma(.35).cdf(np.finfo(np.float16).tiny)
- 0.037583276135263931
- stats.gamma(.35).cdf(np.finfo(np.float32).tiny)
- 5.9514895726818067e-14
- stats.gamma(.35).cdf(np.finfo(np.float64).tiny)
- 2.3529843400647272e-108
- ```
-
-##### Args:
-
-
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output samples
- to be drawn per alpha/beta-parameterized distribution.
-* <b>`alpha`</b>: A Tensor or Python value or N-D array of type `dtype`. `alpha`
- provides the shape parameter(s) describing the gamma distribution(s) to
- sample. Must be broadcastable with `beta`.
-* <b>`beta`</b>: A Tensor or Python value or N-D array of type `dtype`. Defaults to 1.
- `beta` provides the inverse scale parameter(s) of the gamma
- distribution(s) to sample. Must be broadcastable with `alpha`.
-* <b>`dtype`</b>: The type of alpha, beta, and the output: `float16`, `float32`, or
- `float64`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distributions.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` of shape `tf.concat(shape, tf.shape(alpha + beta))`
- with values of type `dtype`.
-
-
-- - -
-
-### `tf.random_poisson(lam, shape, dtype=tf.float32, seed=None, name=None)` {#random_poisson}
-
-Draws `shape` samples from each of the given Poisson distribution(s).
-
-`lam` is the rate parameter describing the distribution(s).
-
-Example:
-
- samples = tf.random_poisson([0.5, 1.5], [10])
- # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
- # the samples drawn from each distribution
-
- samples = tf.random_poisson([12.2, 3.3], [7, 5])
- # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
- # represents the 7x5 samples drawn from each of the two distributions
-
-##### Args:
-
-
-* <b>`lam`</b>: A Tensor or Python value or N-D array of type `dtype`.
- `lam` provides the rate parameter(s) describing the poisson
- distribution(s) to sample.
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output samples
- to be drawn per "rate"-parameterized distribution.
-* <b>`dtype`</b>: The type of `lam` and the output: `float16`, `float32`, or
- `float64`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distributions.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` of shape `tf.concat([shape, tf.shape(lam)], 0)`
-  with values of type `dtype`.
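-
-A minimal sketch (TF 1.x), checking the documented shape and that the sample
-mean approaches `lam`:
-
-```python
-import tensorflow as tf
-
-samples = tf.random_poisson([1., 5., 10.], [1000], seed=42)
-
-with tf.Session() as sess:
-  out = sess.run(samples)
-  print(out.shape)         # ==> (1000, 3)
-  print(out.mean(axis=0))  # ==> roughly [1., 5., 10.], since E[X] = lam
-```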
-
-
-- - -
-
-### `tf.set_random_seed(seed)` {#set_random_seed}
-
-Sets the graph-level random seed.
-
-Operations that rely on a random seed actually derive it from two seeds:
-the graph-level and operation-level seeds. This sets the graph-level seed.
-
-Its interaction with operation-level seeds is as follows:
-
- 1. If neither the graph-level nor the operation seed is set:
- A random seed is used for this op.
- 2. If the graph-level seed is set, but the operation seed is not:
- The system deterministically picks an operation seed in conjunction
- with the graph-level seed so that it gets a unique random sequence.
- 3. If the graph-level seed is not set, but the operation seed is set:
- A default graph-level seed and the specified operation seed are used to
- determine the random sequence.
- 4. If both the graph-level and the operation seed are set:
- Both seeds are used in conjunction to determine the random sequence.
-
-To illustrate the user-visible effects, consider these examples:
-
-To generate different sequences across sessions, set neither
-graph-level nor op-level seeds:
-
-```python
-a = tf.random_uniform([1])
-b = tf.random_normal([1])
-
-print("Session 1")
-with tf.Session() as sess1:
- print(sess1.run(a)) # generates 'A1'
- print(sess1.run(a)) # generates 'A2'
- print(sess1.run(b)) # generates 'B1'
- print(sess1.run(b)) # generates 'B2'
-
-print("Session 2")
-with tf.Session() as sess2:
- print(sess2.run(a)) # generates 'A3'
- print(sess2.run(a)) # generates 'A4'
- print(sess2.run(b)) # generates 'B3'
- print(sess2.run(b)) # generates 'B4'
-```
-
-To generate the same repeatable sequence for an op across sessions, set the
-seed for the op:
-
-```python
-a = tf.random_uniform([1], seed=1)
-b = tf.random_normal([1])
-
-# Repeatedly running this block with the same graph will generate the same
-# sequence of values for 'a', but different sequences of values for 'b'.
-print("Session 1")
-with tf.Session() as sess1:
- print(sess1.run(a)) # generates 'A1'
- print(sess1.run(a)) # generates 'A2'
- print(sess1.run(b)) # generates 'B1'
- print(sess1.run(b)) # generates 'B2'
-
-print("Session 2")
-with tf.Session() as sess2:
- print(sess2.run(a)) # generates 'A1'
- print(sess2.run(a)) # generates 'A2'
- print(sess2.run(b)) # generates 'B3'
- print(sess2.run(b)) # generates 'B4'
-```
-
-To make the random sequences generated by all ops be repeatable across
-sessions, set a graph-level seed:
-
-```python
-tf.set_random_seed(1234)
-a = tf.random_uniform([1])
-b = tf.random_normal([1])
-
-# Repeatedly running this block with the same graph will generate the same
-# sequences of 'a' and 'b'.
-print("Session 1")
-with tf.Session() as sess1:
- print(sess1.run(a)) # generates 'A1'
- print(sess1.run(a)) # generates 'A2'
- print(sess1.run(b)) # generates 'B1'
- print(sess1.run(b)) # generates 'B2'
-
-print("Session 2")
-with tf.Session() as sess2:
- print(sess2.run(a)) # generates 'A1'
- print(sess2.run(a)) # generates 'A2'
- print(sess2.run(b)) # generates 'B1'
- print(sess2.run(b)) # generates 'B2'
-```
-
-##### Args:
-
-
-* <b>`seed`</b>: integer.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.entropy.md b/tensorflow/g3doc/api_docs/python/contrib.bayesflow.entropy.md
deleted file mode 100644
index bac58387e8..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.entropy.md
+++ /dev/null
@@ -1,304 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# BayesFlow Entropy (contrib)
-[TOC]
-
-Entropy Ops.
-
-## Background
-
-Common Shannon entropy, the Evidence Lower BOund (ELBO), KL divergence, and more
-all have information theoretic use and interpretations. They are also often
-used in variational inference. This library brings together `Ops` for
-estimating them, e.g. using Monte Carlo expectations.
-
-## Examples
-
-Example of fitting a variational posterior with the ELBO.
-
-```python
-# We start by assuming knowledge of the log of a joint density p(z, x) over
-# latent variable z and fixed measurement x. Since x is fixed, the Python
-# function does not take x as an argument.
-def log_joint(z):
- theta = tf.Variable(0.) # Trainable variable that helps define log_joint.
- ...
-
-# Next, define a Normal distribution with trainable parameters.
-q = distributions.Normal(mu=tf.Variable(0.), sigma=tf.Variable(1.))
-
-# Now, define a loss function (negative ELBO) that, when minimized, will adjust
-# mu, sigma, and theta, increasing the ELBO, which we hope will both reduce the
-# KL divergence between q(z) and p(z | x), and increase p(x). Note that we
-# cannot guarantee both, but in general we expect both to happen.
-elbo = entropy.elbo_ratio(log_joint, q, n=10)
-loss = -elbo
-
-# Minimize the loss
-train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
-tf.global_variables_initializer().run()
-for step in range(100):
- train_op.run()
-```
-
-## Ops
-
-- - -
-
-### `tf.contrib.bayesflow.entropy.elbo_ratio(log_p, q, z=None, n=None, seed=None, form=None, name='elbo_ratio')` {#elbo_ratio}
-
-Estimate of the ratio appearing in the `ELBO` and `KL` divergence.
-
-With `p(z) := exp{log_p(z)}`, this `Op` returns an approximation of
-
-```
-E_q[ Log[p(Z) / q(Z)] ]
-```
-
-The term `E_q[ Log[p(Z)] ]` is always computed as a sample mean.
-The term `E_q[ Log[q(z)] ]` can be computed with samples, or an exact formula
-if `q.entropy()` is defined. This is controlled with the kwarg `form`.
-
-This log-ratio appears in different contexts:
-
-#### `KL[q || p]`
-
-If `log_p(z) = Log[p(z)]` for distribution `p`, this `Op` approximates
-the negative Kullback-Leibler divergence.
-
-```
-elbo_ratio(log_p, q, n=100) = -1 * KL[q || p],
-KL[q || p] = E[ Log[q(Z)] - Log[p(Z)] ]
-```
-
-Note that if `p` is a `Distribution`, then `distributions.kl(q, p)` may be
-defined and available as an exact result.
-
-#### ELBO
-
-If `log_p(z) = Log[p(z, x)]` is the log joint of a distribution `p`, this is
-the Evidence Lower BOund (ELBO):
-
-```
-ELBO ~= E[ Log[p(Z, x)] - Log[q(Z)] ]
- = Log[p(x)] - KL[q || p]
- <= Log[p(x)]
-```
-
-The user supplies either a `Tensor` of samples `z`, or the number of samples
-`n` to draw.
-
-##### Args:
-
-
-* <b>`log_p`</b>: Callable mapping samples from `q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_p` works "just like" `q.log_prob`.
-* <b>`q`</b>: `tf.contrib.distributions.Distribution`.
-* <b>`z`</b>: `Tensor` of samples from `q`, produced by `q.sample(n)` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`form`</b>: Either `ELBOForms.analytic_entropy` (use formula for entropy of `q`)
- or `ELBOForms.sample` (sample estimate of entropy), or `ELBOForms.default`
- (attempt analytic entropy, fallback on sample).
- Default value is `ELBOForms.default`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- Scalar `Tensor` holding sample mean KL divergence. `shape` is the batch
- shape of `q`, and `dtype` is the same as `q`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `form` is not handled by this function.
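-
-As a concrete check, a minimal TF 1.x sketch (module paths as documented
-above) comparing the sampled estimate against the analytic KL divergence:
-
-```python
-import tensorflow as tf
-
-distributions = tf.contrib.distributions
-entropy = tf.contrib.bayesflow.entropy
-
-p = distributions.Normal(mu=0., sigma=1.)
-q = distributions.Normal(mu=1., sigma=2.)
-
-# elbo_ratio(log_p, q) estimates -KL[q || p] when log_p is a log prob.
-approx_neg_kl = entropy.elbo_ratio(p.log_prob, q, n=10000, seed=42)
-exact_kl = distributions.kl(q, p)
-
-with tf.Session() as sess:
-  approx, exact = sess.run([approx_neg_kl, exact_kl])
-  print(-approx, exact)  # ==> the two values roughly agree
-```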
-
-
-- - -
-
-### `tf.contrib.bayesflow.entropy.entropy_shannon(p, z=None, n=None, seed=None, form=None, name='entropy_shannon')` {#entropy_shannon}
-
-Monte Carlo or deterministic computation of Shannon's entropy.
-
-Depending on the kwarg `form`, this `Op` returns either the analytic entropy
-of the distribution `p`, or the sampled entropy:
-
-```
--n^{-1} sum_{i=1}^n p.log_prob(z_i), where z_i ~ p,
- \approx - E_p[ Log[p(Z)] ]
- = Entropy[p]
-```
-
-The user supplies either a `Tensor` of samples `z`, or the number of samples
-`n` to draw.
-
-##### Args:
-
-
-* <b>`p`</b>: `tf.contrib.distributions.Distribution`
-* <b>`z`</b>: `Tensor` of samples from `p`, produced by `p.sample(n)` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`form`</b>: Either `ELBOForms.analytic_entropy` (use formula for entropy of `q`)
- or `ELBOForms.sample` (sample estimate of entropy), or `ELBOForms.default`
- (attempt analytic entropy, fallback on sample).
- Default value is `ELBOForms.default`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with same `dtype` as `p`, and shape equal to `p.batch_shape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `form` not handled by this function.
-* <b>`ValueError`</b>: If `form` is `ELBOForms.analytic_entropy` and `n` was provided.
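-
-A minimal sketch (TF 1.x): for a distribution with a known analytic entropy,
-the default form returns the closed-form value.
-
-```python
-import tensorflow as tf
-
-distributions = tf.contrib.distributions
-entropy = tf.contrib.bayesflow.entropy
-
-p = distributions.Normal(mu=0., sigma=1.)
-
-# ELBOForms.default attempts the analytic entropy, falling back on sampling.
-ent = entropy.entropy_shannon(p)
-
-with tf.Session() as sess:
-  print(sess.run(ent))  # ==> 0.5 * log(2 * pi * e) ~= 1.4189
-```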
-
-
-- - -
-
-### `tf.contrib.bayesflow.entropy.renyi_ratio(log_p, q, alpha, z=None, n=None, seed=None, name='renyi_ratio')` {#renyi_ratio}
-
-Monte Carlo estimate of the ratio appearing in Renyi divergence.
-
-This can be used to compute the Renyi (alpha) divergence, or a log evidence
-approximation based on Renyi divergence.
-
-#### Definition
-
-With `z_i` iid samples from `q`, and `exp{log_p(z)} = p(z)`, this `Op` returns
-the (biased for finite `n`) estimate:
-
-```
-(1 - alpha)^{-1} Log[ n^{-1} sum_{i=1}^n ( p(z_i) / q(z_i) )^{1 - alpha} ],
-\approx (1 - alpha)^{-1} Log[ E_q[ (p(Z) / q(Z))^{1 - alpha} ] ]
-```
-
-This ratio appears in different contexts:
-
-#### Renyi divergence
-
-If `log_p(z) = Log[p(z)]` is the log prob of a distribution, and
-`alpha > 0`, `alpha != 1`, this `Op` approximates `-1` times Renyi divergence:
-
-```
-# Choose reasonably high n to limit bias, see below.
-renyi_ratio(log_p, q, alpha, n=100)
- \approx -1 * D_alpha[q || p], where
-D_alpha[q || p] := (1 - alpha)^{-1} Log E_q[(p(Z) / q(Z))^{1 - alpha}]
-```
-
-The Renyi (or "alpha") divergence is non-negative and equal to zero iff
-`q = p`. Various limits of `alpha` lead to different special case results:
-
-```
-alpha        D_alpha[q || p]
------        ---------------
---> 0        Log[ int_{q > 0} p(z) dz ]
-= 0.5        -2 Log[1 - Hel^2[q || p]],  (\propto squared Hellinger distance)
---> 1        KL[q || p]
-= 2          Log[ 1 + chi^2[q || p] ],   (\propto squared Chi-2 divergence)
---> infty    Log[ max_z{q(z) / p(z)} ],  (min description length principle).
-```
-
-See "Renyi Divergence Variational Inference", by Li and Turner.
-
-#### Log evidence approximation
-
-If `log_p(z) = Log[p(z, x)]` is the log of the joint distribution `p`, this is
-an alternative to the ELBO common in variational inference.
-
-```
-L_alpha(q, p) = Log[p(x)] - D_alpha[q || p]
-```
-
-If `q` and `p` have the same support, and `0 < a <= b < 1`, one can show
-`ELBO <= D_b <= D_a <= Log[p(x)]`. Thus, this `Op` allows a smooth
-interpolation between the ELBO and the true evidence.
-
-#### Stability notes
-
-Note that when `1 - alpha` is not small, the ratio `(p(z) / q(z))^{1 - alpha}`
-is subject to underflow/overflow issues. For that reason, it is evaluated in
-log-space after centering. Nonetheless, infinite/NaN results may occur. For
-that reason, one may wish to shrink `alpha` gradually. See the `Op`
-`renyi_alpha`. Using `float64` will also help.
-
-
-#### Bias for finite sample size
-
-Due to nonlinearity of the logarithm, for random variables `{X_1,...,X_n}`,
-`E[ Log[sum_{i=1}^n X_i] ] != Log[ E[sum_{i=1}^n X_i] ]`. As a result, this
-estimate is biased for finite `n`. For `alpha < 1`, it is non-decreasing
-with `n` (in expectation). For example, if `n = 1`, this estimator yields the
-same result as `elbo_ratio`, and as `n` increases the expected value
-of the estimator increases.
-
-#### Call signature
-
-The user supplies either a `Tensor` of samples `z`, or the number of samples
-`n` to draw.
-
-##### Args:
-
-
-* <b>`log_p`</b>: Callable mapping samples from `q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_p` works "just like" `q.log_prob`.
-* <b>`q`</b>: `tf.contrib.distributions.Distribution`.
- `float64` `dtype` recommended.
- `log_p` and `q` should be supported on the same set.
-* <b>`alpha`</b>: `Tensor` with shape `q.batch_shape` and values not equal to 1.
-* <b>`z`</b>: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. The number of samples to use if `z` is not provided.
- Note that this can be highly biased for small `n`, see docstring.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
-
-* <b>`renyi_result`</b>: The scaled log of sample mean. `Tensor` with `shape` equal
- to batch shape of `q`, and `dtype` = `q.dtype`.
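-
-A minimal sketch (TF 1.x): with `alpha` near 1, `-renyi_ratio` should be
-close to `KL[q || p]`, matching the limit table above.
-
-```python
-import tensorflow as tf
-
-distributions = tf.contrib.distributions
-entropy = tf.contrib.bayesflow.entropy
-
-p = distributions.Normal(mu=0., sigma=1.)
-q = distributions.Normal(mu=0.5, sigma=1.5)
-
-renyi = entropy.renyi_ratio(p.log_prob, q, alpha=0.99, n=10000, seed=42)
-kl = distributions.kl(q, p)
-
-with tf.Session() as sess:
-  print(sess.run([-renyi, kl]))  # ==> roughly equal values
-```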
-
-
-- - -
-
-### `tf.contrib.bayesflow.entropy.renyi_alpha(step, decay_time, alpha_min, alpha_max=0.99999, name='renyi_alpha')` {#renyi_alpha}
-
-Exponentially decaying `Tensor` appropriate for Renyi ratios.
-
-When minimizing the Renyi divergence for `0 <= alpha < 1` (or maximizing the
-Renyi equivalent of elbo) in high dimensions, it is not uncommon to experience
-`NaN` and `inf` values when `alpha` is far from `1`.
-
-For that reason, it is often desirable to start the optimization with `alpha`
-very close to 1, and reduce it to a final `alpha_min` according to some
-schedule. The user may even want to optimize using `elbo_ratio` for
-some fixed time before switching to Renyi based methods.
-
-This `Op` returns an `alpha` decaying exponentially with step:
-
-```
-s(step) = (exp{step / decay_time} - 1) / (e - 1)
-t(s) = max(0, min(s, 1)), (smooth growth from 0 to 1)
-alpha(t) = (1 - t) alpha_max + t alpha_min
-```
-
-##### Args:
-
-
-* <b>`step`</b>: Non-negative scalar `Tensor`. Typically the global step or an
- offset version thereof.
-* <b>`decay_time`</b>: Positive scalar `Tensor`.
-* <b>`alpha_min`</b>: `float` or `double` `Tensor`.
- The minimal, final value of `alpha`, achieved when `step >= decay_time`
-* <b>`alpha_max`</b>: `Tensor` of same `dtype` as `alpha_min`.
- The maximal, beginning value of `alpha`, achieved when `step == 0`
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
-
-* <b>`alpha`</b>: A `Tensor` of same `dtype` as `alpha_min`.
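-
-A minimal sketch (TF 1.x) of the decay schedule:
-
-```python
-import tensorflow as tf
-
-entropy = tf.contrib.bayesflow.entropy
-
-step = tf.placeholder(tf.float32, [])
-alpha = entropy.renyi_alpha(step, decay_time=100., alpha_min=0.5)
-
-with tf.Session() as sess:
-  print(sess.run(alpha, {step: 0.}))    # ==> 0.99999 (alpha_max)
-  print(sess.run(alpha, {step: 100.}))  # ==> 0.5 (alpha_min)
-```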
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.monte_carlo.md b/tensorflow/g3doc/api_docs/python/contrib.bayesflow.monte_carlo.md
deleted file mode 100644
index 78ed0cb38f..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.monte_carlo.md
+++ /dev/null
@@ -1,206 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# BayesFlow Monte Carlo (contrib)
-[TOC]
-
-Monte Carlo integration and helpers.
-
-## Background
-
-Monte Carlo integration refers to the practice of estimating an expectation with
-a sample mean. For example, given random variable `Z in R^k` with density `p`,
-the expectation of function `f` can be approximated like:
-
-```
-E_p[f(Z)] = \int f(z) p(z) dz
- ~ S_n
- := n^{-1} \sum_{i=1}^n f(z_i), z_i iid samples from p.
-```
-
-If `E_p[|f(Z)|] < infinity`, then `S_n --> E_p[f(Z)]` by the strong law of large
-numbers. If `E_p[f(Z)^2] < infinity`, then `S_n` is asymptotically normal with
-variance `Var[f(Z)] / n`.
-
-Practitioners of Bayesian statistics often find themselves wanting to estimate
-`E_p[f(Z)]` when the distribution `p` is known only up to a constant. For
-example, the joint distribution `p(z, x)` may be known, but the evidence
-`p(x) = \int p(z, x) dz` may be intractable. In that case, a parameterized
-distribution family `q_lambda(z)` may be chosen, and the optimal `lambda` is the
-one minimizing the KL divergence between `q_lambda(z)` and
-`p(z | x)`. We only know `p(z, x)`, but that is sufficient to find `lambda`.
-
-
-## Log-space evaluation and subtracting the maximum
-
-Care must be taken when the random variable lives in a high dimensional space.
-For example, the naive importance sample estimate `E_q[f(Z) p(Z) / q(Z)]`
-involves the ratio of two terms `p(Z) / q(Z)`, each of which must have tails
-dropping off faster than `O(|z|^{-(k + 1)})` in order to have finite integral.
-This ratio would often be zero or infinity up to numerical precision.
-
-For that reason, we write
-
-```
-Log E_q[ f(Z) p(Z) / q(Z) ]
- = Log E_q[ exp{Log[f(Z)] + Log[p(Z)] - Log[q(Z)] - C} ] + C, where
-C := Max[ Log[f(Z)] + Log[p(Z)] - Log[q(Z)] ].
-```
-
-The maximum value of the exponentiated term will be 0.0, and the expectation
-can be evaluated in a stable manner.
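-
-As a self-contained illustration of the trick (ordinary TF ops, not part of
-this module's API):
-
-```python
-import tensorflow as tf
-
-# Per-sample values of Log[f(z)] + Log[p(z)] - Log[q(z)]; exponentiating
-# these directly would underflow to 0 in any floating-point dtype.
-log_values = tf.constant([-1000., -1001., -999.])
-
-c = tf.reduce_max(log_values)
-# Stable log of the sample mean: subtract the max, exponentiate, add it back.
-log_mean = tf.log(tf.reduce_mean(tf.exp(log_values - c))) + c
-
-with tf.Session() as sess:
-  print(sess.run(log_mean))  # ==> approximately -999.69
-```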
-
-## Ops
-
-- - -
-
-### `tf.contrib.bayesflow.monte_carlo.expectation(f, p, z=None, n=None, seed=None, name='expectation')` {#expectation}
-
-Monte Carlo estimate of an expectation: `E_p[f(Z)]` with sample mean.
-
-This `Op` returns
-
-```
-n^{-1} sum_{i=1}^n f(z_i), where z_i ~ p
-\approx E_p[f(Z)]
-```
-
-The user supplies either a `Tensor` of samples `z`, or the number of samples
-`n` to draw.
-
-##### Args:
-
-
-* <b>`f`</b>: Callable mapping samples from `p` to `Tensors`.
-* <b>`p`</b>: `tf.contrib.distributions.Distribution`.
-* <b>`z`</b>: `Tensor` of samples from `p`, produced by `p.sample` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with the same `dtype` as `p`.
-
-
-##### Example:
-
-```python
-N_samples = 10000
-
-distributions = tf.contrib.distributions
-
-dist = distributions.Uniform([0.0, 0.0], [1.0, 2.0])
-elementwise_mean = lambda x: x
-mean_sum = lambda x: tf.reduce_sum(x, 1)
-
-estimate_elementwise_mean_tf = monte_carlo.expectation(elementwise_mean,
- dist,
- n=N_samples)
-estimate_mean_sum_tf = monte_carlo.expectation(mean_sum,
- dist,
- n=N_samples)
-
-with tf.Session() as sess:
- estimate_elementwise_mean, estimate_mean_sum = (
- sess.run([estimate_elementwise_mean_tf, estimate_mean_sum_tf]))
-print(estimate_elementwise_mean)
-# ==> np.array([ 0.50018013,  1.00097895], dtype=np.float32)
-print(estimate_mean_sum)
-# ==> 1.49571
-
-```
-
-
-- - -
-
-### `tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler(f, log_p, sampling_dist_q, z=None, n=None, seed=None, name='expectation_importance_sampler')` {#expectation_importance_sampler}
-
-Monte Carlo estimate of `E_p[f(Z)] = E_q[f(Z) p(Z) / q(Z)]`.
-
-With `p(z) := exp{log_p(z)}`, this `Op` returns
-
-```
-n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ], z_i ~ q,
-\approx E_q[ f(Z) p(Z) / q(Z) ]
-= E_p[f(Z)]
-```
-
-This integral is done in log-space with max-subtraction to better handle the
-often extreme values that `f(z) p(z) / q(z)` can take on.
-
-If `f >= 0`, it is up to 2x more efficient to exponentiate the result of
-`expectation_importance_sampler_logspace` applied to `Log[f]`.
-
-The user supplies either a `Tensor` of samples `z`, or the number of samples
-`n` to draw.
-
-##### Args:
-
-
-* <b>`f`</b>: Callable mapping samples from `sampling_dist_q` to `Tensors` with shape
- broadcastable to `q.batch_shape`.
- For example, `f` works "just like" `q.log_prob`.
-* <b>`log_p`</b>: Callable mapping samples from `sampling_dist_q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_p` works "just like" `sampling_dist_q.log_prob`.
-* <b>`sampling_dist_q`</b>: The sampling distribution.
- `tf.contrib.distributions.Distribution`.
- `float64` `dtype` recommended.
- `log_p` and `q` should be supported on the same set.
-* <b>`z`</b>: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- The importance sampling estimate. `Tensor` with `shape` equal
- to batch shape of `q`, and `dtype` = `q.dtype`.
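-
-A minimal sketch (TF 1.x): estimate `E_p[Z^2]` for `p = Normal(0, 1)` while
-sampling only from a wider proposal `q`:
-
-```python
-import tensorflow as tf
-
-distributions = tf.contrib.distributions
-monte_carlo = tf.contrib.bayesflow.monte_carlo
-
-p = distributions.Normal(mu=0., sigma=1.)  # Target distribution.
-q = distributions.Normal(mu=0., sigma=2.)  # Proposal distribution.
-
-estimate = monte_carlo.expectation_importance_sampler(
-    f=tf.square,
-    log_p=p.log_prob,
-    sampling_dist_q=q,
-    n=10000,
-    seed=42)
-
-with tf.Session() as sess:
-  print(sess.run(estimate))  # ==> roughly 1.0 == E_p[Z^2]
-```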
-
-
-- - -
-
-### `tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace(log_f, log_p, sampling_dist_q, z=None, n=None, seed=None, name='expectation_importance_sampler_logspace')` {#expectation_importance_sampler_logspace}
-
-Importance sampling with a positive function, in log-space.
-
-With `p(z) := exp{log_p(z)}`, and `f(z) = exp{log_f(z)}`, this `Op`
-returns
-
-```
-Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ], z_i ~ q,
-\approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]
-= Log[E_p[f(Z)]]
-```
-
-This integral is done in log-space with max-subtraction to better handle the
-often extreme values that `f(z) p(z) / q(z)` can take on.
-
-In contrast to `expectation_importance_sampler`, this `Op` returns values in
-log-space.
-
-
-The user supplies either a `Tensor` of samples `z`, or the number of samples
-`n` to draw.
-
-##### Args:
-
-
-* <b>`log_f`</b>: Callable mapping samples from `sampling_dist_q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_f` works "just like" `sampling_dist_q.log_prob`.
-* <b>`log_p`</b>: Callable mapping samples from `sampling_dist_q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_p` works "just like" `q.log_prob`.
-* <b>`sampling_dist_q`</b>: The sampling distribution.
- `tf.contrib.distributions.Distribution`.
- `float64` `dtype` recommended.
- `log_p` and `q` should be supported on the same set.
-* <b>`z`</b>: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- Logarithm of the importance sampling estimate. `Tensor` with `shape` equal
- to batch shape of `q`, and `dtype` = `q.dtype`.
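-
-A minimal sketch (TF 1.x), mirroring the `expectation_importance_sampler`
-example with the positive integrand `f(z) = z^2` passed in log-space:
-
-```python
-import tensorflow as tf
-
-distributions = tf.contrib.distributions
-monte_carlo = tf.contrib.bayesflow.monte_carlo
-
-p = distributions.Normal(mu=0., sigma=1.)
-q = distributions.Normal(mu=0., sigma=2.)
-
-log_f = lambda z: 2. * tf.log(tf.abs(z))  # Log[z^2]
-log_estimate = monte_carlo.expectation_importance_sampler_logspace(
-    log_f=log_f,
-    log_p=p.log_prob,
-    sampling_dist_q=q,
-    n=10000,
-    seed=42)
-
-with tf.Session() as sess:
-  print(sess.run(tf.exp(log_estimate)))  # ==> roughly 1.0 == E_p[Z^2]
-```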
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.stochastic_graph.md b/tensorflow/g3doc/api_docs/python/contrib.bayesflow.stochastic_graph.md
deleted file mode 100644
index cd7bba275b..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.stochastic_graph.md
+++ /dev/null
@@ -1,46 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# BayesFlow Stochastic Graph (contrib)
-[TOC]
-
-Classes and helper functions for Stochastic Computation Graphs.
-
-## Stochastic Computation Graph Helper Functions
-
-- - -
-
-### `tf.contrib.bayesflow.stochastic_graph.surrogate_loss(sample_losses, stochastic_tensors=None, name='SurrogateLoss')` {#surrogate_loss}
-
-Surrogate loss for stochastic graphs.
-
-This function will call `loss_fn` on each `StochasticTensor`
-upstream of `sample_losses`, passing the losses that it influenced.
-
-Note that currently `surrogate_loss` does not work with `StochasticTensor`s
-instantiated in `while_loop`s or other control structures.
-
-##### Args:
-
-
-* <b>`sample_losses`</b>: a list or tuple of final losses. Each loss should be per
- example in the batch (and possibly per sample); that is, it should have
- dimensionality of 1 or greater. All losses should have the same shape.
-* <b>`stochastic_tensors`</b>: a list of `StochasticTensor`s to add loss terms for.
- If None, defaults to all `StochasticTensor`s in the graph upstream of
- the `Tensor`s in `sample_losses`.
-* <b>`name`</b>: the name with which to prepend created ops.
-
-##### Returns:
-
- `Tensor` loss, which is the sum of `sample_losses` and the
- `loss_fn`s returned by the `StochasticTensor`s.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `sample_losses` is not a list or tuple, or if its elements
- are not `Tensor`s.
-* <b>`ValueError`</b>: if any loss in `sample_losses` does not have dimensionality 1
- or greater.
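-
-A minimal sketch (TF 1.x contrib API; the target values are illustrative):
-training through non-reparameterized Bernoulli samples via the surrogate
-loss.
-
-```python
-import tensorflow as tf
-
-distributions = tf.contrib.distributions
-sg = tf.contrib.bayesflow.stochastic_graph
-st = tf.contrib.bayesflow.stochastic_tensor
-
-# Bernoulli sampling is not reparameterized, hence the surrogate loss.
-logits = tf.Variable([0., 0.])
-b = st.StochasticTensor(distributions.Bernoulli(logits=logits))
-
-# Per-example loss downstream of the samples (rank 1, as required above).
-sample_losses = tf.square(tf.to_float(b) - [1., 0.])
-
-loss = tf.reduce_sum(sg.surrogate_loss([sample_losses]))
-train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
-```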
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.stochastic_tensor.md b/tensorflow/g3doc/api_docs/python/contrib.bayesflow.stochastic_tensor.md
deleted file mode 100644
index a0be205aea..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.stochastic_tensor.md
+++ /dev/null
@@ -1,467 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# BayesFlow Stochastic Tensors (contrib)
-[TOC]
-
-Classes and helper functions for creating Stochastic Tensors.
-
-`StochasticTensor` objects wrap `Distribution` objects. Their
-values may be samples from the underlying distribution, or the distribution
-mean (as governed by `value_type`). These objects provide a `loss`
-method for use when sampling from a non-reparameterized distribution.
-The `loss` method is used in conjunction with `stochastic_graph.surrogate_loss`
-to produce a single differentiable loss in stochastic graphs having
-both continuous and discrete stochastic nodes.
-
-## Stochastic Tensor Classes
-
-- - -
-
-### `class tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor` {#BaseStochasticTensor}
-
-Base Class for Tensor-like objects that emit stochastic values.
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.__init__()` {#BaseStochasticTensor.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.dtype` {#BaseStochasticTensor.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.graph` {#BaseStochasticTensor.graph}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.loss(sample_loss)` {#BaseStochasticTensor.loss}
-
-Returns the term to add to the surrogate loss.
-
-This method is called by `surrogate_loss`. The input `sample_loss` should
-have already had `stop_gradient` applied to it. This is because the
-surrogate_loss usually provides a Monte Carlo sample term of the form
-`differentiable_surrogate * sample_loss` where `sample_loss` is considered
-constant with respect to the input for purposes of the gradient.
-
-##### Args:
-
-
-* <b>`sample_loss`</b>: `Tensor`, sample loss downstream of this `StochasticTensor`.
-
-##### Returns:
-
- Either `None` or a `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.name` {#BaseStochasticTensor.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.value(name=None)` {#BaseStochasticTensor.value}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.bayesflow.stochastic_tensor.StochasticTensor` {#StochasticTensor}
-
-StochasticTensor is a BaseStochasticTensor backed by a distribution.
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.__init__(dist, name='StochasticTensor', dist_value_type=None, loss_fn=score_function)` {#StochasticTensor.__init__}
-
-Construct a `StochasticTensor`.
-
-`StochasticTensor` is backed by the `dist` distribution and its `value`
-method will return the same value each time it is called. What `value` is
-returned is controlled by the `dist_value_type` (defaults to
-`SampleValue`).
-
-Some distributions' sample functions are not differentiable (e.g. a sample
-from a discrete distribution like a Bernoulli) and so to differentiate
-wrt parameters upstream of the sample requires a gradient estimator like
-the score function estimator. This is accomplished by passing a
-differentiable `loss_fn` to the `StochasticTensor`, which
-defaults to a function whose derivative is the score function estimator.
-Calling `stochastic_graph.surrogate_loss(final_losses)` will call
-`loss()` on every `StochasticTensor` upstream of final losses.
-
-`loss()` will return None for `StochasticTensor`s backed by
-reparameterized distributions; it will also return None if the value type is
-`MeanValue` or if `loss_fn=None`.
-
-##### Args:
-
-
-* <b>`dist`</b>: an instance of `Distribution`.
-* <b>`name`</b>: a name for this `StochasticTensor` and its ops.
-* <b>`dist_value_type`</b>: a `_StochasticValueType`, which will determine what the
- `value` of this `StochasticTensor` will be. If not provided, the
- value type set with the `value_type` context manager will be used.
-* <b>`loss_fn`</b>: callable that takes
- `(st, st.value(), influenced_loss)`, where
- `st` is this `StochasticTensor`, and returns a `Tensor` loss. By
- default, `loss_fn` is the `score_function`, or more precisely, the
- integral of the score function, such that when the gradient is taken,
- the score function results. See the `stochastic_gradient_estimators`
- module for additional loss functions and baselines.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `dist` is not an instance of `Distribution`.
-* <b>`TypeError`</b>: if `loss_fn` is not `callable`.
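-
-A minimal sketch (TF 1.x contrib API) of the default usage:
-
-```python
-import tensorflow as tf
-
-distributions = tf.contrib.distributions
-st = tf.contrib.bayesflow.stochastic_tensor
-
-x = st.StochasticTensor(distributions.Normal(mu=0., sigma=1.))
-y = tf.identity(x) + 1.  # Usable anywhere a `Tensor` is expected; the
-                         # default value type is a single `SampleValue` draw.
-```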
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.distribution` {#StochasticTensor.distribution}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.dtype` {#StochasticTensor.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.entropy(name='entropy')` {#StochasticTensor.entropy}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.graph` {#StochasticTensor.graph}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.loss(final_loss, name='Loss')` {#StochasticTensor.loss}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.mean(name='mean')` {#StochasticTensor.mean}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.name` {#StochasticTensor.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.value(name='value')` {#StochasticTensor.value}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.value_type` {#StochasticTensor.value_type}
-
-
-
-
-
-
-## Stochastic Tensor Value Types
-
-- - -
-
-### `class tf.contrib.bayesflow.stochastic_tensor.MeanValue` {#MeanValue}
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.__init__(stop_gradient=False)` {#MeanValue.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.declare_inputs(unused_stochastic_tensor, unused_inputs_dict)` {#MeanValue.declare_inputs}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.popped_above(unused_value_type)` {#MeanValue.popped_above}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.pushed_above(unused_value_type)` {#MeanValue.pushed_above}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.stop_gradient` {#MeanValue.stop_gradient}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.bayesflow.stochastic_tensor.SampleValue` {#SampleValue}
-
-Draw samples, possibly adding new outer dimensions along the way.
-
-This ValueType draws samples from StochasticTensors run within its
-context, increasing the rank according to the requested shape.
-
-Examples:
-
-```python
-mu = tf.zeros((2, 3))
-sigma = tf.ones((2, 3))
-with sg.value_type(sg.SampleValue()):
-  st = sg.StochasticTensor(
-      tf.contrib.distributions.Normal(mu=mu, sigma=sigma))
-# draws 1 sample and does not reshape
-assertEqual(st.value().get_shape(), (2, 3))
-```
-
-```python
-mu = tf.zeros((2, 3))
-sigma = tf.ones((2, 3))
-with sg.value_type(sg.SampleValue(4)):
-  st = sg.StochasticTensor(
-      tf.contrib.distributions.Normal(mu=mu, sigma=sigma))
-# draws 4 samples each with shape (2, 3) and concatenates
-assertEqual(st.value().get_shape(), (4, 2, 3))
-```
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.__init__(shape=(), stop_gradient=False)` {#SampleValue.__init__}
-
-Sample according to shape.
-
-For the given StochasticTensor `st` using this value type,
-the shape of `st.value()` will match that of
-`st.distribution.sample(shape)`.
-
-##### Args:
-
-
-* <b>`shape`</b>: A shape tuple or int32 tensor. The sample shape.
- Default is a scalar: take one sample and do not change the size.
-* <b>`stop_gradient`</b>: If `True`, StochasticTensors' values are wrapped in
- `stop_gradient`, to avoid backpropagation through.
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.declare_inputs(unused_stochastic_tensor, unused_inputs_dict)` {#SampleValue.declare_inputs}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.popped_above(unused_value_type)` {#SampleValue.popped_above}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.pushed_above(unused_value_type)` {#SampleValue.pushed_above}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.shape` {#SampleValue.shape}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.stop_gradient` {#SampleValue.stop_gradient}
-
-
-
-
-
-
-- - -
-
-### `tf.contrib.bayesflow.stochastic_tensor.value_type(dist_value_type)` {#value_type}
-
-Creates a value type context for any StochasticTensor created within.
-
-Typical usage:
-
-```
-with sg.value_type(sg.MeanValue(stop_gradient=True)):
-  st = sg.StochasticTensor(
-      tf.contrib.distributions.Normal(mu=mu, sigma=sigma))
-```
-
-In the example above, `st.value()` (or equivalently, `tf.identity(st)`) will
-be the mean value of the Normal distribution, i.e., `mu` (possibly
-broadcasted to the shape of `sigma`). Furthermore, because the `MeanValue`
-was marked with `stop_gradient=True`, this value will have been wrapped
-in a `stop_gradient` call to disable any possible backpropagation.
-
-##### Args:
-
-
-* <b>`dist_value_type`</b>: An instance of `MeanValue`, `SampleValue`, or
- any other stochastic value type.
-
-##### Yields:
-
- A context for `StochasticTensor` objects that controls the
- value created when they are initialized.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `dist_value_type` is not an instance of a stochastic value
- type.
-
-
-- - -
-
-### `tf.contrib.bayesflow.stochastic_tensor.get_current_value_type()` {#get_current_value_type}
-
-
-
-
-
-## Other Functions and Classes
-- - -
-
-### `class tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor` {#ObservedStochasticTensor}
-
-A StochasticTensor with an observed value.
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.__init__(dist, value, name=None)` {#ObservedStochasticTensor.__init__}
-
-Construct an `ObservedStochasticTensor`.
-
-`ObservedStochasticTensor` is backed by distribution `dist` and uses the
-provided value instead of using the current value type to draw a value from
-the distribution. The provided value argument must be appropriately shaped
-to have come from the distribution.
-
-##### Args:
-
-
-* <b>`dist`</b>: an instance of `Distribution`.
-* <b>`value`</b>: a Tensor containing the observed value
-* <b>`name`</b>: a name for this `ObservedStochasticTensor` and its ops.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `dist` is not an instance of `Distribution`.
-* <b>`ValueError`</b>: if `value` is not compatible with the distribution.
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.distribution` {#ObservedStochasticTensor.distribution}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.dtype` {#ObservedStochasticTensor.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.entropy(name='entropy')` {#ObservedStochasticTensor.entropy}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.graph` {#ObservedStochasticTensor.graph}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.loss(final_loss, name=None)` {#ObservedStochasticTensor.loss}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.mean(name='mean')` {#ObservedStochasticTensor.mean}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.name` {#ObservedStochasticTensor.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.value(name='value')` {#ObservedStochasticTensor.value}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.value_type` {#ObservedStochasticTensor.value_type}
-
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.variational_inference.md b/tensorflow/g3doc/api_docs/python/contrib.bayesflow.variational_inference.md
deleted file mode 100644
index 3da4aedcb6..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.bayesflow.variational_inference.md
+++ /dev/null
@@ -1,171 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# BayesFlow Variational Inference (contrib)
-[TOC]
-
-Variational inference.
-
-## Ops
-
-- - -
-
-### `tf.contrib.bayesflow.variational_inference.elbo(log_likelihood, variational_with_prior=None, keep_batch_dim=True, form=None, name='ELBO')` {#elbo}
-
-Evidence Lower BOund. `log p(x) >= ELBO`.
-
-Optimization objective for inference of hidden variables by variational
-inference.
-
-This function is meant to be used in conjunction with `StochasticTensor`.
-The user should build out the inference network, using `StochasticTensor`s
-as latent variables, and the generative network. `elbo` at minimum needs
-`p(x|Z)` and assumes that all `StochasticTensor`s upstream of `p(x|Z)` are
-the variational distributions. Use `register_prior` to register `Distribution`
-priors for each `StochasticTensor`. Alternatively, pass in
-`variational_with_prior` specifying all variational distributions and their
-priors.
-
-Mathematical details:
-
-```
-log p(x) = log \int p(x, Z) dZ
- = log \int \frac {q(Z)p(x, Z)}{q(Z)} dZ
- = log E_q[\frac {p(x, Z)}{q(Z)}]
- >= E_q[log \frac {p(x, Z)}{q(Z)}] = L[q; p, x] # ELBO
-
-L[q; p, x] = E_q[log p(x|Z)p(Z)] - E_q[log q(Z)]
- = E_q[log p(x|Z)p(Z)] + H[q] (1)
- = E_q[log p(x|Z)] - KL(q || p) (2)
-
-H - Entropy
-KL - Kullback-Leibler divergence
-```
-
-See section 2.2 of Stochastic Variational Inference by Hoffman et al. for
-more, including the ELBO's equivalence to minimizing `KL(q(Z)||p(Z|x))`
-in the fully Bayesian setting. https://arxiv.org/pdf/1206.7051.pdf.
-
-`form` specifies which form of the ELBO is used. `form=ELBOForms.default`
-tries, in order of preference: analytic KL, analytic entropy, sampling.
-
-Multiple entries in the `variational_with_prior` dict imply a factorization,
-e.g. `q(Z) = q(z1)q(z2)q(z3)`.
-
-##### Args:
-
-
-* <b>`log_likelihood`</b>: `Tensor` log p(x|Z).
-* <b>`variational_with_prior`</b>: dict from `StochasticTensor` q(Z) to
- `Distribution` p(Z). If `None`, defaults to all `StochasticTensor`
- objects upstream of `log_likelihood` with priors registered with
- `register_prior`.
-* <b>`keep_batch_dim`</b>: bool. Whether to keep the batch dimension when summing
- entropy/KL term. When the sample is per data point, this should be True;
- otherwise (e.g. in a Bayesian NN), this should be False.
-* <b>`form`</b>: ELBOForms constant. Controls how the ELBO is computed. Defaults to
- ELBOForms.default.
-* <b>`name`</b>: name to prefix ops with.
-
-##### Returns:
-
- `Tensor` ELBO of the same type and shape as `log_likelihood`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if variationals in `variational_with_prior` are not
- `StochasticTensor`s or if priors are not `Distribution`s.
-* <b>`TypeError`</b>: if form is not a valid ELBOForms constant.
-* <b>`ValueError`</b>: if `variational_with_prior` is None and there are no
- `StochasticTensor`s upstream of `log_likelihood`.
-* <b>`ValueError`</b>: if any variational does not have a prior passed or registered.
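-
-A minimal sketch (TF 1.x contrib API; the model wiring is illustrative) of
-`elbo` with a prior registered via `register_prior`:
-
-```python
-import tensorflow as tf
-
-distributions = tf.contrib.distributions
-st = tf.contrib.bayesflow.stochastic_tensor
-vi = tf.contrib.bayesflow.variational_inference
-
-x = tf.constant([0.1, 0.2, 0.3])  # Observed data.
-
-# Variational posterior q(Z) with trainable parameters, and its prior p(Z).
-q_z = st.StochasticTensor(distributions.Normal(
-    mu=tf.Variable(0.), sigma=tf.nn.softplus(tf.Variable(0.))))
-vi.register_prior(q_z, distributions.Normal(mu=0., sigma=1.))
-
-# Generative network p(x | Z), evaluated at the observations.
-log_likelihood = distributions.Normal(mu=q_z, sigma=1.).log_prob(x)
-
-elbo = vi.elbo(log_likelihood)
-loss = -tf.reduce_sum(elbo)
-train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
-```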
-
-
-- - -
-
-### `tf.contrib.bayesflow.variational_inference.elbo_with_log_joint(log_joint, variational=None, keep_batch_dim=True, form=None, name='ELBO')` {#elbo_with_log_joint}
-
-Evidence Lower BOund. `log p(x) >= ELBO`.
-
-This method is for models that have computed `p(x,Z)` instead of `p(x|Z)`.
-See `elbo` for further details.
-
-Because only the joint is specified, analytic KL is not available.
-
-##### Args:
-
-
-* <b>`log_joint`</b>: `Tensor` log p(x, Z).
-* <b>`variational`</b>: list of `StochasticTensor` q(Z). If `None`, defaults to all
- `StochasticTensor` objects upstream of `log_joint`.
-* <b>`keep_batch_dim`</b>: bool. Whether to keep the batch dimension when summing
- entropy term. When the sample is per data point, this should be True;
- otherwise (e.g. in a Bayesian NN), this should be False.
-* <b>`form`</b>: ELBOForms constant. Controls how the ELBO is computed. Defaults to
- ELBOForms.default.
-* <b>`name`</b>: name to prefix ops with.
-
-##### Returns:
-
- `Tensor` ELBO of the same type and shape as `log_joint`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if variationals in `variational` are not `StochasticTensor`s.
-* <b>`TypeError`</b>: if form is not a valid ELBOForms constant.
-* <b>`ValueError`</b>: if `variational` is None and there are no `StochasticTensor`s
- upstream of `log_joint`.
-* <b>`ValueError`</b>: if form is ELBOForms.analytic_kl.
-
-
-- - -
-
-### `class tf.contrib.bayesflow.variational_inference.ELBOForms` {#ELBOForms}
-
-Constants to control the `elbo` calculation.
-
-`analytic_kl` uses the analytic KL divergence between the
-variational distribution(s) and the prior(s).
-
-`analytic_entropy` uses the analytic entropy of the variational
-distribution(s).
-
-`sample` uses the sample KL, or the sample entropy if the joint is provided.
-
-See `elbo` for what is used with `default`.
-- - -
-
-#### `tf.contrib.bayesflow.variational_inference.ELBOForms.check_form(form)` {#ELBOForms.check_form}
-
-
-
-
-
-- - -
-
-### `tf.contrib.bayesflow.variational_inference.register_prior(variational, prior)` {#register_prior}
-
-Associate a variational `StochasticTensor` with a `Distribution` prior.
-
-This is a helper function used in conjunction with `elbo` that allows users
-to specify the mapping between variational distributions and their priors
-without having to pass in `variational_with_prior` explicitly.
-
-##### Args:
-
-
-* <b>`variational`</b>: `StochasticTensor` q(Z). Approximating distribution.
-* <b>`prior`</b>: `Distribution` p(Z). Prior distribution.
-
-##### Returns:
-
- None
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if variational is not a `StochasticTensor` or `prior` is not
- a `Distribution`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.copy_graph.md b/tensorflow/g3doc/api_docs/python/contrib.copy_graph.md
deleted file mode 100644
index 90c16ce140..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.copy_graph.md
+++ /dev/null
@@ -1,86 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Copying Graph Elements (contrib)
-[TOC]
-
-Functions to copy elements between graphs.
-
-See the @{$python/contrib.copy_graph} guide.
-
-## Other Functions and Classes
-- - -
-
-### `tf.contrib.copy_graph.copy_op_to_graph(org_instance, to_graph, variables, scope='')` {#copy_op_to_graph}
-
-Given an `Operation` `org_instance` from one `Graph`,
-initializes and returns a copy of it from another `Graph`,
-under the specified scope (default `""`).
-
-The copying is done recursively, so any `Operation` whose output
-is required to evaluate `org_instance` is also copied (unless
-already done).
-
-Since `Variable` instances are copied separately, those required
-to evaluate `org_instance` must be provided as input.
-
-##### Args:
-
-
-* <b>`org_instance`</b>: An `Operation` from some `Graph`. Could be a
-  `Placeholder` as well.
-* <b>`to_graph`</b>: The `Graph` to copy `org_instance` to.
-* <b>`variables`</b>: An iterable of the copied `Variable` instances needed to
-  evaluate `org_instance`.
-* <b>`scope`</b>: A scope for the new `Variable` (default `""`).
-
-##### Returns:
-
- The copied `Operation` from `to_graph`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `org_instance` is not an `Operation` or `Tensor`.
-
-
-- - -
-
-### `tf.contrib.copy_graph.copy_variable_to_graph(org_instance, to_graph, scope='')` {#copy_variable_to_graph}
-
-Given a `Variable` instance from one `Graph`, initializes and returns
-a copy of it from another `Graph`, under the specified scope
-(default `""`).
-
-##### Args:
-
-
-* <b>`org_instance`</b>: A `Variable` from some `Graph`.
-* <b>`to_graph`</b>: The `Graph` to copy the `Variable` to.
-* <b>`scope`</b>: A scope for the new `Variable` (default `""`).
-
-##### Returns:
-
- The copied `Variable` from `to_graph`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `org_instance` is not a `Variable`.
-
-
-- - -
-
-### `tf.contrib.copy_graph.get_copied_op(org_instance, graph, scope='')` {#get_copied_op}
-
-Given an `Operation` instance from some `Graph`, returns
-its namesake from `graph`, under the specified scope
-(default `""`).
-
-If a copy of `org_instance` is present in `graph` under the given
-`scope`, it will be returned.
-
-##### Args:
-
-
-* <b>`org_instance`</b>: An `Operation` from some `Graph`.
-* <b>`graph`</b>: The `Graph` to be searched for a copy of `org_instance`.
-* <b>`scope`</b>: The scope `org_instance` is present in.
-
-##### Returns:
-
- The `Operation` copy from `graph`.
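-
-A minimal sketch (TF 1.x) tying the three functions together; `get_copied_op`
-takes the `Operation`, hence `out.op` below:
-
-```python
-import tensorflow as tf
-
-copy_graph = tf.contrib.copy_graph
-
-g1 = tf.Graph()
-with g1.as_default():
-  v = tf.Variable(1.0, name='v')
-  out = tf.add(v, 2.0, name='out')
-
-g2 = tf.Graph()
-# Variables are copied first, then handed to copy_op_to_graph.
-v_copy = copy_graph.copy_variable_to_graph(v, g2)
-out_copy = copy_graph.copy_op_to_graph(out, g2, [v_copy])
-same = copy_graph.get_copied_op(out.op, g2)  # ==> the copied Operation
-```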
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.crf.md b/tensorflow/g3doc/api_docs/python/contrib.crf.md
deleted file mode 100644
index 8966bcb38d..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.crf.md
+++ /dev/null
@@ -1,212 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# CRF (contrib)
-[TOC]
-
-Linear-chain CRF layer. See the @{$python/contrib.crf} guide.
-
-- - -
-
-### `tf.contrib.crf.crf_sequence_score(inputs, tag_indices, sequence_lengths, transition_params)` {#crf_sequence_score}
-
-Computes the unnormalized score for a tag sequence.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A [batch_size, max_seq_len, num_tags] tensor of unary potentials
- to use as input to the CRF layer.
-* <b>`tag_indices`</b>: A [batch_size, max_seq_len] matrix of tag indices for which we
- compute the unnormalized score.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`transition_params`</b>: A [num_tags, num_tags] transition matrix.
-
-##### Returns:
-
-
-* <b>`sequence_scores`</b>: A [batch_size] vector of unnormalized sequence scores.
-
-
-- - -
-
-### `tf.contrib.crf.crf_log_norm(inputs, sequence_lengths, transition_params)` {#crf_log_norm}
-
-Computes the normalization for a CRF.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A [batch_size, max_seq_len, num_tags] tensor of unary potentials
- to use as input to the CRF layer.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`transition_params`</b>: A [num_tags, num_tags] transition matrix.
-
-##### Returns:
-
-
-* <b>`log_norm`</b>: A [batch_size] vector of normalizers for a CRF.
-
-
-- - -
-
-### `tf.contrib.crf.crf_log_likelihood(inputs, tag_indices, sequence_lengths, transition_params=None)` {#crf_log_likelihood}
-
-Computes the log-likelihood of tag sequences in a CRF.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A [batch_size, max_seq_len, num_tags] tensor of unary potentials
- to use as input to the CRF layer.
-* <b>`tag_indices`</b>: A [batch_size, max_seq_len] matrix of tag indices for which we
- compute the log-likelihood.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`transition_params`</b>: A [num_tags, num_tags] transition matrix, if available.
-
-##### Returns:
-
-
-* <b>`log_likelihood`</b>: A scalar containing the log-likelihood of the given sequence
- of tag indices.
-* <b>`transition_params`</b>: A [num_tags, num_tags] transition matrix. This is either
- provided by the caller or created in this function.
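-
-A minimal sketch (TF 1.x) of the usual training recipe built on this op;
-shapes and hyperparameters are illustrative:
-
-```python
-import tensorflow as tf
-
-num_tags, max_seq_len = 5, 10
-# Unary potentials would normally come from an upstream network.
-unary_scores = tf.placeholder(tf.float32, [None, max_seq_len, num_tags])
-tags = tf.placeholder(tf.int32, [None, max_seq_len])
-lengths = tf.placeholder(tf.int32, [None])
-
-log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
-    unary_scores, tags, lengths)
-loss = tf.reduce_mean(-log_likelihood)
-train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
-```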
-
-
-- - -
-
-### `tf.contrib.crf.crf_unary_score(tag_indices, sequence_lengths, inputs)` {#crf_unary_score}
-
-Computes the unary scores of tag sequences.
-
-##### Args:
-
-
-* <b>`tag_indices`</b>: A [batch_size, max_seq_len] matrix of tag indices.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`inputs`</b>: A [batch_size, max_seq_len, num_tags] tensor of unary potentials.
-
-##### Returns:
-
-
-* <b>`unary_scores`</b>: A [batch_size] vector of unary scores.
-
-
-- - -
-
-### `tf.contrib.crf.crf_binary_score(tag_indices, sequence_lengths, transition_params)` {#crf_binary_score}
-
-Computes the binary scores of tag sequences.
-
-##### Args:
-
-
-* <b>`tag_indices`</b>: A [batch_size, max_seq_len] matrix of tag indices.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`transition_params`</b>: A [num_tags, num_tags] matrix of binary potentials.
-
-##### Returns:
-
-
-* <b>`binary_scores`</b>: A [batch_size] vector of binary scores.
-
-
-- - -
-
-### `class tf.contrib.crf.CrfForwardRnnCell` {#CrfForwardRnnCell}
-
-Computes the alpha values in a linear-chain CRF.
-
-See http://www.cs.columbia.edu/~mcollins/fb.pdf for reference.
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.__call__(inputs, state, scope=None)` {#CrfForwardRnnCell.__call__}
-
-Build the CrfForwardRnnCell.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A [batch_size, num_tags] matrix of unary potentials.
-* <b>`state`</b>: A [batch_size, num_tags] matrix containing the previous alpha
- values.
-* <b>`scope`</b>: Unused variable scope of this cell.
-
-##### Returns:
-
-  new_alphas, new_alphas: A pair of [batch_size, num_tags] matrices
-  containing the new alpha values.
-
-
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.__init__(transition_params)` {#CrfForwardRnnCell.__init__}
-
-Initialize the CrfForwardRnnCell.
-
-##### Args:
-
-
-* <b>`transition_params`</b>: A [num_tags, num_tags] matrix of binary potentials.
- This matrix is expanded into a [1, num_tags, num_tags] in preparation
- for the broadcast summation occurring within the cell.
-
-
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.output_size` {#CrfForwardRnnCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.state_size` {#CrfForwardRnnCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.zero_state(batch_size, dtype)` {#CrfForwardRnnCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
-
-- - -
-
-### `tf.contrib.crf.viterbi_decode(score, transition_params)` {#viterbi_decode}
-
-Decode the highest scoring sequence of tags outside of TensorFlow.
-
-This should only be used at test time.
-
-##### Args:
-
-
-* <b>`score`</b>: A [seq_len, num_tags] matrix of unary potentials.
-* <b>`transition_params`</b>: A [num_tags, num_tags] matrix of binary potentials.
-
-##### Returns:
-
-
-* <b>`viterbi`</b>: A [seq_len] list of integers containing the highest scoring tag
-  indices.
-* <b>`viterbi_score`</b>: A float containing the score for the Viterbi sequence.
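-
-A minimal sketch: decoding runs on numpy arrays, so no session is needed.
-
-```python
-import numpy as np
-import tensorflow as tf
-
-seq_len, num_tags = 10, 5
-score = np.random.rand(seq_len, num_tags).astype(np.float32)
-transition_params = np.random.rand(num_tags, num_tags).astype(np.float32)
-
-viterbi, viterbi_score = tf.contrib.crf.viterbi_decode(score, transition_params)
-print(viterbi)        # ==> list of 10 tag indices
-print(viterbi_score)  # ==> float score of the best path
-```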
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md b/tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md
deleted file mode 100644
index e66fd67d50..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.distributions.bijector.md
+++ /dev/null
@@ -1,4336 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Random variable transformations (contrib)
-[TOC]
-
-Bijector Ops. See the @{$python/contrib.distributions.bijector} guide.
-
-- - -
-
-### `class tf.contrib.distributions.bijector.Affine` {#Affine}
-
-Compute `Y = g(X; shift, scale) = scale @ X + shift`.
-
-Here `scale = c * I + diag(D1) + tril(L) + V @ diag(D2) @ V.T`.
-
-In TF parlance, the `scale` term is logically equivalent to:
-
-```python
-scale = (
- scale_identity_multiplier * tf.diag(tf.ones(d)) +
- tf.diag(scale_diag) +
- scale_tril +
- scale_perturb_factor @ diag(scale_perturb_diag) @
-    tf.transpose(scale_perturb_factor)
-)
-```
-
-The `scale` term is applied without necessarily materializing constituent
-matrices, i.e., the matmul is [matrix-free](
-https://en.wikipedia.org/wiki/Matrix-free_methods) when possible.
-
-Examples:
-
-```python
-# Y = X
-b = Affine()
-
-# Y = X + shift
-b = Affine(shift=[1., 2, 3])
-
-# Y = 2 * I @ X.T + shift
-b = Affine(shift=[1., 2, 3],
- scale_identity_multiplier=2.)
-
-# Y = tf.diag(d1) @ X.T + shift
-b = Affine(shift=[1., 2, 3],
- scale_diag=[-1., 2, 1]) # Implicitly 3x3.
-
-# Y = (I + v * v.T) @ X.T + shift
-b = Affine(shift=[1., 2, 3],
- scale_perturb_factor=[[1., 0],
- [0, 1],
- [1, 1]])
-
-# Y = (diag(d1) + v * diag(d2) * v.T) @ X.T + shift
-b = Affine(shift=[1., 2, 3],
- scale_diag=[1., 3, 3], # Implicitly 3x3.
- scale_perturb_diag=[2., 1], # Implicitly 2x2.
- scale_perturb_factor=[[1., 0],
- [0, 1],
- [1, 1]])
-
-```
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.__init__(shift=None, scale_identity_multiplier=None, scale_diag=None, scale_tril=None, scale_perturb_factor=None, scale_perturb_diag=None, event_ndims=1, validate_args=False, name='affine')` {#Affine.__init__}
-
-Instantiates the `Affine` bijector.
-
-This `Bijector` is initialized with `shift` `Tensor` and `scale` arguments,
-giving the forward operation:
-
-```none
-Y = g(X) = scale @ X + shift
-```
-
-where the `scale` term is logically equivalent to:
-
-```python
-scale = (
- scale_identity_multiplier * tf.diag(tf.ones(d)) +
- tf.diag(scale_diag) +
- scale_tril +
- scale_perturb_factor @ diag(scale_perturb_diag) @
-    tf.transpose(scale_perturb_factor)
-)
-```
-
-If none of `scale_identity_multiplier`, `scale_diag`, or `scale_tril` are
-specified then `scale += IdentityMatrix`. Otherwise specifying a
-`scale` argument has the semantics of `scale += Expand(arg)`, i.e.,
-`scale_diag != None` means `scale += tf.diag(scale_diag)`.
-
-##### Args:
-
-
-* <b>`shift`</b>: Floating-point `Tensor`. If this is set to `None`, no shift is
- applied.
-* <b>`scale_identity_multiplier`</b>: floating point rank 0 `Tensor` representing a
- scaling done to the identity matrix.
- When `scale_identity_multiplier = scale_diag = scale_tril = None` then
- `scale += IdentityMatrix`. Otherwise no scaled-identity-matrix is added
- to `scale`.
-* <b>`scale_diag`</b>: Floating-point `Tensor` representing the diagonal matrix.
- `scale_diag` has shape [N1, N2, ... k], which represents a k x k
- diagonal matrix.
- When `None` no diagonal term is added to `scale`.
-* <b>`scale_tril`</b>: Floating-point `Tensor` representing the lower triangular
-  matrix. `scale_tril` has shape [N1, N2, ... k, k], which represents a k x k
-  lower triangular matrix.
- When `None` no `scale_tril` term is added to `scale`.
- The upper triangular elements above the diagonal are ignored.
-* <b>`scale_perturb_factor`</b>: Floating-point `Tensor` representing factor matrix
- with last two dimensions of shape `(k, r)`. When `None`, no rank-r
- update is added to `scale`.
-* <b>`scale_perturb_diag`</b>: Floating-point `Tensor` representing the diagonal
- matrix. `scale_perturb_diag` has shape [N1, N2, ... r], which
- represents an `r x r` diagonal matrix. When `None` low rank updates will
- take the form `scale_perturb_factor * scale_perturb_factor.T`.
-* <b>`event_ndims`</b>: Scalar `int32` `Tensor` indicating the number of dimensions
- associated with a particular draw from the distribution. Must be 0 or 1.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `perturb_diag` is specified but not `perturb_factor`.
-* <b>`TypeError`</b>: if `shift` has different `dtype` from `scale` arguments.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.dtype` {#Affine.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.event_ndims` {#Affine.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.forward(x, name='forward')` {#Affine.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.forward_event_shape(input_shape)` {#Affine.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Affine.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Affine.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.graph_parents` {#Affine.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse(y, name='inverse')` {#Affine.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Affine.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse_event_shape(output_shape)` {#Affine.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Affine.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Affine.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.is_constant_jacobian` {#Affine.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.name` {#Affine.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.scale` {#Affine.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + shift`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.shift` {#Affine.shift}
-
-The `shift` `Tensor` in `Y = scale @ X + shift`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.validate_args` {#Affine.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.AffineLinearOperator` {#AffineLinearOperator}
-
-Compute `Y = g(X; shift, scale) = scale @ X + shift`.
-
-`shift` is a numeric `Tensor` and `scale` is a `LinearOperator`.
-
-If `X` is a scalar then the forward transformation is: `scale * X + shift`
-where `*` denotes the scalar product.
-
-Note: we don't always simply transpose `X` (but write it this way for
-brevity). Actually the input `X` undergoes the following transformation
-before being premultiplied by `scale`:
-
-1. If there are no sample dims, we call `X = tf.expand_dims(X, 0)`, i.e.,
- `new_sample_shape = [1]`. Otherwise do nothing.
-2. The sample shape is flattened to have one dimension, i.e.,
- `new_sample_shape = [n]` where `n = tf.reduce_prod(old_sample_shape)`.
-3. The sample dim is cyclically rotated left by 1, i.e.,
- `new_shape = [B1,...,Bb, k, n]` where `n` is as above, `k` is the
- event_shape, and `B1,...,Bb` are the batch shapes for each of `b` batch
- dimensions.
-
-(For more details see `shape.make_batch_of_event_sample_matrices`.)
-
-The result of the above transformation is that `X` can be regarded as a batch
-of matrices where each column is a draw from the distribution. After
-premultiplying by `scale`, we take the inverse of this procedure. The input
-`Y` also undergoes the same transformation before/after premultiplying by
-`inv(scale)`.
-
-Example Use:
-
-```python
-linalg = tf.contrib.linalg
-
-x = [1., 2, 3]
-
-shift = [-1., 0., 1]
-diag = [1., 2, 3]
-scale = linalg.LinearOperatorDiag(diag)
-affine = AffineLinearOperator(shift, scale)
-# In this case, `forward` is equivalent to:
-# y = scale @ x + shift
-y = affine.forward(x) # [0., 4, 10]
-
-shift = [2., 3, 1]
-tril = [[1., 0, 0],
- [2, 1, 0],
- [3, 2, 1]]
-scale = linalg.LinearOperatorTriL(tril)
-affine = AffineLinearOperator(shift, scale)
-# In this case, `forward` is equivalent to:
-# np.squeeze(np.matmul(tril, np.expand_dims(x, -1)), -1) + shift
-y = affine.forward(x) # [3., 7, 11]
-```
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.__init__(shift=None, scale=None, event_ndims=1, validate_args=False, name='affine_linear_operator')` {#AffineLinearOperator.__init__}
-
-Instantiates the `AffineLinearOperator` bijector.
-
-##### Args:
-
-
-* <b>`shift`</b>: Floating-point `Tensor`.
-* <b>`scale`</b>: Subclass of `LinearOperator`. Represents the (batch) positive
- definite matrix `M` in `R^{k x k}`.
-* <b>`event_ndims`</b>: Scalar `integer` `Tensor` indicating the number of dimensions
- associated with a particular draw from the distribution. Must be 0 or 1.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `event_ndims` is not 0 or 1.
-* <b>`TypeError`</b>: if `scale` is not a `LinearOperator`.
-* <b>`TypeError`</b>: if `shift.dtype` does not match `scale.dtype`.
-* <b>`ValueError`</b>: if not `scale.is_non_singular`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.dtype` {#AffineLinearOperator.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.event_ndims` {#AffineLinearOperator.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.forward(x, name='forward')` {#AffineLinearOperator.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.forward_event_shape(input_shape)` {#AffineLinearOperator.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#AffineLinearOperator.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#AffineLinearOperator.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.graph_parents` {#AffineLinearOperator.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse(y, name='inverse')` {#AffineLinearOperator.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#AffineLinearOperator.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse_event_shape(output_shape)` {#AffineLinearOperator.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#AffineLinearOperator.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#AffineLinearOperator.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.is_constant_jacobian` {#AffineLinearOperator.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.name` {#AffineLinearOperator.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.scale` {#AffineLinearOperator.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + shift`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.shift` {#AffineLinearOperator.shift}
-
-The `shift` `Tensor` in `Y = scale @ X + shift`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.validate_args` {#AffineLinearOperator.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.Bijector` {#Bijector}
-
-Interface for transforming a `Distribution` sample.
-
-A `Bijector` implements a
-[diffeomorphism](https://en.wikipedia.org/wiki/Diffeomorphism), i.e., a
-bijective, differentiable function. A `Bijector` is used by
-`TransformedDistribution` but can be generally used for transforming a
-`Distribution` generated `Tensor`. A `Bijector` is characterized by three
-operations:
-
-1. Forward Evaluation
-
- Useful for turning one random outcome into another random outcome from a
- different distribution.
-
-2. Inverse Evaluation
-
- Useful for "reversing" a transformation to compute one probability in
- terms of another.
-
-3. (log o det o Jacobian o inverse)(x)
-
- "The log of the determinant of the matrix of all first-order partial
- derivatives of the inverse function."
- Useful for inverting a transformation to compute one probability in terms
- of another. Geometrically, the det(Jacobian) is the volume of the
- transformation and is used to scale the probability.
-
-By convention, transformations of random variables are named in terms of the
-forward transformation. The forward transformation creates samples; the
-inverse is useful for computing probabilities.
-
-Example Use:
-
- - Basic properties:
-
- ```python
- x = ... # A tensor.
- # Evaluate forward transformation.
- fwd_x = my_bijector.forward(x)
- x == my_bijector.inverse(fwd_x)
- x != my_bijector.forward(fwd_x) # Not equal because g(x) != g(g(x)).
- ```
-
- - Computing a log-likelihood:
-
- ```python
- def transformed_log_prob(bijector, log_prob, x):
- return (bijector.inverse_log_det_jacobian(x) +
- log_prob(bijector.inverse(x)))
- ```
-
- - Transforming a random outcome:
-
- ```python
- def transformed_sample(bijector, x):
- return bijector.forward(x)
- ```
-
-Example transformations:
-
- - "Exponential"
-
- ```
- Y = g(X) = exp(X)
- X ~ Normal(0, 1) # Univariate.
- ```
-
- Implies:
-
- ```
- g^{-1}(Y) = log(Y)
- |Jacobian(g^{-1})(y)| = 1 / y
- Y ~ LogNormal(0, 1), i.e.,
- prob(Y=y) = |Jacobian(g^{-1})(y)| * prob(X=g^{-1}(y))
- = (1 / y) Normal(log(y); 0, 1)
- ```
-
- Here is an example of how one might implement the `Exp` bijector:
-
- ```
- class Exp(Bijector):
- def __init__(self, event_ndims=0, validate_args=False, name="exp"):
- super(Exp, self).__init__(event_ndims=event_ndims,
- validate_args=validate_args, name=name)
- def _forward(self, x):
- return math_ops.exp(x)
- def _inverse_and_inverse_log_det_jacobian(self, y):
- x = math_ops.log(y)
- return x, -self._forward_log_det_jacobian(x)
- def _forward_log_det_jacobian(self, x):
- if self.event_ndims is None:
- raise ValueError("Jacobian requires known event_ndims.")
-      # Reduce over the event dimensions, i.e., the trailing
-      # `event_ndims` axes of `x`.
-      event_dims = math_ops.range(array_ops.rank(x) - self.event_ndims,
-                                  array_ops.rank(x))
-      return math_ops.reduce_sum(x, axis=event_dims)
- ```
-
- - "Affine"
-
- ```
- Y = g(X) = sqrtSigma * X + mu
- X ~ MultivariateNormal(0, I_d)
- ```
-
- Implies:
-
- ```
- g^{-1}(Y) = inv(sqrtSigma) * (Y - mu)
- |Jacobian(g^{-1})(y)| = det(inv(sqrtSigma))
- Y ~ MultivariateNormal(mu, sqrtSigma) , i.e.,
- prob(Y=y) = |Jacobian(g^{-1})(y)| * prob(X=g^{-1}(y))
-              = det(sqrtSigma)^(-1) *
- MultivariateNormal(inv(sqrtSigma) * (y - mu); 0, I_d)
- ```
-
-Example of why a `Bijector` needs to understand sample, batch, event
-partitioning:
-
-- Consider the `Exp` `Bijector` applied to a `Tensor` which has sample, batch,
- and event (S, B, E) shape semantics. Suppose the `Tensor`'s
- partitioned-shape is `(S=[4], B=[2], E=[3, 3])`.
-
- For `Exp`, the shape of the `Tensor` returned by `forward` and `inverse` is
- unchanged, i.e., `[4, 2, 3, 3]`. However the shape returned by
- `inverse_log_det_jacobian` is `[4, 2]` because the Jacobian is a reduction
- over the event dimensions.
-
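-A sketch of this shape behavior (the shapes are the point, not the values):
-
-```python
-exp = tf.contrib.distributions.bijector.Exp(event_ndims=2)
-x = tf.zeros([4, 2, 3, 3])              # (S=[4], B=[2], E=[3, 3])
-y = exp.forward(x)                      # shape [4, 2, 3, 3]
-ildj = exp.inverse_log_det_jacobian(y)  # shape [4, 2]; reduced over E
-```
-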
-Subclass Requirements:
-
-- Typically subclasses implement `_forward` and one or both of:
- - `_inverse`, `_inverse_log_det_jacobian`,
- - `_inverse_and_inverse_log_det_jacobian`.
-
-- If the `Bijector`'s use is limited to `TransformedDistribution` (or friends
-  like `QuantizedDistribution`) then, depending on your use, you may not need
-  to implement all of the `_forward` and `_inverse` functions. Examples:
- 1. Sampling (e.g., `sample`) only requires `_forward`.
- 2. Probability functions (e.g., `prob`, `cdf`, `survival`) only require
- `_inverse` (and related).
- 3. Only calling probability functions on the output of `sample` means
- `_inverse` can be implemented as a cache lookup.
-
- See `Example Use` [above] which shows how these functions are used to
- transform a distribution. (Note: `_forward` could theoretically be
- implemented as a cache lookup but this would require controlling the
- underlying sample generation mechanism.)
-
-- If computation can be shared among `_inverse` and
- `_inverse_log_det_jacobian` it is preferable to implement
- `_inverse_and_inverse_log_det_jacobian`. This usually reduces
- graph-construction overhead because a `Distribution`'s implementation of
- `log_prob` will need to evaluate both the inverse Jacobian as well as the
- inverse function.
-
-- If an additional use case needs just `inverse` or just
-  `inverse_log_det_jacobian` then you may wish to implement these
-  functions separately, to avoid computing the `inverse_log_det_jacobian` or
-  the `inverse`, respectively.
-
-- Subclasses should implement `_forward_event_shape`,
- `_forward_event_shape_tensor` (and `inverse` counterparts) if the
- transformation is shape-changing. By default the event-shape is assumed
- unchanged from input.
-
-Tips for implementing `_inverse` and `_inverse_log_det_jacobian`:
-
-- As case 3 [above] indicates, under some circumstances the inverse function
- can be implemented as a cache lookup.
-
-- The inverse `log o det o Jacobian` can be implemented as the negative of the
- forward `log o det o Jacobian`. This is useful if the `inverse` is
- implemented as a cache or the inverse Jacobian is computationally more
- expensive (e.g., `CholeskyOuterProduct` `Bijector`). The following
- demonstrates the suggested implementation.
-
-  ```python
-  def _inverse_and_inverse_log_det_jacobian(self, y):
-    x = ...  # implement inverse, possibly via cache.
-    return x, -self._forward_log_det_jacobian(x)  # Note negation.
-  ```
-
-  By overriding the `_inverse_and_inverse_log_det_jacobian` function we have
-  access to the inverse in one call.
-
- The correctness of this approach can be seen from the following claim.
-
- - Claim:
-
- Assume `Y=g(X)` is a bijection whose derivative exists and is nonzero
- for its domain, i.e., `d/dX g(X)!=0`. Then:
-
- ```none
- (log o det o jacobian o g^{-1})(Y) = -(log o det o jacobian o g)(X)
- ```
-
- - Proof:
-
- From the bijective, nonzero differentiability of `g`, the
- [inverse function theorem](
- https://en.wikipedia.org/wiki/Inverse_function_theorem)
- implies `g^{-1}` is differentiable in the image of `g`.
- Applying the chain rule to `y = g(x) = g(g^{-1}(y))` yields
- `I = g'(g^{-1}(y))*g^{-1}'(y)`.
-    The same theorem also implies `g^{-1}'` is non-singular, therefore
-    `inv[ g'(g^{-1}(y)) ] = g^{-1}'(y)`.
-    The claim follows from [properties of determinant](
-https://en.wikipedia.org/wiki/Determinant#Multiplicativity_and_matrix_groups).
-    (A numeric sanity check of this claim appears after this list.)
-
-- If possible, prefer a direct implementation of the inverse Jacobian. This
- should have superior numerical stability and will often share subgraphs with
- the `_inverse` implementation.
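-
-A numeric sanity check of the claim above, as a hedged sketch using `Exp`
-with scalar events (values chosen for illustration):
-
-```python
-exp = tf.contrib.distributions.bijector.Exp(event_ndims=0)
-x = tf.constant(2.)
-y = exp.forward(x)               # == tf.exp(2.)
-exp.forward_log_det_jacobian(x)  # == 2. == log|d exp(x)/dx| at x
-exp.inverse_log_det_jacobian(y)  # == -2., the negation above
-```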
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.__init__(event_ndims=None, graph_parents=None, is_constant_jacobian=False, validate_args=False, dtype=None, name=None)` {#Bijector.__init__}
-
-Constructs Bijector.
-
-A `Bijector` transforms random variables into new random variables.
-
-Examples:
-
-```python
-# Create the Y = g(X) = X transform which operates on vector events.
-identity = Identity(event_ndims=1)
-
-# Create the Y = g(X) = exp(X) transform which operates on matrices.
-exp = Exp(event_ndims=2)
-```
-
-See `Bijector` subclass docstring for more details and specific examples.
-
-##### Args:
-
-
-* <b>`event_ndims`</b>: number of dimensions associated with event coordinates.
-* <b>`graph_parents`</b>: Python list of graph prerequisites of this `Bijector`.
-* <b>`is_constant_jacobian`</b>: Python `bool` indicating that the Jacobian is not a
- function of the input.
-* <b>`validate_args`</b>: Python `bool`, default `False`. Whether to validate input
- with asserts. If `validate_args` is `False`, and the inputs are invalid,
- correct behavior is not guaranteed.
-* <b>`dtype`</b>: `tf.dtype` supported by this `Bijector`. `None` means dtype is not
- enforced.
-* <b>`name`</b>: The name to give Ops created by the initializer.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.dtype` {#Bijector.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.event_ndims` {#Bijector.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.forward(x, name='forward')` {#Bijector.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.forward_event_shape(input_shape)` {#Bijector.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Bijector.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Bijector.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.graph_parents` {#Bijector.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse(y, name='inverse')` {#Bijector.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Bijector.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse_event_shape(output_shape)` {#Bijector.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Bijector.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Bijector.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.is_constant_jacobian` {#Bijector.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.name` {#Bijector.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.validate_args` {#Bijector.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.Chain` {#Chain}
-
-Bijector which applies a sequence of bijectors.
-
-Example Use:
-
-```python
-chain = Chain([Exp(), Softplus()], name="one_plus_exp")
-```
-
-Results in:
-
-* Forward:
-
- ```python
- exp = Exp()
- softplus = Softplus()
- Chain([exp, softplus]).forward(x)
- = exp.forward(softplus.forward(x))
- = tf.exp(tf.log(1. + tf.exp(x)))
- = 1. + tf.exp(x)
- ```
-
-* Inverse:
-
- ```python
- exp = Exp()
- softplus = Softplus()
- Chain([exp, softplus]).inverse(y)
- = softplus.inverse(exp.inverse(y))
- = tf.log(tf.exp(tf.log(y)) - 1.)
- = tf.log(y - 1.)
- ```
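-
-The log-det-Jacobian terms compose additively along the same pass; as a
-hedged sketch of this property:
-
-```python
-bijector = tf.contrib.distributions.bijector
-exp, softplus = bijector.Exp(), bijector.Softplus()
-chain = bijector.Chain([exp, softplus])
-# The chain's inverse_log_det_jacobian is the sum of each constituent's,
-# evaluated at the intermediate values of the inverse pass:
-#   chain.inverse_log_det_jacobian(y)
-#     == exp.inverse_log_det_jacobian(y) +
-#        softplus.inverse_log_det_jacobian(exp.inverse(y))
-```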
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.__init__(bijectors=(), validate_args=False, name=None)` {#Chain.__init__}
-
-Instantiates `Chain` bijector.
-
-##### Args:
-
-
-* <b>`bijectors`</b>: Python list of bijector instances. An empty list makes this
- bijector equivalent to the `Identity` bijector.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str`, name given to ops managed by this object. Default:
- E.g., `Chain([Exp(), Softplus()]).name == "chain_of_exp_of_softplus"`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if bijectors have different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.bijectors` {#Chain.bijectors}
-
-The Python list of `Bijector`s this `Chain` composes.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.dtype` {#Chain.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.event_ndims` {#Chain.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.forward(x, name='forward')` {#Chain.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.forward_event_shape(input_shape)` {#Chain.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Chain.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Chain.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.graph_parents` {#Chain.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse(y, name='inverse')` {#Chain.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Chain.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse_event_shape(output_shape)` {#Chain.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Chain.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Chain.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.is_constant_jacobian` {#Chain.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.name` {#Chain.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.validate_args` {#Chain.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.CholeskyOuterProduct` {#CholeskyOuterProduct}
-
-Compute `g(X) = X @ X.T`; `X` is a lower-triangular, positive-diagonal matrix.
-
-`event_ndims` must be 0 or 2, i.e., scalar or matrix.
-
-Note: the upper-triangular part of `X` is ignored (whether or not it is zero).
-
-Examples:
-
-```python
-bijector.CholeskyOuterProduct(event_ndims=2).forward(x=[[1., 0], [2, 1]])
-# Result: [[1., 2], [2, 5]], i.e., x @ x.T
-
-bijector.CholeskyOuterProduct(event_ndims=2).inverse(y=[[1., 2], [2, 5]])
-# Result: [[1., 0], [2, 1]], i.e., cholesky(y).
-```
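-
-With `event_ndims=0` the same transform acts on positive scalars; a hedged
-sketch (so `g(x) = x**2` and `g^{-1}(y) = sqrt(y)`):
-
-```python
-bijector.CholeskyOuterProduct(event_ndims=0).forward(x=2.)
-# Result: 4., i.e., x**2
-
-bijector.CholeskyOuterProduct(event_ndims=0).inverse(y=4.)
-# Result: 2., i.e., sqrt(y)
-```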
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.__init__(event_ndims=2, validate_args=False, name='cholesky_outer_product')` {#CholeskyOuterProduct.__init__}
-
-Instantiates the `CholeskyOuterProduct` bijector.
-
-##### Args:
-
-
-* <b>`event_ndims`</b>: `constant` `int32` scalar `Tensor` indicating the number of
- dimensions associated with a particular draw from the distribution. Must
- be 0 or 2.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `event_ndims` is neither 0 nor 2.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.dtype` {#CholeskyOuterProduct.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.event_ndims` {#CholeskyOuterProduct.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.forward(x, name='forward')` {#CholeskyOuterProduct.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.forward_event_shape(input_shape)` {#CholeskyOuterProduct.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#CholeskyOuterProduct.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#CholeskyOuterProduct.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.graph_parents` {#CholeskyOuterProduct.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse(y, name='inverse')` {#CholeskyOuterProduct.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#CholeskyOuterProduct.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse_event_shape(output_shape)` {#CholeskyOuterProduct.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#CholeskyOuterProduct.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#CholeskyOuterProduct.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.is_constant_jacobian` {#CholeskyOuterProduct.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.name` {#CholeskyOuterProduct.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.validate_args` {#CholeskyOuterProduct.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.Exp` {#Exp}
-
-Compute `Y = g(X) = exp(X)`.
-
-Example Use:
-
-```python
-# Create the Y=g(X)=exp(X) transform which works only on Tensors with 1
-# batch ndim and 2 event ndims (i.e., vector of matrices).
-exp = Exp(event_ndims=2)
-x = [[[1., 2],
- [3, 4]],
- [[5, 6],
- [7, 8]]]
-exp(x) == exp.forward(x)
-log(x) == exp.inverse(x)
-```
-
-Note: the exp(.) is applied element-wise but the Jacobian is a reduction
-over the event space.
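-
-As a hedged sketch with vector events (`event_ndims=1`), the forward
-log-det-Jacobian reduces to the sum of the inputs over the event dimension:
-
-```python
-exp = Exp(event_ndims=1)
-x = [1., 2., 3.]
-exp.forward_log_det_jacobian(x)               # == 1. + 2. + 3. == 6.
-exp.inverse_log_det_jacobian(exp.forward(x))  # == -6.
-```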
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.__init__(event_ndims=0, validate_args=False, name='exp')` {#Exp.__init__}
-
-Instantiates the `Exp` bijector.
-
-##### Args:
-
-
-* <b>`event_ndims`</b>: Scalar `int32` `Tensor` indicating the number of dimensions
- associated with a particular draw from the distribution.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.dtype` {#Exp.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.event_ndims` {#Exp.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.forward(x, name='forward')` {#Exp.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.forward_event_shape(input_shape)` {#Exp.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Exp.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Exp.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.graph_parents` {#Exp.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse(y, name='inverse')` {#Exp.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Exp.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse_event_shape(output_shape)` {#Exp.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Exp.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Exp.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.is_constant_jacobian` {#Exp.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.name` {#Exp.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.power` {#Exp.power}
-
-The `c` in: `Y = g(X) = (1 + X * c)**(1 / c)`. For `Exp`, `c == 0`, which
-recovers `Y = exp(X)` as the limiting case.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.validate_args` {#Exp.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.Identity` {#Identity}
-
-Compute `Y = g(X) = X`.
-
-Example Use:
-
-```python
-# Create the Y=g(X)=X transform which is intended for Tensors with 1 batch
-# ndim and 1 event ndim (i.e., vector of vectors).
-identity = Identity(event_ndims=1)
-x = [[1., 2],
- [3, 4]]
-x == identity.forward(x) == identity.inverse(x)
-```
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.__init__(validate_args=False, event_ndims=0, name='identity')` {#Identity.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.dtype` {#Identity.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.event_ndims` {#Identity.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.forward(x, name='forward')` {#Identity.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.forward_event_shape(input_shape)` {#Identity.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Identity.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Identity.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian, i.e., `log(det(dY/dX))(X)`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.graph_parents` {#Identity.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse(y, name='inverse')` {#Identity.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Identity.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- A tuple of two `Tensor`s: the `inverse` evaluation and the
- `inverse_log_det_jacobian`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse_event_shape(output_shape)` {#Identity.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Identity.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Identity.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.is_constant_jacobian` {#Identity.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.name` {#Identity.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.validate_args` {#Identity.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.Inline` {#Inline}
-
-Bijector constructed from custom callables.
-
-Example Use:
-
-```python
-exp = Inline(
- forward_fn=tf.exp,
- inverse_fn=tf.log,
- inverse_log_det_jacobian_fn=(
- lambda y: -tf.reduce_sum(tf.log(y), axis=-1)),
- name="exp")
-```
-
-The above example is equivalent to the `Bijector` `Exp(event_ndims=1)`.
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.__init__(forward_fn=None, inverse_fn=None, inverse_log_det_jacobian_fn=None, forward_log_det_jacobian_fn=None, forward_event_shape_fn=None, forward_event_shape_tensor_fn=None, inverse_event_shape_fn=None, inverse_event_shape_tensor_fn=None, is_constant_jacobian=False, validate_args=False, name='inline')` {#Inline.__init__}
-
-Creates a `Bijector` from callables.
-
-##### Args:
-
-
-* <b>`forward_fn`</b>: Python callable implementing the forward transformation.
-* <b>`inverse_fn`</b>: Python callable implementing the inverse transformation.
-* <b>`inverse_log_det_jacobian_fn`</b>: Python callable implementing the
- log o det o jacobian of the inverse transformation.
-* <b>`forward_log_det_jacobian_fn`</b>: Python callable implementing the
- log o det o jacobian of the forward transformation.
-* <b>`forward_event_shape_fn`</b>: Python callable implementing non-identical
- static event shape changes. Default: shape is assumed unchanged.
-* <b>`forward_event_shape_tensor_fn`</b>: Python callable implementing non-identical
- event shape changes. Default: shape is assumed unchanged.
-* <b>`inverse_event_shape_fn`</b>: Python callable implementing non-identical
- static event shape changes. Default: shape is assumed unchanged.
-* <b>`inverse_event_shape_tensor_fn`</b>: Python callable implementing non-identical
- event shape changes. Default: shape is assumed unchanged.
-* <b>`is_constant_jacobian`</b>: Python `bool` indicating that the Jacobian is
- constant for all input arguments.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str`, name given to ops managed by this object.
-
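-As a fuller sketch (the lambdas here are illustrative choices, not the only
-valid ones), the class example above can also supply the forward log det
-Jacobian, which for `Y = exp(X)` with one event dimension is
-`log(det(dY/dX)) = sum(x)`:
-
-```python
-exp = Inline(
-    forward_fn=tf.exp,
-    inverse_fn=tf.log,
-    inverse_log_det_jacobian_fn=(
-        lambda y: -tf.reduce_sum(tf.log(y), axis=-1)),
-    forward_log_det_jacobian_fn=(
-        lambda x: tf.reduce_sum(x, axis=-1)),
-    name="exp")
-```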
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.dtype` {#Inline.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.event_ndims` {#Inline.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.forward(x, name='forward')` {#Inline.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.forward_event_shape(input_shape)` {#Inline.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Inline.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Inline.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian, i.e., `log(det(dY/dX))(X)`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.graph_parents` {#Inline.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse(y, name='inverse')` {#Inline.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Inline.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- A tuple of two `Tensor`s: the `inverse` evaluation and the
- `inverse_log_det_jacobian`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse_event_shape(output_shape)` {#Inline.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Inline.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Inline.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.is_constant_jacobian` {#Inline.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.name` {#Inline.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.validate_args` {#Inline.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.Invert` {#Invert}
-
-Bijector which inverts another Bijector.
-
-Example Use: [ExpGammaDistribution (see Background & Context)](
-https://reference.wolfram.com/language/ref/ExpGammaDistribution.html)
-models `Y=log(X)` where `X ~ Gamma`.
-
-```python
-exp_gamma_distribution = TransformedDistribution(
- distribution=Gamma(concentration=1., rate=2.),
- bijector=bijector.Invert(bijector.Exp()))
-```
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.__init__(bijector, validate_args=False, name=None)` {#Invert.__init__}
-
-Creates a `Bijector` which swaps the meaning of `inverse` and `forward`.
-
-Note: An inverted bijector's `inverse_log_det_jacobian` is often more
-efficient if the base bijector implements `_forward_log_det_jacobian`. If
-`_forward_log_det_jacobian` is not implemented then the following code is
-used:
-
-```python
-y = self.inverse(x, **kwargs)
-return -self.inverse_log_det_jacobian(y, **kwargs)
-```
-
-##### Args:
-
-
-* <b>`bijector`</b>: Bijector instance.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str`, name given to ops managed by this object.
-
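-As a quick sanity check (a sketch, reusing the `bijector` module alias from
-the class example above), inverting `Exp` swaps `tf.exp` and `tf.log`:
-
-```python
-log_bijector = bijector.Invert(bijector.Exp())
-log_bijector.forward([1., 7.389056])  # ==> ~[0., 2.]; applies tf.log
-log_bijector.inverse([0., 2.])        # ==> ~[1., 7.389056]; applies tf.exp
-```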
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.bijector` {#Invert.bijector}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.dtype` {#Invert.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.event_ndims` {#Invert.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.forward(x, name='forward')` {#Invert.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.forward_event_shape(input_shape)` {#Invert.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Invert.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Invert.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian, i.e., `log(det(dY/dX))(X)`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.graph_parents` {#Invert.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse(y, name='inverse')` {#Invert.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Invert.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- A tuple of two `Tensor`s: the `inverse` evaluation and the
- `inverse_log_det_jacobian`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse_event_shape(output_shape)` {#Invert.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Invert.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Invert.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.is_constant_jacobian` {#Invert.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.name` {#Invert.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.validate_args` {#Invert.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.PowerTransform` {#PowerTransform}
-
-Compute `Y = g(X) = (1 + X * c)**(1 / c), X >= -1 / c`.
-
-The [power transform](https://en.wikipedia.org/wiki/Power_transform) maps
-inputs from `[0, inf]` to `[-1/c, inf]`; this is equivalent to the `inverse`
-of this bijector.
-
-This bijector is equivalent to the `Exp` bijector when `c=0`.
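-
-For example (a minimal sketch; `power=0.5` is an arbitrary illustrative
-value), with `c = 0.5` the forward map is `Y = (1 + X / 2)**2`:
-
-```python
-pt = bijector.PowerTransform(power=0.5)
-pt.forward([0., 2., 6.])   # ==> [1., 4., 16.]
-pt.inverse([1., 4., 16.])  # ==> [0., 2., 6.]
-```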
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.__init__(power=0.0, event_ndims=0, validate_args=False, name='power_transform')` {#PowerTransform.__init__}
-
-Instantiates the `PowerTransform` bijector.
-
-##### Args:
-
-
-* <b>`power`</b>: Python `float` scalar indicating the transform power, i.e.,
- `Y = g(X) = (1 + X * c)**(1 / c)` where `c` is the `power`.
-* <b>`event_ndims`</b>: Python scalar indicating the number of dimensions associated
- with a particular draw from the distribution.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `power < 0` or is not known statically.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.dtype` {#PowerTransform.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.event_ndims` {#PowerTransform.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.forward(x, name='forward')` {#PowerTransform.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.forward_event_shape(input_shape)` {#PowerTransform.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#PowerTransform.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#PowerTransform.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian, i.e., `log(det(dY/dX))(X)`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.graph_parents` {#PowerTransform.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse(y, name='inverse')` {#PowerTransform.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#PowerTransform.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- A tuple of two `Tensor`s: the `inverse` evaluation and the
- `inverse_log_det_jacobian`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse_event_shape(output_shape)` {#PowerTransform.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#PowerTransform.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#PowerTransform.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.is_constant_jacobian` {#PowerTransform.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.name` {#PowerTransform.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.power` {#PowerTransform.power}
-
-The `c` in: `Y = g(X) = (1 + X * c)**(1 / c)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.validate_args` {#PowerTransform.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.SigmoidCentered` {#SigmoidCentered}
-
-Bijector which computes Y = g(X) = exp([X 0]) / (1 + exp(X)).
-
-Equivalent to: `bijector.SoftmaxCentered(event_ndims=0)`.
-
-See `bijector.SoftmaxCentered` for more details.
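-
-A short sketch (assuming `bijector` refers to this module): the forward
-transform maps a logit `X` to the probability vector
-`[sigmoid(X), 1 - sigmoid(X)]`:
-
-```python
-sc = bijector.SigmoidCentered()
-sc.forward(tf.log(2.))    # ==> ~[2/3, 1/3], i.e., softmax([log(2), 0])
-sc.inverse([0.75, 0.25])  # ==> ~log(3.), recovering the logit
-```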
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.__init__(validate_args=False, name='sigmoid_centered')` {#SigmoidCentered.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.dtype` {#SigmoidCentered.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.event_ndims` {#SigmoidCentered.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.forward(x, name='forward')` {#SigmoidCentered.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.forward_event_shape(input_shape)` {#SigmoidCentered.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#SigmoidCentered.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#SigmoidCentered.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian, i.e., `log(det(dY/dX))(X)`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.graph_parents` {#SigmoidCentered.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse(y, name='inverse')` {#SigmoidCentered.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#SigmoidCentered.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- A tuple of two `Tensor`s: the `inverse` evaluation and the
- `inverse_log_det_jacobian`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse_event_shape(output_shape)` {#SigmoidCentered.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#SigmoidCentered.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#SigmoidCentered.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.is_constant_jacobian` {#SigmoidCentered.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.name` {#SigmoidCentered.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.validate_args` {#SigmoidCentered.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.SoftmaxCentered` {#SoftmaxCentered}
-
-Bijector which computes `Y = g(X) = exp([X 0]) / sum(exp([X 0]))`.
-
-To implement [softmax](https://en.wikipedia.org/wiki/Softmax_function) as a
-bijection, the forward transformation appends a value to the input and the
-inverse removes this coordinate. The appended coordinate represents a pivot,
-e.g., `softmax(x) = exp(x-c) / sum(exp(x-c))` where `c` is the implicit last
-coordinate.
-
-Because we append a coordinate, this bijector only supports `event_ndims` in
-`[0, 1]`, i.e., scalars and vectors.
-
-Example Use:
-
-```python
-bijector.SoftmaxCentered(event_ndims=1).forward(tf.log([2., 3., 4.]))
-# Result: [0.2, 0.3, 0.4, 0.1]
-# The last coordinate, 0.1, is the appended pivot.
-
-bijector.SoftmaxCentered(event_ndims=1).inverse([0.2, 0.3, 0.4, 0.1])
-# Result: tf.log([2., 3., 4.])
-# The extra (pivot) coordinate is removed.
-```
-
-At first blush it may seem like the [Invariance of domain](
-https://en.wikipedia.org/wiki/Invariance_of_domain) theorem implies this
-implementation is not a bijection. However, the appended dimension
-makes the (forward) image non-open and the theorem does not directly apply.
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.__init__(event_ndims=0, validate_args=False, name='softmax_centered')` {#SoftmaxCentered.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.dtype` {#SoftmaxCentered.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.event_ndims` {#SoftmaxCentered.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.forward(x, name='forward')` {#SoftmaxCentered.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.forward_event_shape(input_shape)` {#SoftmaxCentered.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
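-Because the forward transform appends a pivot coordinate, this is one of the
-few bijectors whose event shape changes; a short sketch:
-
-```python
-sm = bijector.SoftmaxCentered(event_ndims=1)
-sm.forward_event_shape(tf.TensorShape([3]))  # ==> TensorShape([4])
-sm.inverse_event_shape(tf.TensorShape([4]))  # ==> TensorShape([3])
-```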
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#SoftmaxCentered.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#SoftmaxCentered.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian, i.e., `log(det(dY/dX))(X)`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.graph_parents` {#SoftmaxCentered.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse(y, name='inverse')` {#SoftmaxCentered.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#SoftmaxCentered.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- A tuple of two `Tensor`s: the `inverse` evaluation and the
- `inverse_log_det_jacobian`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse_event_shape(output_shape)` {#SoftmaxCentered.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#SoftmaxCentered.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#SoftmaxCentered.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.is_constant_jacobian` {#SoftmaxCentered.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.name` {#SoftmaxCentered.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.validate_args` {#SoftmaxCentered.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.bijector.Softplus` {#Softplus}
-
-Bijector which computes `Y = g(X) = Log[1 + exp(X)]`.
-
-The softplus `Bijector` has the following two useful properties:
-
-* The range is the positive real numbers.
-* `softplus(x) approx x`, for large `x`, so it does not overflow as easily as
- the `Exp` `Bijector`.
-
- Example Use:
-
- ```python
- # Create the Y=g(X)=softplus(X) transform which works only on Tensors with 1
- # batch ndim and 2 event ndims (i.e., vector of matrices).
- softplus = Softplus(event_ndims=2)
- x = [[[1., 2],
- [3, 4]],
- [[5, 6],
- [7, 8]]]
- log(1 + exp(x)) == softplus.forward(x)
- log(exp(x) - 1) == softplus.inverse(x)
- ```
-
- Note: log(.) and exp(.) are applied element-wise but the Jacobian is a
- reduction over the event space.
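-
- To make the reduction concrete (a sketch continuing the example above):
- with `event_ndims=2`, each `2 x 2` event contributes a single scalar, so
- only the batch shape remains:
-
- ```python
- # x has shape [2, 2, 2]: batch_shape=[2], event_shape=[2, 2].
- fldj = softplus.forward_log_det_jacobian(x)
- # fldj has shape [2]: log|det J| is summed over each 2 x 2 event.
- ```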
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.__init__(event_ndims=0, validate_args=False, name='softplus')` {#Softplus.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.dtype` {#Softplus.dtype}
-
-dtype of `Tensor`s transformable by this bijector.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.event_ndims` {#Softplus.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.forward(x, name='forward')` {#Softplus.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.forward_event_shape(input_shape)` {#Softplus.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Softplus.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Softplus.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian, i.e., `log(det(dY/dX))(X)`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.graph_parents` {#Softplus.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse(y, name='inverse')` {#Softplus.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Softplus.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- A tuple of two `Tensor`s: the `inverse` evaluation and the
- `inverse_log_det_jacobian`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse_event_shape(output_shape)` {#Softplus.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Softplus.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Softplus.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.is_constant_jacobian` {#Softplus.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.name` {#Softplus.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.validate_args` {#Softplus.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.distributions.md b/tensorflow/g3doc/api_docs/python/contrib.distributions.md
deleted file mode 100644
index b3a7d661db..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.distributions.md
+++ /dev/null
@@ -1,27438 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Statistical Distributions (contrib)
-[TOC]
-
-Classes representing statistical distributions and ops for working with them.
-
-See the @{$python/contrib.distributions} guide.
-
-- - -
-
-### `class tf.contrib.distributions.ReparameterizationType` {#ReparameterizationType}
-
-Instances of this class represent how sampling is reparameterized.
-
-Two static instances exist in the distributions library, signifying
-one of two possible properties for samples from a distribution:
-
-`FULLY_REPARAMETERIZED`: Samples from the distribution are fully
- reparameterized, and straight-through gradients are supported.
-
-`NOT_REPARAMETERIZED`: Samples from the distribution are not fully
- reparameterized, and straight-through gradients are either partially
- unsupported or are not supported at all. In this case, for purposes of
- e.g. RL or variational inference, it is generally safest to wrap the
- sample results in a `tf.stop_gradient` call and use policy
- gradients / a surrogate loss instead.
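-
-A sketch of the intended usage pattern (assuming `ds` aliases
-`tf.contrib.distributions`; `Normal` is just an illustrative choice):
-
-```python
-dist = ds.Normal(loc=0., scale=1.)
-if dist.reparameterization_type == ds.FULLY_REPARAMETERIZED:
-  samples = dist.sample(5)  # Gradients may flow through the samples.
-else:
-  # Block gradients and use a surrogate loss / policy gradient instead.
-  samples = tf.stop_gradient(dist.sample(5))
-```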
-- - -
-
-#### `tf.contrib.distributions.ReparameterizationType.__eq__(other)` {#ReparameterizationType.__eq__}
-
-Determine if this `ReparameterizationType` is equal to another.
-
-Since `ReparameterizationType` instances are constant static global
-instances, equality checks if two instances' id() values are equal.
-
-##### Args:
-
-
-* <b>`other`</b>: Object to compare against.
-
-##### Returns:
-
- `self is other`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ReparameterizationType.__init__(rep_type)` {#ReparameterizationType.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ReparameterizationType.__repr__()` {#ReparameterizationType.__repr__}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Distribution` {#Distribution}
-
-A generic probability distribution base class.
-
-`Distribution` is a base class for constructing and organizing properties
-(e.g., mean, variance) of random variables (e.g., Bernoulli, Gaussian).
-
-### Subclassing
-
-Subclasses are expected to implement a leading-underscore version of the
-same-named function. The argument signature should be identical except for
-the omission of `name="..."`. For example, to enable `log_prob(value,
-name="log_prob")` a subclass should implement `_log_prob(value)`.
-
-Subclasses can append to public-level docstrings by providing
-docstrings for their method specializations. For example:
-
-```python
-@distribution_util.AppendDocstring("Some other details.")
-def _log_prob(self, value):
- ...
-```
-
-would add the string "Some other details." to the `log_prob` function
-docstring. This is implemented as a simple decorator to avoid python
-linter complaining about missing Args/Returns/Raises sections in the
-partial docstrings.
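-
-As a hedged sketch of this contract (a hypothetical `Exponential` subclass;
-shape methods and argument validation are elided):
-
-```python
-import tensorflow as tf
-
-tfd = tf.contrib.distributions
-
-
-class Exponential(tfd.Distribution):
-
-  def __init__(self, rate, name="Exponential"):
-    self._rate = tf.convert_to_tensor(rate, name="rate")
-    super(Exponential, self).__init__(
-        dtype=self._rate.dtype,
-        is_continuous=True,
-        reparameterization_type=tfd.FULLY_REPARAMETERIZED,
-        validate_args=False,
-        allow_nan_stats=True,
-        parameters={"rate": rate},
-        graph_parents=[self._rate],
-        name=name)
-
-  # The public `log_prob(value, name="log_prob")` wrapper is inherited from
-  # `Distribution`; the subclass supplies only this specialization.
-  def _log_prob(self, value):
-    return tf.log(self._rate) - self._rate * value
-```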
-
-### Broadcasting, batching, and shapes
-
-All distributions support batches of independent distributions of that type.
-The batch shape is determined by broadcasting together the parameters.
-
-The shape of arguments to `__init__`, `cdf`, `log_cdf`, `prob`, and
-`log_prob` reflects this broadcasting, as does the return value of `sample`
-and `sample_n`.
-
-`sample_n_shape = [n] + batch_shape + event_shape`, where `sample_n_shape` is
-the shape of the `Tensor` returned from `sample_n`, `n` is the number of
-samples, `batch_shape` defines how many independent distributions there are,
-and `event_shape` defines the shape of samples from each of those independent
-distributions. Samples are independent along the `batch_shape` dimensions, but
-not necessarily so along the `event_shape` dimensions (depending on the
-particulars of the underlying distribution).
-
-Using the `Uniform` distribution as an example:
-
-```python
-minval = 3.0
-maxval = [[4.0, 6.0],
- [10.0, 12.0]]
-
-# Broadcasting:
-# This instance represents 4 Uniform distributions. Each has a lower bound at
-# 3.0 as the `minval` parameter was broadcast to match `maxval`'s shape.
-u = Uniform(minval, maxval)
-
-# `event_shape` is `TensorShape([])`.
-event_shape = u.event_shape
-# `event_shape_t` is a `Tensor` which will evaluate to [].
-event_shape_t = u.event_shape_tensor()
-
-# Sampling returns a sample per distribution. `samples` has shape
-# [5, 2, 2], which is [n] + batch_shape + event_shape, where n=5,
-# batch_shape=[2, 2], and event_shape=[].
-samples = u.sample_n(5)
-
-# The broadcasting holds across methods. Here we use `cdf` as an example. The
-# same holds for `log_cdf` and the likelihood functions.
-
-# `cum_prob_broadcast` has shape [2, 2] as the `value` argument was
-# broadcast to the shape of the `Uniform` instance.
-cum_prob_broadcast = u.cdf(4.0)
-
-# `cum_prob_per_dist` has shape [2, 2], one per distribution. No
-# broadcasting occurred.
-cum_prob_per_dist = u.cdf([[4.0, 5.0],
- [6.0, 7.0]])
-
-# INVALID as the `value` argument is not broadcastable to the distribution's
-# shape.
-cum_prob_invalid = u.cdf([4.0, 5.0, 6.0])
-```
-
-### Parameter values leading to undefined statistics or distributions.
-
-Some distributions do not have well-defined statistics for all initialization
-parameter values. For example, the beta distribution is parameterized by
-positive real numbers `concentration1` and `concentration0`, and does not
-have a well-defined mode if `concentration1 < 1` or `concentration0 < 1`.
-
-The user is given the option of raising an exception or returning `NaN`.
-
-```python
-a = tf.exp(tf.matmul(logits, weights_a))
-b = tf.exp(tf.matmul(logits, weights_b))
-
-# Will raise exception if ANY batch member has a < 1 or b < 1.
-dist = distributions.Beta(a, b, allow_nan_stats=False)
-mode = dist.mode().eval()
-
-# Will return NaN for batch members with either a < 1 or b < 1.
-dist = distributions.Beta(a, b, allow_nan_stats=True)  # Default behavior.
-mode = dist.mode().eval()
-```
-
-In all cases, an exception is raised if *invalid* parameters are passed, e.g.
-
-```python
-# Will raise an exception if any Op is run.
-negative_a = -1.0 * a # beta distribution by definition has a > 0.
-dist = distributions.Beta(negative_a, b, allow_nan_stats=True)
-dist.mean().eval()
-```
-- - -
-
-#### `tf.contrib.distributions.Distribution.__init__(dtype, is_continuous, reparameterization_type, validate_args, allow_nan_stats, parameters=None, graph_parents=None, name=None)` {#Distribution.__init__}
-
-Constructs the `Distribution`.
-
-**This is a private method for subclass use.**
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of the event samples. `None` implies no type-enforcement.
-* <b>`is_continuous`</b>: Python `bool`. If `True` this `Distribution` is continuous
- over its supported domain.
-* <b>`reparameterization_type`</b>: Instance of `ReparameterizationType`.
- If `distributions.FULLY_REPARAMETERIZED`, this
- `Distribution` can be reparameterized in terms of some standard
- distribution with a function whose Jacobian is constant for the support
- of the standard distribution. If `distributions.NOT_REPARAMETERIZED`,
- then no such reparameterization is available.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`parameters`</b>: Python `dict` of parameters used to instantiate this
- `Distribution`.
-* <b>`graph_parents`</b>: Python `list` of graph prerequisites of this
- `Distribution`.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class. Default:
- subclass name.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any member of graph_parents is `None` or not a `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.allow_nan_stats` {#Distribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.batch_shape` {#Distribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.batch_shape_tensor(name='batch_shape_tensor')` {#Distribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.cdf(value, name='cdf')` {#Distribution.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.copy(**override_parameters_kwargs)` {#Distribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
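-For instance, a hedged usage sketch (using `Bernoulli`; values illustrative):
-
-```python
-dist = tf.contrib.distributions.Bernoulli(probs=0.3)
-# Same parameters except `probs` is overridden, i.e., equivalent to
-# constructing Bernoulli(**dict(dist.parameters, probs=0.5)).
-dist2 = dist.copy(probs=0.5)
-```
-
-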
-- - -
-
-#### `tf.contrib.distributions.Distribution.covariance(name='covariance')` {#Distribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.dtype` {#Distribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.entropy(name='entropy')` {#Distribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.event_shape` {#Distribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.event_shape_tensor(name='event_shape_tensor')` {#Distribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.is_continuous` {#Distribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.is_scalar_batch(name='is_scalar_batch')` {#Distribution.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.is_scalar_event(name='is_scalar_event')` {#Distribution.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.log_cdf(value, name='log_cdf')` {#Distribution.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.log_prob(value, name='log_prob')` {#Distribution.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.log_survival_function(value, name='log_survival_function')` {#Distribution.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `log(1 - cdf(x))` when
-`x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.mean(name='mean')` {#Distribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.mode(name='mode')` {#Distribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.name` {#Distribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Distribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
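-A hedged illustration with `Normal` (whose parameters are assumed to be
-`loc` and `scale`):
-
-```python
-shapes = tf.contrib.distributions.Normal.param_shapes([100])
-# `shapes` maps each parameter name to a shape `Tensor`, here
-# {"loc": [100], "scale": [100]}: parameters of these shapes make a call to
-# `sample()` return a `Tensor` of shape [100].
-```
-
-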
-- - -
-
-#### `tf.contrib.distributions.Distribution.param_static_shapes(cls, sample_shape)` {#Distribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.parameters` {#Distribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.prob(value, name='prob')` {#Distribution.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.reparameterization_type` {#Distribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.sample(sample_shape=(), seed=None, name='sample')` {#Distribution.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.stddev(name='stddev')` {#Distribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.survival_function(value, name='survival_function')` {#Distribution.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.validate_args` {#Distribution.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.variance(name='variance')` {#Distribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Binomial` {#Binomial}
-
-Binomial distribution.
-
-This distribution is parameterized by `probs`, a (batch of) probabilities for
-drawing a `1`, and `total_count`, the number of trials per draw from the
-Binomial.
-
-#### Mathematical Details
-
-The Binomial is a distribution over the number of `1`'s in `total_count`
-independent trials, with each trial having the same probability of `1`, i.e.,
-`probs`.
-
-The probability mass function (pmf) is,
-
-```none
-pmf(k; n, p) = p**k (1 - p)**(n - k) / Z
-Z = k! (n - k)! / n!
-```
-
-where:
-* `total_count = n`,
-* `probs = p`,
-* `Z` is the normalizing constant, and
-* `n!` is the factorial of `n`.
-
-#### Examples
-
-Create a single distribution, corresponding to 5 coin flips.
-
-```python
-dist = Binomial(total_count=5., probs=.5)
-```
-
-Create a single distribution (using logits), corresponding to 5 coin flips.
-
-```python
-dist = Binomial(total_count=5., logits=0.)
-```
-
-Create three distributions, with the third most likely to have
-successes.
-
-```python
-p = [.2, .3, .8]
-# n will be broadcast to [4., 4., 4.], to match p.
-dist = Binomial(total_count=4., probs=p)
-```
-
-The distribution functions can be evaluated on counts.
-
-```python
-# counts same shape as p.
-counts = [1., 2, 3]
-dist.prob(counts) # Shape [3]
-
-# p will be broadcast to [[.2, .3, .8], [.2, .3, .8]] to match counts.
-counts = [[1., 2, 1], [2, 2, 4]]
-dist.prob(counts) # Shape [2, 3]
-
-# p will be broadcast to shape [5, 7, 3] to match counts.
-counts = [[...]] # Shape [5, 7, 3]
-dist.prob(counts) # Shape [5, 7, 3]
-```
-- - -
-
-#### `tf.contrib.distributions.Binomial.__init__(total_count, logits=None, probs=None, validate_args=False, allow_nan_stats=True, name='Binomial')` {#Binomial.__init__}
-
-Initialize a batch of Binomial distributions.
-
-##### Args:
-
-
-* <b>`total_count`</b>: Non-negative floating point tensor with shape broadcastable
- to `[N1,..., Nm]` with `m >= 0` and the same dtype as `probs` or
- `logits`. Defines this as a batch of `N1 x ... x Nm` different Binomial
- distributions. Its components should be equal to integer values.
-* <b>`logits`</b>: Floating point tensor representing the log-odds of a
- positive event with shape broadcastable to `[N1,..., Nm]` `m >= 0`, and
- the same dtype as `total_count`. Each entry represents logits for the
- probability of success for independent Binomial distributions. Only one
- of `logits` or `probs` should be passed in.
-* <b>`probs`</b>: Positive floating point tensor with shape broadcastable to
- `[N1,..., Nm]` `m >= 0`, `probs in [0, 1]`. Each entry represents the
- probability of success for independent Binomial distributions. Only one
- of `logits` or `probs` should be passed in.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.allow_nan_stats` {#Binomial.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.batch_shape` {#Binomial.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.batch_shape_tensor(name='batch_shape_tensor')` {#Binomial.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.cdf(value, name='cdf')` {#Binomial.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.copy(**override_parameters_kwargs)` {#Binomial.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.covariance(name='covariance')` {#Binomial.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.dtype` {#Binomial.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.entropy(name='entropy')` {#Binomial.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.event_shape` {#Binomial.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.event_shape_tensor(name='event_shape_tensor')` {#Binomial.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.is_continuous` {#Binomial.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.is_scalar_batch(name='is_scalar_batch')` {#Binomial.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.is_scalar_event(name='is_scalar_event')` {#Binomial.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.log_cdf(value, name='log_cdf')` {#Binomial.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.log_prob(value, name='log_prob')` {#Binomial.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Binomial`:
-
-For each batch member of counts `value`, `P[value]` is the probability that
-after sampling `self.total_count` draws from this Binomial distribution, the
-number of successes is `value`. Since different sequences of draws can result in
-the same counts, the probability includes a combinatorial coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `dtype` and whose shape
-can be broadcast with `self.probs` and `self.total_count`. `value` is only legal
-if it is less than or equal to `self.total_count` and its components are equal
-to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.log_survival_function(value, name='log_survival_function')` {#Binomial.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `log(1 - cdf(x))` when
-`x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.logits` {#Binomial.logits}
-
-Log-odds of drawing a `1`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.mean(name='mean')` {#Binomial.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.mode(name='mode')` {#Binomial.mode}
-
-Mode.
-
-Additional documentation from `Binomial`:
-
-Note that when `(1 + total_count) * probs` is an integer, there are
-actually two modes. Namely, `(1 + total_count) * probs` and
-`(1 + total_count) * probs - 1` are both modes. Here we return only the
-larger of the two modes.
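-
-A hedged numeric illustration of this tie-breaking rule:
-
-```python
-# (1 + total_count) * probs = (1 + 4) * 0.4 = 2.0 is an integer, so both
-# 2 and 1 are modes; `mode()` returns the larger, i.e., 2.
-dist = tf.contrib.distributions.Binomial(total_count=4., probs=0.4)
-mode = dist.mode()  # ==> 2.0 when evaluated.
-```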
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.name` {#Binomial.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Binomial.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.param_static_shapes(cls, sample_shape)` {#Binomial.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.parameters` {#Binomial.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.prob(value, name='prob')` {#Binomial.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Binomial`:
-
-For each batch member of counts `value`, `P[value]` is the probability that
-after sampling `self.total_count` draws from this Binomial distribution, the
-number of successes is `value`. Since different sequences of draws can result in
-the same counts, the probability includes a combinatorial coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `dtype` and whose shape
-can be broadcast with `self.probs` and `self.total_count`. `value` is only legal
-if it is less than or equal to `self.total_count` and its components are equal
-to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.probs` {#Binomial.probs}
-
-Probability of drawing a `1`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.reparameterization_type` {#Binomial.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.sample(sample_shape=(), seed=None, name='sample')` {#Binomial.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.stddev(name='stddev')` {#Binomial.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.survival_function(value, name='survival_function')` {#Binomial.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.total_count` {#Binomial.total_count}
-
-Number of trials.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.validate_args` {#Binomial.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.variance(name='variance')` {#Binomial.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Bernoulli` {#Bernoulli}
-
-Bernoulli distribution.
-
-The Bernoulli distribution is parameterized by `probs`, the probability of a
-`1` outcome (vs. a `0` outcome).
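-
-A brief, hedged usage sketch (parameter values illustrative):
-
-```python
-# Three independent Bernoulli distributions, one per entry of `probs`.
-dist = tf.contrib.distributions.Bernoulli(probs=[0.1, 0.5, 0.9])
-samples = dist.sample(7)               # Shape [7, 3]; `int32` zeros and ones.
-log_pmf = dist.log_prob([0., 1., 1.])  # Shape [3].
-```
-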
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.__init__(logits=None, probs=None, dtype=tf.int32, validate_args=False, allow_nan_stats=True, name='Bernoulli')` {#Bernoulli.__init__}
-
-Construct Bernoulli distributions.
-
-##### Args:
-
-
-* <b>`logits`</b>: An N-D `Tensor` representing the log-odds of a `1` event. Each
- entry in the `Tensor` parametrizes an independent Bernoulli distribution
- where the probability of an event is sigmoid(logits). Only one of
- `logits` or `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor` representing the probability of a `1`
- event. Each entry in the `Tensor` parameterizes an independent
- Bernoulli distribution. Only one of `logits` or `probs` should be passed
- in.
-* <b>`dtype`</b>: The type of the event samples. Default: `int32`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `probs` and `logits` are both passed, or if neither is passed.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.allow_nan_stats` {#Bernoulli.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.batch_shape` {#Bernoulli.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.batch_shape_tensor(name='batch_shape_tensor')` {#Bernoulli.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.cdf(value, name='cdf')` {#Bernoulli.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.copy(**override_parameters_kwargs)` {#Bernoulli.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.covariance(name='covariance')` {#Bernoulli.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.dtype` {#Bernoulli.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.entropy(name='entropy')` {#Bernoulli.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.event_shape` {#Bernoulli.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.event_shape_tensor(name='event_shape_tensor')` {#Bernoulli.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.is_continuous` {#Bernoulli.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.is_scalar_batch(name='is_scalar_batch')` {#Bernoulli.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.is_scalar_event(name='is_scalar_event')` {#Bernoulli.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.log_cdf(value, name='log_cdf')` {#Bernoulli.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.log_prob(value, name='log_prob')` {#Bernoulli.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.log_survival_function(value, name='log_survival_function')` {#Bernoulli.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `log(1 - cdf(x))` when
-`x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.logits` {#Bernoulli.logits}
-
-Log-odds of a `1` outcome (vs `0`).
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.mean(name='mean')` {#Bernoulli.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.mode(name='mode')` {#Bernoulli.mode}
-
-Mode.
-
-Additional documentation from `Bernoulli`:
-
-Returns `1` if `prob > 0.5` and `0` otherwise.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.name` {#Bernoulli.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Bernoulli.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.param_static_shapes(cls, sample_shape)` {#Bernoulli.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.parameters` {#Bernoulli.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.prob(value, name='prob')` {#Bernoulli.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.probs` {#Bernoulli.probs}
-
-Probability of a `1` outcome (vs `0`).
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.reparameterization_type` {#Bernoulli.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.sample(sample_shape=(), seed=None, name='sample')` {#Bernoulli.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.stddev(name='stddev')` {#Bernoulli.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.survival_function(value, name='survival_function')` {#Bernoulli.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.validate_args` {#Bernoulli.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.variance(name='variance')` {#Bernoulli.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.BernoulliWithSigmoidProbs` {#BernoulliWithSigmoidProbs}
-
-Bernoulli with `probs = nn.sigmoid(logits)`.
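-
-For example (a hedged sketch; values approximate):
-
-```python
-dist = tf.contrib.distributions.BernoulliWithSigmoidProbs(logits=[-2., 0., 2.])
-# dist.probs == tf.nn.sigmoid(logits) ==> approximately [0.119, 0.5, 0.881].
-```
-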
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.__init__(logits=None, dtype=tf.int32, validate_args=False, allow_nan_stats=True, name='BernoulliWithSigmoidProbs')` {#BernoulliWithSigmoidProbs.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.allow_nan_stats` {#BernoulliWithSigmoidProbs.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.batch_shape` {#BernoulliWithSigmoidProbs.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.batch_shape_tensor(name='batch_shape_tensor')` {#BernoulliWithSigmoidProbs.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.cdf(value, name='cdf')` {#BernoulliWithSigmoidProbs.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.copy(**override_parameters_kwargs)` {#BernoulliWithSigmoidProbs.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.covariance(name='covariance')` {#BernoulliWithSigmoidProbs.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.dtype` {#BernoulliWithSigmoidProbs.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.entropy(name='entropy')` {#BernoulliWithSigmoidProbs.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.event_shape` {#BernoulliWithSigmoidProbs.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.event_shape_tensor(name='event_shape_tensor')` {#BernoulliWithSigmoidProbs.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.is_continuous` {#BernoulliWithSigmoidProbs.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.is_scalar_batch(name='is_scalar_batch')` {#BernoulliWithSigmoidProbs.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.is_scalar_event(name='is_scalar_event')` {#BernoulliWithSigmoidProbs.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.log_cdf(value, name='log_cdf')` {#BernoulliWithSigmoidProbs.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.log_prob(value, name='log_prob')` {#BernoulliWithSigmoidProbs.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.log_survival_function(value, name='log_survival_function')` {#BernoulliWithSigmoidProbs.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.logits` {#BernoulliWithSigmoidProbs.logits}
-
-Log-odds of a `1` outcome (vs `0`).
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.mean(name='mean')` {#BernoulliWithSigmoidProbs.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.mode(name='mode')` {#BernoulliWithSigmoidProbs.mode}
-
-Mode.
-
-Additional documentation from `Bernoulli`:
-
-Returns `1` if `prob > 0.5` and `0` otherwise.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.name` {#BernoulliWithSigmoidProbs.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#BernoulliWithSigmoidProbs.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.param_static_shapes(cls, sample_shape)` {#BernoulliWithSigmoidProbs.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.parameters` {#BernoulliWithSigmoidProbs.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.prob(value, name='prob')` {#BernoulliWithSigmoidProbs.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.probs` {#BernoulliWithSigmoidProbs.probs}
-
-Probability of a `1` outcome (vs `0`).
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.reparameterization_type` {#BernoulliWithSigmoidProbs.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.sample(sample_shape=(), seed=None, name='sample')` {#BernoulliWithSigmoidProbs.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.stddev(name='stddev')` {#BernoulliWithSigmoidProbs.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.survival_function(value, name='survival_function')` {#BernoulliWithSigmoidProbs.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.validate_args` {#BernoulliWithSigmoidProbs.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.variance(name='variance')` {#BernoulliWithSigmoidProbs.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
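-For a Bernoulli variable with success probability `p`, the definition above
-reduces to `p * (1 - p)`; a small sketch of that check (values are
-illustrative):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.BernoulliWithSigmoidProbs(logits=[0.])  # sigmoid(0.) = 0.5
-with tf.Session() as sess:
-  print(sess.run(dist.variance()))  # ~[0.25] = 0.5 * (1 - 0.5)
-```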
-
-
-- - -
-
-### `class tf.contrib.distributions.Beta` {#Beta}
-
-Beta distribution.
-
-The Beta distribution is defined over the `(0, 1)` interval using parameters
-`concentration1` (aka "alpha") and `concentration0` (aka "beta").
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; alpha, beta) = x**(alpha - 1) (1 - x)**(beta - 1) / Z
-Z = Gamma(alpha) Gamma(beta) / Gamma(alpha + beta)
-```
-
-where:
-
-* `concentration1 = alpha`,
-* `concentration0 = beta`,
-* `Z` is the normalization constant, and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The concentration parameters represent mean total counts of a `1` or a `0`,
-i.e.,
-
-```none
-concentration1 = alpha = mean * total_concentration
-concentration0 = beta = (1. - mean) * total_concentration
-```
-
-where `mean` is in `(0, 1)` and `total_concentration` is a positive real number
-representing a mean `total_count = concentration1 + concentration0`.
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-#### Examples
-
-```python
-# Create a batch of three Beta distributions.
-alpha = [1, 2, 3]
-beta = [1, 2, 3]
-dist = Beta(alpha, beta)
-
-dist.sample([4, 5]) # Shape [4, 5, 3]
-
-# `x` has three batch entries, each with two samples.
-x = [[.1, .4, .5],
- [.2, .3, .5]]
-# Calculate the probability of each pair of samples under the corresponding
-# distribution in `dist`.
-dist.prob(x) # Shape [2, 3]
-```
-
-```python
-# Create batch_shape=[2, 3] via parameter broadcast:
-alpha = [[1.], [2]] # Shape [2, 1]
-beta = [3., 4, 5] # Shape [3]
-dist = Beta(alpha, beta)
-
-# alpha broadcast as: [[1., 1, 1,],
-# [2, 2, 2]]
-# beta broadcast as: [[3., 4, 5],
-# [3, 4, 5]]
-# batch_shape [2, 3]
-dist.sample([4, 5]) # Shape [4, 5, 2, 3]
-
-x = [.2, .3, .5]
-# x will be broadcast as [[.2, .3, .5],
-# [.2, .3, .5]],
-# thus matching batch_shape [2, 3].
-dist.prob(x) # Shape [2, 3]
-```
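-
-The mean/total-concentration parameterization above can be sketched as
-follows (reusing the `Beta` alias from the examples; the numbers are
-illustrative):
-
-```python
-mean = 0.25
-total_concentration = 8.
-alpha = mean * total_concentration          # concentration1 = 2.
-beta = (1. - mean) * total_concentration    # concentration0 = 6.
-dist = Beta(alpha, beta)
-# dist.mean() evaluates to alpha / (alpha + beta) = 0.25.
-```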
-- - -
-
-#### `tf.contrib.distributions.Beta.__init__(concentration1=None, concentration0=None, validate_args=False, allow_nan_stats=True, name='Beta')` {#Beta.__init__}
-
-Initialize a batch of Beta distributions.
-
-##### Args:
-
-
-* <b>`concentration1`</b>: Positive floating-point `Tensor` indicating mean
- number of successes; aka "alpha". Implies `self.dtype` and
- `self.batch_shape`, i.e.,
- `concentration1.shape = [N1, N2, ..., Nm] = self.batch_shape`.
-* <b>`concentration0`</b>: Positive floating-point `Tensor` indicating mean
- number of failures; aka "beta". Otherwise has same semantics as
- `concentration1`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.allow_nan_stats` {#Beta.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.batch_shape` {#Beta.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.batch_shape_tensor(name='batch_shape_tensor')` {#Beta.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.cdf(value, name='cdf')` {#Beta.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.concentration0` {#Beta.concentration0}
-
-Concentration parameter associated with a `0` outcome.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.concentration1` {#Beta.concentration1}
-
-Concentration parameter associated with a `1` outcome.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.copy(**override_parameters_kwargs)` {#Beta.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
-  of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.covariance(name='covariance')` {#Beta.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.dtype` {#Beta.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.entropy(name='entropy')` {#Beta.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.event_shape` {#Beta.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.event_shape_tensor(name='event_shape_tensor')` {#Beta.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.is_continuous` {#Beta.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.is_scalar_batch(name='is_scalar_batch')` {#Beta.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.is_scalar_event(name='is_scalar_event')` {#Beta.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.log_cdf(value, name='log_cdf')` {#Beta.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.log_prob(value, name='log_prob')` {#Beta.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.log_survival_function(value, name='log_survival_function')` {#Beta.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.mean(name='mean')` {#Beta.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.mode(name='mode')` {#Beta.mode}
-
-Mode.
-
-Additional documentation from `Beta`:
-
-Note: The mode is undefined when `concentration1 <= 1` or
-`concentration0 <= 1`. If `self.allow_nan_stats` is `True`, `NaN`
-is used for undefined modes. If `self.allow_nan_stats` is `False` an
-exception is raised when one or more modes are undefined.
-
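-A small sketch of that behavior (parameter values chosen so the mode is
-undefined):
-
-```python
-dist = Beta(concentration1=0.5, concentration0=0.5)  # allow_nan_stats=True
-# dist.mode() evaluates to NaN since both concentrations are <= 1.
-
-strict = Beta(concentration1=0.5, concentration0=0.5,
-              allow_nan_stats=False)
-# strict.mode() raises when evaluated, because the mode is undefined.
-```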
-
-- - -
-
-#### `tf.contrib.distributions.Beta.name` {#Beta.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Beta.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.param_static_shapes(cls, sample_shape)` {#Beta.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.parameters` {#Beta.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.prob(value, name='prob')` {#Beta.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.reparameterization_type` {#Beta.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
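-A sketch of how this is typically consumed (assuming the
-`tf.contrib.distributions` alias below):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.Beta(2., 3.)
-is_reparameterized = (
-    dist.reparameterization_type == ds.FULLY_REPARAMETERIZED)
-# When True, gradients can flow from samples back to the parameters;
-# otherwise a score-function (REINFORCE-style) estimator is needed.
-```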
-
-- - -
-
-#### `tf.contrib.distributions.Beta.sample(sample_shape=(), seed=None, name='sample')` {#Beta.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.stddev(name='stddev')` {#Beta.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.survival_function(value, name='survival_function')` {#Beta.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.total_concentration` {#Beta.total_concentration}
-
-Sum of concentration parameters.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.validate_args` {#Beta.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.variance(name='variance')` {#Beta.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.BetaWithSoftplusConcentration` {#BetaWithSoftplusConcentration}
-
-Beta with softplus transform of `concentration1` and `concentration0`.
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.__init__(concentration1, concentration0, validate_args=False, allow_nan_stats=True, name='BetaWithSoftplusConcentration')` {#BetaWithSoftplusConcentration.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.allow_nan_stats` {#BetaWithSoftplusConcentration.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.batch_shape` {#BetaWithSoftplusConcentration.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.batch_shape_tensor(name='batch_shape_tensor')` {#BetaWithSoftplusConcentration.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.cdf(value, name='cdf')` {#BetaWithSoftplusConcentration.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.concentration0` {#BetaWithSoftplusConcentration.concentration0}
-
-Concentration parameter associated with a `0` outcome.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.concentration1` {#BetaWithSoftplusConcentration.concentration1}
-
-Concentration parameter associated with a `1` outcome.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.copy(**override_parameters_kwargs)` {#BetaWithSoftplusConcentration.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
-  of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.covariance(name='covariance')` {#BetaWithSoftplusConcentration.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.dtype` {#BetaWithSoftplusConcentration.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.entropy(name='entropy')` {#BetaWithSoftplusConcentration.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.event_shape` {#BetaWithSoftplusConcentration.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.event_shape_tensor(name='event_shape_tensor')` {#BetaWithSoftplusConcentration.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.is_continuous` {#BetaWithSoftplusConcentration.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.is_scalar_batch(name='is_scalar_batch')` {#BetaWithSoftplusConcentration.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.is_scalar_event(name='is_scalar_event')` {#BetaWithSoftplusConcentration.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.log_cdf(value, name='log_cdf')` {#BetaWithSoftplusConcentration.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.log_prob(value, name='log_prob')` {#BetaWithSoftplusConcentration.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.log_survival_function(value, name='log_survival_function')` {#BetaWithSoftplusConcentration.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.mean(name='mean')` {#BetaWithSoftplusConcentration.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.mode(name='mode')` {#BetaWithSoftplusConcentration.mode}
-
-Mode.
-
-Additional documentation from `Beta`:
-
-Note: The mode is undefined when `concentration1 <= 1` or
-`concentration0 <= 1`. If `self.allow_nan_stats` is `True`, `NaN`
-is used for undefined modes. If `self.allow_nan_stats` is `False` an
-exception is raised when one or more modes are undefined.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.name` {#BetaWithSoftplusConcentration.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#BetaWithSoftplusConcentration.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.param_static_shapes(cls, sample_shape)` {#BetaWithSoftplusConcentration.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.parameters` {#BetaWithSoftplusConcentration.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.prob(value, name='prob')` {#BetaWithSoftplusConcentration.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.reparameterization_type` {#BetaWithSoftplusConcentration.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.sample(sample_shape=(), seed=None, name='sample')` {#BetaWithSoftplusConcentration.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.stddev(name='stddev')` {#BetaWithSoftplusConcentration.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.survival_function(value, name='survival_function')` {#BetaWithSoftplusConcentration.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.total_concentration` {#BetaWithSoftplusConcentration.total_concentration}
-
-Sum of concentration parameters.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.validate_args` {#BetaWithSoftplusConcentration.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.variance(name='variance')` {#BetaWithSoftplusConcentration.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Categorical` {#Categorical}
-
-Categorical distribution.
-
-The categorical distribution is parameterized by the log-probabilities
-of a set of classes.
-
-#### Examples
-
-Creates a 3-class distribution, with the 2nd class the most likely to be
-drawn.
-
-```python
-p = [0.1, 0.5, 0.4]
-dist = Categorical(probs=p)
-```
-
-Creates a 3-class distribution, with the 2nd class the most likely to be
-drawn, using logits.
-
-```python
-logits = [-50, 400, 40]
-dist = Categorical(logits=logits)
-```
-
-Creates a 3-class distribution, where the 3rd class is the most likely to be drawn.
-The distribution functions can be evaluated on counts.
-
-```python
-# counts is a scalar.
-p = [0.1, 0.4, 0.5]
-dist = Categorical(probs=p)
-dist.prob(0) # Shape []
-
-# p will be broadcast to [[0.1, 0.4, 0.5], [0.1, 0.4, 0.5]] to match counts.
-counts = [1, 0]
-dist.prob(counts) # Shape [2]
-
-# p will be broadcast to shape [5, 7, 3] to match counts.
-counts = [[...]] # Shape [5, 7, 3]
-dist.prob(counts) # Shape [5, 7, 3]
-```
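-
-Logits and probabilities describe the same distribution up to normalization;
-a sketch of that equivalence (reusing the `Categorical` alias above):
-
-```python
-import numpy as np
-
-p = [0.1, 0.5, 0.4]
-dist_p = Categorical(probs=p)
-dist_l = Categorical(logits=np.log(p))
-# Both assign identical probability mass, e.g.
-# dist_p.prob(1) and dist_l.prob(1) both evaluate to 0.5.
-```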
-- - -
-
-#### `tf.contrib.distributions.Categorical.__init__(logits=None, probs=None, dtype=tf.int32, validate_args=False, allow_nan_stats=True, name='Categorical')` {#Categorical.__init__}
-
-Initialize Categorical distributions using class log-probabilities.
-
-##### Args:
-
-
-* <b>`logits`</b>: An N-D `Tensor`, `N >= 1`, representing the log probabilities
- of a set of Categorical distributions. The first `N - 1` dimensions
- index into a batch of independent distributions and the last dimension
- represents a vector of logits for each class. Only one of `logits` or
- `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor`, `N >= 1`, representing the probabilities
- of a set of Categorical distributions. The first `N - 1` dimensions
- index into a batch of independent distributions and the last dimension
- represents a vector of probabilities for each class. Only one of
- `logits` or `probs` should be passed in.
-* <b>`dtype`</b>: The type of the event samples (default: int32).
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.allow_nan_stats` {#Categorical.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.batch_shape` {#Categorical.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.batch_shape_tensor(name='batch_shape_tensor')` {#Categorical.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.cdf(value, name='cdf')` {#Categorical.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.copy(**override_parameters_kwargs)` {#Categorical.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
-  of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.covariance(name='covariance')` {#Categorical.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.dtype` {#Categorical.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.entropy(name='entropy')` {#Categorical.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.event_shape` {#Categorical.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.event_shape_tensor(name='event_shape_tensor')` {#Categorical.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.event_size` {#Categorical.event_size}
-
-Scalar `int32` tensor: the number of classes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.is_continuous` {#Categorical.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.is_scalar_batch(name='is_scalar_batch')` {#Categorical.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.is_scalar_event(name='is_scalar_event')` {#Categorical.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.log_cdf(value, name='log_cdf')` {#Categorical.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.log_prob(value, name='log_prob')` {#Categorical.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.log_survival_function(value, name='log_survival_function')` {#Categorical.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.logits` {#Categorical.logits}
-
-Vector of coordinatewise logits.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.mean(name='mean')` {#Categorical.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.mode(name='mode')` {#Categorical.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.name` {#Categorical.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Categorical.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.param_static_shapes(cls, sample_shape)` {#Categorical.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.parameters` {#Categorical.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.prob(value, name='prob')` {#Categorical.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.probs` {#Categorical.probs}
-
-Vector of coordinatewise probabilities.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.reparameterization_type` {#Categorical.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.sample(sample_shape=(), seed=None, name='sample')` {#Categorical.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.stddev(name='stddev')` {#Categorical.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.survival_function(value, name='survival_function')` {#Categorical.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.validate_args` {#Categorical.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.variance(name='variance')` {#Categorical.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Chi2` {#Chi2}
-
-Chi2 distribution.
-
-The Chi2 distribution is defined over positive real numbers using a
-degrees-of-freedom ("df") parameter.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; df, x > 0) = x**(0.5 df - 1) exp(-0.5 x) / Z
-Z = 2**(0.5 df) Gamma(0.5 df)
-```
-
-where:
-
-* `df` denotes the degrees of freedom,
-* `Z` is the normalization constant, and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The Chi2 distribution is a special case of the Gamma distribution, i.e.,
-
-```python
-Chi2(df) = Gamma(concentration=0.5 * df, rate=0.5)
-```
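-
-As a sketch of this equivalence (assuming the alias
-`ds = tf.contrib.distributions`):
-
-```python
-ds = tf.contrib.distributions
-
-chi2 = ds.Chi2(df=4.)
-gamma = ds.Gamma(concentration=2., rate=0.5)
-x = tf.constant([1., 2., 3.])
-# The two log-densities agree up to floating-point error.
-delta = chi2.log_prob(x) - gamma.log_prob(x)
-```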
-- - -
-
-#### `tf.contrib.distributions.Chi2.__init__(df, validate_args=False, allow_nan_stats=True, name='Chi2')` {#Chi2.__init__}
-
-Construct Chi2 distributions with parameter `df`.
-
-##### Args:
-
-
-* <b>`df`</b>: Floating point tensor, the degrees of freedom of the
- distribution(s). `df` must contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.allow_nan_stats` {#Chi2.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.batch_shape` {#Chi2.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.batch_shape_tensor(name='batch_shape_tensor')` {#Chi2.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.cdf(value, name='cdf')` {#Chi2.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.concentration` {#Chi2.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.copy(**override_parameters_kwargs)` {#Chi2.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
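-A short sketch (this assumes `df` remains a valid constructor argument, as
-in `Chi2.__init__` above):
-
-```python
-ds = tf.contrib.distributions
-
-dist = ds.Chi2(df=3.)
-# Same constructor arguments as `dist`, except `df` is overridden.
-dist2 = dist.copy(df=5.)
-```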
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.covariance(name='covariance')` {#Chi2.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.df` {#Chi2.df}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.dtype` {#Chi2.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.entropy(name='entropy')` {#Chi2.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.event_shape` {#Chi2.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.event_shape_tensor(name='event_shape_tensor')` {#Chi2.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.is_continuous` {#Chi2.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.is_scalar_batch(name='is_scalar_batch')` {#Chi2.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.is_scalar_event(name='is_scalar_event')` {#Chi2.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.log_cdf(value, name='log_cdf')` {#Chi2.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.log_prob(value, name='log_prob')` {#Chi2.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.log_survival_function(value, name='log_survival_function')` {#Chi2.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
-                         = Log[ 1 - P[X <= x] ]
-                         = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
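-A hedged numerical sketch: deep in the right tail, `1 - cdf(x)` underflows
-to `0.` in float32, so the naive logarithm hits `-inf`; the dedicated
-method may stay finite, depending on the implementation:
-
-```python
-ds = tf.contrib.distributions
-
-dist = ds.Chi2(df=2.)
-x = tf.constant(200.)
-naive = tf.log(1. - dist.cdf(x))        # log(0.) = -inf after underflow
-stable = dist.log_survival_function(x)  # may use a tail-accurate formula
-```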
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.mean(name='mean')` {#Chi2.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.mode(name='mode')` {#Chi2.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.name` {#Chi2.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Chi2.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
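-A sketch of the calling convention (this assumes `Chi2` implements the
-underlying `_param_shapes`; if not, the call raises `NotImplementedError`):
-
-```python
-ds = tf.contrib.distributions
-
-# Parameter shapes needed so that `sample()` returns shape [5, 3].
-shapes = ds.Chi2.param_shapes(sample_shape=[5, 3])
-# Expected form for a scalar-parameter distribution: {'df': <shape [5, 3]>}.
-```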
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.param_static_shapes(cls, sample_shape)` {#Chi2.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.parameters` {#Chi2.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.prob(value, name='prob')` {#Chi2.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.rate` {#Chi2.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.reparameterization_type` {#Chi2.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.sample(sample_shape=(), seed=None, name='sample')` {#Chi2.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.stddev(name='stddev')` {#Chi2.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.survival_function(value, name='survival_function')` {#Chi2.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
-                     = 1 - P[X <= x]
-                     = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.validate_args` {#Chi2.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.variance(name='variance')` {#Chi2.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Chi2WithAbsDf` {#Chi2WithAbsDf}
-
-Chi2 with parameter transform `df = floor(abs(df))`.
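-
-A one-line sketch of the transform (alias `ds = tf.contrib.distributions`
-assumed):
-
-```python
-ds = tf.contrib.distributions
-
-# `df` is passed through floor(abs(df)), so -4.7 behaves as df = 4.
-dist = ds.Chi2WithAbsDf(df=-4.7)
-```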
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.__init__(df, validate_args=False, allow_nan_stats=True, name='Chi2WithAbsDf')` {#Chi2WithAbsDf.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.allow_nan_stats` {#Chi2WithAbsDf.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.batch_shape` {#Chi2WithAbsDf.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.batch_shape_tensor(name='batch_shape_tensor')` {#Chi2WithAbsDf.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.cdf(value, name='cdf')` {#Chi2WithAbsDf.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.concentration` {#Chi2WithAbsDf.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.copy(**override_parameters_kwargs)` {#Chi2WithAbsDf.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.covariance(name='covariance')` {#Chi2WithAbsDf.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.df` {#Chi2WithAbsDf.df}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.dtype` {#Chi2WithAbsDf.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.entropy(name='entropy')` {#Chi2WithAbsDf.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.event_shape` {#Chi2WithAbsDf.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.event_shape_tensor(name='event_shape_tensor')` {#Chi2WithAbsDf.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.is_continuous` {#Chi2WithAbsDf.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.is_scalar_batch(name='is_scalar_batch')` {#Chi2WithAbsDf.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.is_scalar_event(name='is_scalar_event')` {#Chi2WithAbsDf.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.log_cdf(value, name='log_cdf')` {#Chi2WithAbsDf.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.log_prob(value, name='log_prob')` {#Chi2WithAbsDf.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.log_survival_function(value, name='log_survival_function')` {#Chi2WithAbsDf.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
-                         = Log[ 1 - P[X <= x] ]
-                         = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.mean(name='mean')` {#Chi2WithAbsDf.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.mode(name='mode')` {#Chi2WithAbsDf.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.name` {#Chi2WithAbsDf.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Chi2WithAbsDf.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.param_static_shapes(cls, sample_shape)` {#Chi2WithAbsDf.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.parameters` {#Chi2WithAbsDf.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.prob(value, name='prob')` {#Chi2WithAbsDf.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.rate` {#Chi2WithAbsDf.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.reparameterization_type` {#Chi2WithAbsDf.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.sample(sample_shape=(), seed=None, name='sample')` {#Chi2WithAbsDf.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.stddev(name='stddev')` {#Chi2WithAbsDf.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.survival_function(value, name='survival_function')` {#Chi2WithAbsDf.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
-                     = 1 - P[X <= x]
-                     = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.validate_args` {#Chi2WithAbsDf.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.variance(name='variance')` {#Chi2WithAbsDf.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Exponential` {#Exponential}
-
-Exponential distribution.
-
-The Exponential distribution is parameterized by an event `rate` parameter.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; lambda, x > 0) = exp(-lambda x) / Z
-Z = 1 / lambda
-```
-
-where `rate = lambda` and `Z` is the normalizing constant.
-
-The Exponential distribution is a special case of the Gamma distribution,
-i.e.,
-
-```python
-Exponential(rate) = Gamma(concentration=1., rate)
-```
-
-The Exponential distribution uses a `rate` parameter, or "inverse scale",
-which can be intuited as,
-
-```none
-X ~ Exponential(rate=1)
-Y = X / rate    # Y ~ Exponential(rate)
-```
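-
-A minimal construction-and-sampling sketch (alias
-`ds = tf.contrib.distributions` assumed):
-
-```python
-ds = tf.contrib.distributions
-
-dist = ds.Exponential(rate=2.)              # mean = 1 / 2
-batch = ds.Exponential(rate=[1., 2., 4.])   # batch_shape [3]
-samples = batch.sample(10, seed=1)          # shape [10, 3]
-```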
-- - -
-
-#### `tf.contrib.distributions.Exponential.__init__(rate, validate_args=False, allow_nan_stats=True, name='Exponential')` {#Exponential.__init__}
-
-Construct Exponential distribution with parameter `rate`.
-
-##### Args:
-
-
-* <b>`rate`</b>: Floating point tensor, equivalent to `1 / mean`. Must contain only
- positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.allow_nan_stats` {#Exponential.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.batch_shape` {#Exponential.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.batch_shape_tensor(name='batch_shape_tensor')` {#Exponential.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.cdf(value, name='cdf')` {#Exponential.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.concentration` {#Exponential.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.copy(**override_parameters_kwargs)` {#Exponential.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.covariance(name='covariance')` {#Exponential.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.dtype` {#Exponential.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.entropy(name='entropy')` {#Exponential.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.event_shape` {#Exponential.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.event_shape_tensor(name='event_shape_tensor')` {#Exponential.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.is_continuous` {#Exponential.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.is_scalar_batch(name='is_scalar_batch')` {#Exponential.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.is_scalar_event(name='is_scalar_event')` {#Exponential.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.log_cdf(value, name='log_cdf')` {#Exponential.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.log_prob(value, name='log_prob')` {#Exponential.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.log_survival_function(value, name='log_survival_function')` {#Exponential.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
-                         = Log[ 1 - P[X <= x] ]
-                         = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.mean(name='mean')` {#Exponential.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.mode(name='mode')` {#Exponential.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.name` {#Exponential.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Exponential.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.param_static_shapes(cls, sample_shape)` {#Exponential.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.parameters` {#Exponential.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.prob(value, name='prob')` {#Exponential.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.rate` {#Exponential.rate}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.reparameterization_type` {#Exponential.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.sample(sample_shape=(), seed=None, name='sample')` {#Exponential.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.stddev(name='stddev')` {#Exponential.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.survival_function(value, name='survival_function')` {#Exponential.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
-                     = 1 - P[X <= x]
-                     = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.validate_args` {#Exponential.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.variance(name='variance')` {#Exponential.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.ExponentialWithSoftplusRate` {#ExponentialWithSoftplusRate}
-
-Exponential with softplus transform on `rate`.
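-
-A one-line sketch of the transform (alias `ds = tf.contrib.distributions`
-assumed):
-
-```python
-ds = tf.contrib.distributions
-
-# The raw `rate` may be any real value; the effective rate is
-# softplus(rate) = log(1 + exp(rate)), which is always positive.
-dist = ds.ExponentialWithSoftplusRate(rate=[-2., 0., 3.])
-```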
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.__init__(rate, validate_args=False, allow_nan_stats=True, name='ExponentialWithSoftplusRate')` {#ExponentialWithSoftplusRate.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.allow_nan_stats` {#ExponentialWithSoftplusRate.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.batch_shape` {#ExponentialWithSoftplusRate.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.batch_shape_tensor(name='batch_shape_tensor')` {#ExponentialWithSoftplusRate.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.cdf(value, name='cdf')` {#ExponentialWithSoftplusRate.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.concentration` {#ExponentialWithSoftplusRate.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.copy(**override_parameters_kwargs)` {#ExponentialWithSoftplusRate.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.covariance(name='covariance')` {#ExponentialWithSoftplusRate.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.dtype` {#ExponentialWithSoftplusRate.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.entropy(name='entropy')` {#ExponentialWithSoftplusRate.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.event_shape` {#ExponentialWithSoftplusRate.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.event_shape_tensor(name='event_shape_tensor')` {#ExponentialWithSoftplusRate.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.is_continuous` {#ExponentialWithSoftplusRate.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.is_scalar_batch(name='is_scalar_batch')` {#ExponentialWithSoftplusRate.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.is_scalar_event(name='is_scalar_event')` {#ExponentialWithSoftplusRate.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.log_cdf(value, name='log_cdf')` {#ExponentialWithSoftplusRate.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.log_prob(value, name='log_prob')` {#ExponentialWithSoftplusRate.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.log_survival_function(value, name='log_survival_function')` {#ExponentialWithSoftplusRate.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
-                         = Log[ 1 - P[X <= x] ]
-                         = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.mean(name='mean')` {#ExponentialWithSoftplusRate.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.mode(name='mode')` {#ExponentialWithSoftplusRate.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.name` {#ExponentialWithSoftplusRate.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#ExponentialWithSoftplusRate.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.param_static_shapes(cls, sample_shape)` {#ExponentialWithSoftplusRate.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.parameters` {#ExponentialWithSoftplusRate.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.prob(value, name='prob')` {#ExponentialWithSoftplusRate.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.rate` {#ExponentialWithSoftplusRate.rate}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.reparameterization_type` {#ExponentialWithSoftplusRate.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.sample(sample_shape=(), seed=None, name='sample')` {#ExponentialWithSoftplusRate.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.stddev(name='stddev')` {#ExponentialWithSoftplusRate.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.survival_function(value, name='survival_function')` {#ExponentialWithSoftplusRate.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
-                     = 1 - P[X <= x]
-                     = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.validate_args` {#ExponentialWithSoftplusRate.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.variance(name='variance')` {#ExponentialWithSoftplusRate.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Gamma` {#Gamma}
-
-Gamma distribution.
-
-The Gamma distribution is defined over positive real numbers using
-parameters `concentration` (aka "alpha") and `rate` (aka "beta").
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; alpha, beta, x > 0) = x**(alpha - 1) exp(-x beta) / Z
-Z = Gamma(alpha) beta**alpha
-```
-
-where:
-
-* `concentration = alpha`, `alpha > 0`,
-* `rate = beta`, `beta > 0`,
-* `Z` is the normalizing constant, and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The cumulative distribution function (cdf) is,
-
-```none
-cdf(x; alpha, beta, x > 0) = GammaInc(alpha, beta x) / Gamma(alpha)
-```
-
-where `GammaInc` is the [lower incomplete Gamma function](
-https://en.wikipedia.org/wiki/Incomplete_gamma_function).
-
-The parameters can be intuited via their relationship to mean and stddev,
-
-```none
-concentration = alpha = (mean / stddev)**2
-rate = beta = mean / stddev**2 = concentration / mean
-```
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-WARNING: This distribution may draw 0-valued samples for small `concentration`
-values. See note in `tf.random_gamma` docstring.
-
-#### Examples
-
-```python
-dist = Gamma(concentration=3.0, rate=2.0)
-dist2 = Gamma(concentration=[3.0, 4.0], rate=[2.0, 3.0])
-```
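-
-As a minimal sketch of the broadcasting noted above (the evaluation points
-are assumptions for illustration), a `[3, 1]` input broadcasts against the
-batch shape `[2]`, and the `concentration = (mean / stddev)**2` identity can
-be checked numerically:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.Gamma(concentration=[3.0, 4.0], rate=[2.0, 3.0])
-x = tf.constant([[0.5], [1.0], [2.0]])  # shape [3, 1]
-lp = dist.log_prob(x)                   # broadcasts to shape [3, 2]
-
-check = (dist.mean() / dist.stddev())**2  # recovers the concentrations
-
-with tf.Session() as sess:
-  print(sess.run(tf.shape(lp)))  # => [3 2]
-  print(sess.run(check))         # => concentrations [3., 4.]
-```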
-- - -
-
-#### `tf.contrib.distributions.Gamma.__init__(concentration, rate, validate_args=False, allow_nan_stats=True, name='Gamma')` {#Gamma.__init__}
-
-Construct Gamma with `concentration` and `rate` parameters.
-
-The parameters `concentration` and `rate` must be shaped in a way that
-supports broadcasting (e.g. `concentration + rate` is a valid operation).
-
-##### Args:
-
-
-* <b>`concentration`</b>: Floating point tensor, the concentration params of the
- distribution(s). Must contain only positive values.
-* <b>`rate`</b>: Floating point tensor, the inverse scale params of the
- distribution(s). Must contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `concentration` and `rate` are different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.allow_nan_stats` {#Gamma.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.batch_shape` {#Gamma.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.batch_shape_tensor(name='batch_shape_tensor')` {#Gamma.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.cdf(value, name='cdf')` {#Gamma.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.concentration` {#Gamma.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.copy(**override_parameters_kwargs)` {#Gamma.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
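-A minimal sketch of an override (the parameter values are assumptions for
-illustration): keyword arguments replace the matching entries of
-`self.parameters` and everything else carries over:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.Gamma(concentration=3.0, rate=2.0)
-dist2 = dist.copy(rate=5.0)  # same concentration, new rate
-```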
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.covariance(name='covariance')` {#Gamma.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.dtype` {#Gamma.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.entropy(name='entropy')` {#Gamma.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.event_shape` {#Gamma.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.event_shape_tensor(name='event_shape_tensor')` {#Gamma.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.is_continuous` {#Gamma.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.is_scalar_batch(name='is_scalar_batch')` {#Gamma.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.is_scalar_event(name='is_scalar_event')` {#Gamma.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.log_cdf(value, name='log_cdf')` {#Gamma.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x` is far in the lower tail of the distribution.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.log_prob(value, name='log_prob')` {#Gamma.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.log_survival_function(value, name='log_survival_function')` {#Gamma.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
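-A minimal sketch of why this entry point exists (the tail point is an
-assumption for illustration): far in the upper tail, `1 - cdf(x)` can
-underflow before the logarithm is taken, so `log_survival_function` is the
-preferable call whenever a better approximation is available:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.Gamma(concentration=3.0, rate=2.0)
-x = tf.constant(60.0)  # deep in the upper tail
-
-naive = tf.log(1. - dist.cdf(x))        # loses precision first
-stable = dist.log_survival_function(x)  # preferred formulation
-```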
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.mean(name='mean')` {#Gamma.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.mode(name='mode')` {#Gamma.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(concentration - 1) / rate` when
-`concentration > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.name` {#Gamma.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Gamma.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
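-A minimal sketch (assuming `Gamma` implements `_param_shapes` in the usual
-scalar-distribution way, and with an illustrative sample shape): a
-no-argument `sample()` draws one value per batch member, so each parameter
-shape equals the requested sample shape:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-shapes = ds.Gamma.param_shapes([100, 2])
-# shapes['concentration'] and shapes['rate'] are both [100, 2]-valued Tensors.
-```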
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.param_static_shapes(cls, sample_shape)` {#Gamma.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.parameters` {#Gamma.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.prob(value, name='prob')` {#Gamma.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.rate` {#Gamma.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.reparameterization_type` {#Gamma.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
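-A minimal sketch of the typical use (assuming the static instances are
-exported from the same `distributions` module, as described above): gate the
-gradient estimator on the returned instance:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.Gamma(concentration=3.0, rate=2.0)
-if dist.reparameterization_type == ds.FULLY_REPARAMETERIZED:
-  pass  # gradients may flow from sample() back to the parameters
-else:
-  pass  # fall back to a score-function (REINFORCE-style) estimator
-```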
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.sample(sample_shape=(), seed=None, name='sample')` {#Gamma.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.stddev(name='stddev')` {#Gamma.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.survival_function(value, name='survival_function')` {#Gamma.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.validate_args` {#Gamma.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.variance(name='variance')` {#Gamma.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.GammaWithSoftplusConcentrationRate` {#GammaWithSoftplusConcentrationRate}
-
-`Gamma` with softplus of `concentration` and `rate`.
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.__init__(concentration, rate, validate_args=False, allow_nan_stats=True, name='GammaWithSoftplusConcentrationRate')` {#GammaWithSoftplusConcentrationRate.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.allow_nan_stats` {#GammaWithSoftplusConcentrationRate.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.batch_shape` {#GammaWithSoftplusConcentrationRate.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.batch_shape_tensor(name='batch_shape_tensor')` {#GammaWithSoftplusConcentrationRate.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.cdf(value, name='cdf')` {#GammaWithSoftplusConcentrationRate.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.concentration` {#GammaWithSoftplusConcentrationRate.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.copy(**override_parameters_kwargs)` {#GammaWithSoftplusConcentrationRate.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.covariance(name='covariance')` {#GammaWithSoftplusConcentrationRate.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.dtype` {#GammaWithSoftplusConcentrationRate.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.entropy(name='entropy')` {#GammaWithSoftplusConcentrationRate.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.event_shape` {#GammaWithSoftplusConcentrationRate.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.event_shape_tensor(name='event_shape_tensor')` {#GammaWithSoftplusConcentrationRate.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.is_continuous` {#GammaWithSoftplusConcentrationRate.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.is_scalar_batch(name='is_scalar_batch')` {#GammaWithSoftplusConcentrationRate.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.is_scalar_event(name='is_scalar_event')` {#GammaWithSoftplusConcentrationRate.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.log_cdf(value, name='log_cdf')` {#GammaWithSoftplusConcentrationRate.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x` is far in the lower tail of the distribution.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.log_prob(value, name='log_prob')` {#GammaWithSoftplusConcentrationRate.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.log_survival_function(value, name='log_survival_function')` {#GammaWithSoftplusConcentrationRate.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.mean(name='mean')` {#GammaWithSoftplusConcentrationRate.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.mode(name='mode')` {#GammaWithSoftplusConcentrationRate.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(concentration - 1) / rate` when
-`concentration > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.name` {#GammaWithSoftplusConcentrationRate.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#GammaWithSoftplusConcentrationRate.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.param_static_shapes(cls, sample_shape)` {#GammaWithSoftplusConcentrationRate.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.parameters` {#GammaWithSoftplusConcentrationRate.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.prob(value, name='prob')` {#GammaWithSoftplusConcentrationRate.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.rate` {#GammaWithSoftplusConcentrationRate.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.reparameterization_type` {#GammaWithSoftplusConcentrationRate.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.sample(sample_shape=(), seed=None, name='sample')` {#GammaWithSoftplusConcentrationRate.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.stddev(name='stddev')` {#GammaWithSoftplusConcentrationRate.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.survival_function(value, name='survival_function')` {#GammaWithSoftplusConcentrationRate.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.validate_args` {#GammaWithSoftplusConcentrationRate.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.variance(name='variance')` {#GammaWithSoftplusConcentrationRate.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.InverseGamma` {#InverseGamma}
-
-InverseGamma distribution.
-
-The `InverseGamma` distribution is defined over positive real numbers using
-parameters `concentration` (aka "alpha") and `rate` (aka "beta").
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; alpha, beta, x > 0) = x**(-alpha - 1) exp(-beta / x) / Z
-Z = Gamma(alpha) beta**-alpha
-```
-
-where:
-
-* `concentration = alpha`, `alpha > 0`,
-* `rate = beta`, `beta > 0`,
-* `Z` is the normalizing constant, and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The cumulative distribution function (cdf) is,
-
-```none
-cdf(x; alpha, beta, x > 0) = GammaInc(alpha, beta / x) / Gamma(alpha)
-```
-
-where `GammaInc` is the [upper incomplete Gamma function](
-https://en.wikipedia.org/wiki/Incomplete_gamma_function).
-
-The parameters can be intuited via their relationship to mean and stddev,
-
-```none
-concentration = alpha = (mean / stddev)**2 + 2
-rate = beta = mean * (concentration - 1)
-```
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-WARNING: This distribution may draw 0-valued samples for small `concentration`
-values. See note in `tf.random_gamma` docstring.
-
-#### Examples
-
-```python
-dist = InverseGamma(concentration=3.0, rate=2.0)
-dist2 = InverseGamma(concentration=[3.0, 4.0], rate=[2.0, 3.0])
-```
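-
-As a minimal sketch tying these parameters to the stats documented below
-(values assumed for illustration): with `concentration=3.0` and `rate=2.0`,
-the mean is `rate / (concentration - 1) = 1.0` and the mode is
-`rate / (concentration + 1) = 0.5`:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.InverseGamma(concentration=3.0, rate=2.0)
-with tf.Session() as sess:
-  print(sess.run([dist.mean(), dist.mode()]))  # => [1.0, 0.5]
-```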
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.__init__(concentration, rate, validate_args=False, allow_nan_stats=True, name='InverseGamma')` {#InverseGamma.__init__}
-
-Construct InverseGamma with `concentration` and `rate` parameters.
-
-The parameters `concentration` and `rate` must be shaped in a way that
-supports broadcasting (e.g. `concentration + rate` is a valid operation).
-
-##### Args:
-
-
-* <b>`concentration`</b>: Floating point tensor, the concentration params of the
- distribution(s). Must contain only positive values.
-* <b>`rate`</b>: Floating point tensor, the inverse scale params of the
- distribution(s). Must contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `concentration` and `rate` are different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.allow_nan_stats` {#InverseGamma.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.batch_shape` {#InverseGamma.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.batch_shape_tensor(name='batch_shape_tensor')` {#InverseGamma.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.cdf(value, name='cdf')` {#InverseGamma.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.concentration` {#InverseGamma.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.copy(**override_parameters_kwargs)` {#InverseGamma.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.covariance(name='covariance')` {#InverseGamma.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.dtype` {#InverseGamma.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.entropy(name='entropy')` {#InverseGamma.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.event_shape` {#InverseGamma.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.event_shape_tensor(name='event_shape_tensor')` {#InverseGamma.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.is_continuous` {#InverseGamma.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.is_scalar_batch(name='is_scalar_batch')` {#InverseGamma.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.is_scalar_event(name='is_scalar_event')` {#InverseGamma.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.log_cdf(value, name='log_cdf')` {#InverseGamma.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x` is far in the lower tail of the distribution.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.log_prob(value, name='log_prob')` {#InverseGamma.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.log_survival_function(value, name='log_survival_function')` {#InverseGamma.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.mean(name='mean')` {#InverseGamma.mean}
-
-Mean.
-
-Additional documentation from `InverseGamma`:
-
-The mean of an inverse gamma distribution is
-`rate / (concentration - 1)`, when `concentration > 1`, and `NaN`
-otherwise. If `self.allow_nan_stats` is `False`, an exception will be
-raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.mode(name='mode')` {#InverseGamma.mode}
-
-Mode.
-
-Additional documentation from `InverseGamma`:
-
-The mode of an inverse gamma distribution is `rate / (concentration + 1)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.name` {#InverseGamma.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#InverseGamma.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.param_static_shapes(cls, sample_shape)` {#InverseGamma.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.parameters` {#InverseGamma.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.prob(value, name='prob')` {#InverseGamma.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.rate` {#InverseGamma.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.reparameterization_type` {#InverseGamma.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.sample(sample_shape=(), seed=None, name='sample')` {#InverseGamma.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.stddev(name='stddev')` {#InverseGamma.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.survival_function(value, name='survival_function')` {#InverseGamma.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.validate_args` {#InverseGamma.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.variance(name='variance')` {#InverseGamma.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-
-Additional documentation from `InverseGamma`:
-
-Variance for inverse gamma is defined only for `concentration > 2`. If
-`self.allow_nan_stats` is `False`, an exception will be raised rather
-than returning `NaN`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
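-A minimal sketch of the `allow_nan_stats` behavior just described (parameter
-values assumed for illustration): below `concentration = 2` the variance is
-undefined, so the default configuration returns `NaN`:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.InverseGamma(concentration=1.5, rate=2.0)  # concentration <= 2
-with tf.Session() as sess:
-  print(sess.run(dist.variance()))  # => nan (allow_nan_stats defaults True)
-# With allow_nan_stats=False the same call raises instead of returning NaN.
-```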
-
-- - -
-
-### `class tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate` {#InverseGammaWithSoftplusConcentrationRate}
-
-`InverseGamma` with softplus of `concentration` and `rate`.
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.__init__(concentration, rate, validate_args=False, allow_nan_stats=True, name='InverseGammaWithSoftplusConcentrationRate')` {#InverseGammaWithSoftplusConcentrationRate.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.allow_nan_stats` {#InverseGammaWithSoftplusConcentrationRate.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.batch_shape` {#InverseGammaWithSoftplusConcentrationRate.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.batch_shape_tensor(name='batch_shape_tensor')` {#InverseGammaWithSoftplusConcentrationRate.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.cdf(value, name='cdf')` {#InverseGammaWithSoftplusConcentrationRate.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.concentration` {#InverseGammaWithSoftplusConcentrationRate.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.copy(**override_parameters_kwargs)` {#InverseGammaWithSoftplusConcentrationRate.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.covariance(name='covariance')` {#InverseGammaWithSoftplusConcentrationRate.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.dtype` {#InverseGammaWithSoftplusConcentrationRate.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.entropy(name='entropy')` {#InverseGammaWithSoftplusConcentrationRate.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.event_shape` {#InverseGammaWithSoftplusConcentrationRate.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.event_shape_tensor(name='event_shape_tensor')` {#InverseGammaWithSoftplusConcentrationRate.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.is_continuous` {#InverseGammaWithSoftplusConcentrationRate.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.is_scalar_batch(name='is_scalar_batch')` {#InverseGammaWithSoftplusConcentrationRate.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.is_scalar_event(name='is_scalar_event')` {#InverseGammaWithSoftplusConcentrationRate.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.log_cdf(value, name='log_cdf')` {#InverseGammaWithSoftplusConcentrationRate.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x` is far in the lower tail of the distribution.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.log_prob(value, name='log_prob')` {#InverseGammaWithSoftplusConcentrationRate.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.log_survival_function(value, name='log_survival_function')` {#InverseGammaWithSoftplusConcentrationRate.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.mean(name='mean')` {#InverseGammaWithSoftplusConcentrationRate.mean}
-
-Mean.
-
-Additional documentation from `InverseGamma`:
-
-The mean of an inverse gamma distribution is
-`rate / (concentration - 1)`, when `concentration > 1`, and `NaN`
-otherwise. If `self.allow_nan_stats` is `False`, an exception will be
-raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.mode(name='mode')` {#InverseGammaWithSoftplusConcentrationRate.mode}
-
-Mode.
-
-Additional documentation from `InverseGamma`:
-
-The mode of an inverse gamma distribution is `rate / (concentration + 1)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.name` {#InverseGammaWithSoftplusConcentrationRate.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#InverseGammaWithSoftplusConcentrationRate.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.param_static_shapes(cls, sample_shape)` {#InverseGammaWithSoftplusConcentrationRate.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.parameters` {#InverseGammaWithSoftplusConcentrationRate.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.prob(value, name='prob')` {#InverseGammaWithSoftplusConcentrationRate.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.rate` {#InverseGammaWithSoftplusConcentrationRate.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.reparameterization_type` {#InverseGammaWithSoftplusConcentrationRate.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.sample(sample_shape=(), seed=None, name='sample')` {#InverseGammaWithSoftplusConcentrationRate.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.stddev(name='stddev')` {#InverseGammaWithSoftplusConcentrationRate.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.survival_function(value, name='survival_function')` {#InverseGammaWithSoftplusConcentrationRate.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.validate_args` {#InverseGammaWithSoftplusConcentrationRate.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.variance(name='variance')` {#InverseGammaWithSoftplusConcentrationRate.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-
-Additional documentation from `InverseGamma`:
-
-Variance for inverse gamma is defined only for `concentration > 2`. If
-`self.allow_nan_stats` is `False`, an exception will be raised rather
-than returning `NaN`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Laplace` {#Laplace}
-
-The Laplace distribution with location `loc` and `scale` parameters.
-
-#### Mathematical details
-
-The probability density function (pdf) of this distribution is,
-
-```none
-pdf(x; mu, sigma) = exp(-|x - mu| / sigma) / Z
-Z = 2 sigma
-```
-
-where `loc = mu`, `scale = sigma`, and `Z` is the normalization constant.
-
-Note that the Laplace distribution can be thought of as two exponential
-distributions spliced together "back-to-back."
-
-The Laplace distribution is a member of the [location-scale family](
-https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ Laplace(loc=0, scale=1)
-Y = loc + scale * X
-```
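-
-#### Examples
-
-A short usage sketch in the style of the `Normal` and `Logistic` examples
-below (illustrative only):
-
-```python
-# Define a single scalar Laplace distribution.
-dist = tf.contrib.distributions.Laplace(loc=0., scale=3.)
-
-# Evaluate the cdf at 1, returning a scalar.
-dist.cdf(1.)
-
-# Define a batch of two scalar valued Laplaces with a broadcast loc.
-dist = tf.contrib.distributions.Laplace(loc=1., scale=[11., 22.])
-
-# Evaluate the pdf of both distributions on the same point, 3.0,
-# returning a length 2 tensor.
-dist.prob(3.0)
-```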
-- - -
-
-#### `tf.contrib.distributions.Laplace.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='Laplace')` {#Laplace.__init__}
-
-Construct Laplace distribution with parameters `loc` and `scale`.
-
-The parameters `loc` and `scale` must be shaped in a way that supports
-broadcasting (e.g., `loc / scale` is a valid operation).
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating point tensor which characterizes the location (center)
- of the distribution.
-* <b>`scale`</b>: Positive floating point tensor which characterizes the spread of
- the distribution.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `loc` and `scale` are of different dtype.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.allow_nan_stats` {#Laplace.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.batch_shape` {#Laplace.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.batch_shape_tensor(name='batch_shape_tensor')` {#Laplace.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.cdf(value, name='cdf')` {#Laplace.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.copy(**override_parameters_kwargs)` {#Laplace.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.covariance(name='covariance')` {#Laplace.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.dtype` {#Laplace.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.entropy(name='entropy')` {#Laplace.entropy}
-
-Shannon entropy in nats.
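-
-For the Laplace distribution the entropy has the closed form
-`1 + log(2 * scale)` nats; a minimal sketch:
-
-```python
-dist = tf.contrib.distributions.Laplace(loc=0., scale=2.)
-h = dist.entropy()  # elementwise 1 + log(2 * 2) = 1 + log(4) ~= 2.386 nats
-```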
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.event_shape` {#Laplace.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.event_shape_tensor(name='event_shape_tensor')` {#Laplace.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.is_continuous` {#Laplace.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.is_scalar_batch(name='is_scalar_batch')` {#Laplace.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.is_scalar_event(name='is_scalar_event')` {#Laplace.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.loc` {#Laplace.loc}
-
-Distribution parameter for the location.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.log_cdf(value, name='log_cdf')` {#Laplace.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.log_prob(value, name='log_prob')` {#Laplace.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.log_survival_function(value, name='log_survival_function')` {#Laplace.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.mean(name='mean')` {#Laplace.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.mode(name='mode')` {#Laplace.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.name` {#Laplace.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Laplace.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
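-
-For example (a sketch; the `loc`/`scale` keys and the printed form are
-assumed for illustration):
-
-```python
-shapes = tf.contrib.distributions.Laplace.param_shapes([100])
-# shapes maps parameter names to shape Tensors, e.g.
-# {'loc': <Tensor: [100]>, 'scale': <Tensor: [100]>}
-dist = tf.contrib.distributions.Laplace(
-    loc=tf.zeros(shapes['loc']), scale=tf.ones(shapes['scale']))
-samples = dist.sample()  # shape [100]: one draw per batch member
-```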
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.param_static_shapes(cls, sample_shape)` {#Laplace.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.parameters` {#Laplace.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.prob(value, name='prob')` {#Laplace.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.reparameterization_type` {#Laplace.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
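-
-A typical use is gating pathwise-gradient code on this property; a minimal
-sketch (assuming the module-level `FULLY_REPARAMETERIZED` constant is
-exported alongside the distributions):
-
-```python
-ds = tf.contrib.distributions
-dist = ds.Laplace(loc=0., scale=1.)
-if dist.reparameterization_type == ds.FULLY_REPARAMETERIZED:
-  # Gradients may flow through dist.sample() to loc and scale.
-  pass
-```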
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.sample(sample_shape=(), seed=None, name='sample')` {#Laplace.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
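-
-A shape sketch: with a batch of two distributions, `sample_shape` is
-prepended to the batch shape,
-
-```python
-dist = tf.contrib.distributions.Laplace(loc=[1., 2.], scale=[1., 1.])
-dist.sample([3])  # shape [3, 2]: 3 draws from each of the 2 batch members
-dist.sample()     # shape [2]: a single draw per batch member
-```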
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.scale` {#Laplace.scale}
-
-Distribution parameter for scale.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.stddev(name='stddev')` {#Laplace.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.survival_function(value, name='survival_function')` {#Laplace.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.validate_args` {#Laplace.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.variance(name='variance')` {#Laplace.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.LaplaceWithSoftplusScale` {#LaplaceWithSoftplusScale}
-
-Laplace with softplus applied to `scale`.
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='LaplaceWithSoftplusScale')` {#LaplaceWithSoftplusScale.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.allow_nan_stats` {#LaplaceWithSoftplusScale.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.batch_shape` {#LaplaceWithSoftplusScale.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.batch_shape_tensor(name='batch_shape_tensor')` {#LaplaceWithSoftplusScale.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.cdf(value, name='cdf')` {#LaplaceWithSoftplusScale.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.copy(**override_parameters_kwargs)` {#LaplaceWithSoftplusScale.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.covariance(name='covariance')` {#LaplaceWithSoftplusScale.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.dtype` {#LaplaceWithSoftplusScale.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.entropy(name='entropy')` {#LaplaceWithSoftplusScale.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.event_shape` {#LaplaceWithSoftplusScale.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.event_shape_tensor(name='event_shape_tensor')` {#LaplaceWithSoftplusScale.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.is_continuous` {#LaplaceWithSoftplusScale.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.is_scalar_batch(name='is_scalar_batch')` {#LaplaceWithSoftplusScale.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.is_scalar_event(name='is_scalar_event')` {#LaplaceWithSoftplusScale.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.loc` {#LaplaceWithSoftplusScale.loc}
-
-Distribution parameter for the location.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.log_cdf(value, name='log_cdf')` {#LaplaceWithSoftplusScale.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.log_prob(value, name='log_prob')` {#LaplaceWithSoftplusScale.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.log_survival_function(value, name='log_survival_function')` {#LaplaceWithSoftplusScale.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.mean(name='mean')` {#LaplaceWithSoftplusScale.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.mode(name='mode')` {#LaplaceWithSoftplusScale.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.name` {#LaplaceWithSoftplusScale.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#LaplaceWithSoftplusScale.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.param_static_shapes(cls, sample_shape)` {#LaplaceWithSoftplusScale.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.parameters` {#LaplaceWithSoftplusScale.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.prob(value, name='prob')` {#LaplaceWithSoftplusScale.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.reparameterization_type` {#LaplaceWithSoftplusScale.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.sample(sample_shape=(), seed=None, name='sample')` {#LaplaceWithSoftplusScale.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.scale` {#LaplaceWithSoftplusScale.scale}
-
-Distribution parameter for scale.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.stddev(name='stddev')` {#LaplaceWithSoftplusScale.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.survival_function(value, name='survival_function')` {#LaplaceWithSoftplusScale.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.validate_args` {#LaplaceWithSoftplusScale.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.variance(name='variance')` {#LaplaceWithSoftplusScale.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Logistic` {#Logistic}
-
-The Logistic distribution with location `loc` and `scale` parameters.
-
-#### Mathematical details
-
-The cumulative distribution function (cdf) of this distribution is:
-
-```none
-cdf(x; mu, sigma) = 1 / (1 + exp(-(x - mu) / sigma))
-```
-
-where `loc = mu` and `scale = sigma`.
-
-The Logistic distribution is a member of the [location-scale family](
-https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ Logistic(loc=0, scale=1)
-Y = loc + scale * X
-```
-
-#### Examples
-
-Examples of initialization of one or a batch of distributions.
-
-```python
-# Define a single scalar Logistic distribution.
-dist = tf.contrib.distributions.Logistic(loc=0., scale=3.)
-
-# Evaluate the cdf at 1, returning a scalar.
-dist.cdf(1.)
-
-# Define a batch of two scalar valued Logistics.
-# The first has mean 1 and scale 11, the second 2 and 22.
-dist = tf.contrib.distributions.Logistic(loc=[1, 2.], scale=[11, 22.])
-
-# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
-# returning a length two tensor.
-dist.prob([0, 1.5])
-
-# Get 3 samples, returning a 3 x 2 tensor.
-dist.sample([3])
-```
-
-Arguments are broadcast when possible.
-
-```python
-# Define a batch of two scalar valued Logistics.
-# Both have mean 1, but different scales.
-dist = tf.contrib.distributions.Logistic(loc=1., scale=[11, 22.])
-
-# Evaluate the pdf of both distributions on the same point, 3.0,
-# returning a length 2 tensor.
-dist.prob(3.0)
-```
-- - -
-
-#### `tf.contrib.distributions.Logistic.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='Logistic')` {#Logistic.__init__}
-
-Construct Logistic distributions with mean `loc` and scale `scale`.
-
-The parameters `loc` and `scale` must be shaped in a way that supports
-broadcasting (e.g. `loc + scale` is a valid operation).
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating point tensor, the means of the distribution(s).
-* <b>`scale`</b>: Floating point tensor, the scales of the distribution(s). Must
- contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: The name to give Ops created by the initializer.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `loc` and `scale` have different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.allow_nan_stats` {#Logistic.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.batch_shape` {#Logistic.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.batch_shape_tensor(name='batch_shape_tensor')` {#Logistic.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.cdf(value, name='cdf')` {#Logistic.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.copy(**override_parameters_kwargs)` {#Logistic.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.covariance(name='covariance')` {#Logistic.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.dtype` {#Logistic.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.entropy(name='entropy')` {#Logistic.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.event_shape` {#Logistic.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.event_shape_tensor(name='event_shape_tensor')` {#Logistic.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.is_continuous` {#Logistic.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.is_scalar_batch(name='is_scalar_batch')` {#Logistic.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.is_scalar_event(name='is_scalar_event')` {#Logistic.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.loc` {#Logistic.loc}
-
-Distribution parameter for the location.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.log_cdf(value, name='log_cdf')` {#Logistic.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.log_prob(value, name='log_prob')` {#Logistic.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.log_survival_function(value, name='log_survival_function')` {#Logistic.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.mean(name='mean')` {#Logistic.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.mode(name='mode')` {#Logistic.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.name` {#Logistic.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Logistic.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.param_static_shapes(cls, sample_shape)` {#Logistic.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.parameters` {#Logistic.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.prob(value, name='prob')` {#Logistic.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.reparameterization_type` {#Logistic.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.sample(sample_shape=(), seed=None, name='sample')` {#Logistic.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.scale` {#Logistic.scale}
-
-Distribution parameter for scale.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.stddev(name='stddev')` {#Logistic.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.survival_function(value, name='survival_function')` {#Logistic.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.validate_args` {#Logistic.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.variance(name='variance')` {#Logistic.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
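-
-For the Logistic distribution the variance has the closed form
-`scale**2 * pi**2 / 3`; a minimal sketch:
-
-```python
-dist = tf.contrib.distributions.Logistic(loc=0., scale=3.)
-v = dist.variance()  # elementwise 9 * pi**2 / 3 ~= 29.61
-```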
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Normal` {#Normal}
-
-The Normal distribution with location `loc` and `scale` parameters.
-
-#### Mathematical details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; mu, sigma) = exp(-0.5 (x - mu)**2 / sigma**2) / Z
-Z = (2 pi sigma**2)**0.5
-```
-
-where `loc = mu` is the mean, `scale = sigma` is the std. deviation, and, `Z`
-is the normalization constant.
-
-The Normal distribution is a member of the [location-scale family](
-https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ Normal(loc=0, scale=1)
-Y = loc + scale * X
-```
-
-#### Examples
-
-Examples of initialization of one or a batch of distributions.
-
-```python
-# Define a single scalar Normal distribution.
-dist = tf.contrib.distributions.Normal(loc=0., scale=3.)
-
-# Evaluate the cdf at 1, returning a scalar.
-dist.cdf(1.)
-
-# Define a batch of two scalar valued Normals.
-# The first has mean 1 and standard deviation 11, the second 2 and 22.
-dist = tf.contrib.distributions.Normal(loc=[1, 2.], scale=[11, 22.])
-
-# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
-# returning a length two tensor.
-dist.prob([0, 1.5])
-
-# Get 3 samples, returning a 3 x 2 tensor.
-dist.sample([3])
-```
-
-Arguments are broadcast when possible.
-
-```python
-# Define a batch of two scalar valued Normals.
-# Both have mean 1, but different standard deviations.
-dist = tf.contrib.distributions.Normal(loc=1., scale=[11, 22.])
-
-# Evaluate the pdf of both distributions on the same point, 3.0,
-# returning a length 2 tensor.
-dist.prob(3.0)
-```
-- - -
-
-#### `tf.contrib.distributions.Normal.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='Normal')` {#Normal.__init__}
-
-Construct Normal distributions with mean `loc` and stddev `scale`.
-
-The parameters `loc` and `scale` must be shaped in a way that supports
-broadcasting (e.g. `loc + scale` is a valid operation).
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating point tensor; the means of the distribution(s).
-* <b>`scale`</b>: Floating point tensor; the stddevs of the distribution(s).
- Must contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `loc` and `scale` have different `dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.allow_nan_stats` {#Normal.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.batch_shape` {#Normal.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.batch_shape_tensor(name='batch_shape_tensor')` {#Normal.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.cdf(value, name='cdf')` {#Normal.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
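-
-For instance (a sketch): the cdf of a Normal evaluated at its mean is 0.5
-for any `scale`,
-
-```python
-dist = tf.contrib.distributions.Normal(loc=1., scale=2.)
-dist.cdf(1.)  # 0.5
-```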
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.copy(**override_parameters_kwargs)` {#Normal.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.covariance(name='covariance')` {#Normal.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.dtype` {#Normal.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.entropy(name='entropy')` {#Normal.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.event_shape` {#Normal.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.event_shape_tensor(name='event_shape_tensor')` {#Normal.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.is_continuous` {#Normal.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.is_scalar_batch(name='is_scalar_batch')` {#Normal.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.is_scalar_event(name='is_scalar_event')` {#Normal.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.loc` {#Normal.loc}
-
-Distribution parameter for the mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.log_cdf(value, name='log_cdf')` {#Normal.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
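-To see why this matters, compare the two in the far left tail, where the
-`cdf` can underflow in float32 (an illustrative sketch; exact behavior
-depends on the implementation):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Normal(loc=0., scale=1.)
-x = tf.constant(-20.)
-
-with tf.Session() as sess:
-  # cdf(-20) ~= 2.8e-89 can underflow float32 to 0, making log(cdf) -inf ...
-  print(sess.run(tf.log(dist.cdf(x))))
-  # ... while log_cdf can evaluate the logarithm directly and stay finite.
-  print(sess.run(dist.log_cdf(x)))  # ~= -203.9
-```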
-
-- - -
-
-#### `tf.contrib.distributions.Normal.log_prob(value, name='log_prob')` {#Normal.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.log_survival_function(value, name='log_survival_function')` {#Normal.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.mean(name='mean')` {#Normal.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.mode(name='mode')` {#Normal.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.name` {#Normal.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Normal.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
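-For a scalar-event `Normal`, asking for samples of shape `[100]` implies
-batch-shaped parameters; a sketch of the class-method call:
-
-```python
-import tensorflow as tf
-
-shapes = tf.contrib.distributions.Normal.param_shapes([100])
-# One would expect `shapes` to map 'loc' and 'scale' each to a shape
-# Tensor equal to [100], since sample() returns batch_shape + event_shape.
-```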
-
-- - -
-
-#### `tf.contrib.distributions.Normal.param_static_shapes(cls, sample_shape)` {#Normal.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.parameters` {#Normal.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.prob(value, name='prob')` {#Normal.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.reparameterization_type` {#Normal.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
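-Since `Normal` samples can be written as `loc + scale * eps` with
-`eps ~ Normal(0, 1)`, gradients can flow through `sample()`; one would
-therefore expect (a sketch, not a guaranteed invariant):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Normal(loc=0., scale=1.)
-assert (dist.reparameterization_type ==
-        tf.contrib.distributions.FULLY_REPARAMETERIZED)
-```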
-
-- - -
-
-#### `tf.contrib.distributions.Normal.sample(sample_shape=(), seed=None, name='sample')` {#Normal.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
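-For instance, with a batch of two Normals, drawing a `[3, 5]` block of
-samples prepends those dimensions (a sketch; parameter values hypothetical):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Normal(loc=[0., 10.], scale=[1., 1.])
-
-s = dist.sample([3, 5], seed=42)
-# sample_shape [3, 5] + batch_shape [2] + event_shape [] => shape [3, 5, 2].
-print(s.get_shape())  # (3, 5, 2)
-```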
-
-- - -
-
-#### `tf.contrib.distributions.Normal.scale` {#Normal.scale}
-
-Distribution parameter for standard deviation.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.stddev(name='stddev')` {#Normal.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.survival_function(value, name='survival_function')` {#Normal.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.validate_args` {#Normal.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.variance(name='variance')` {#Normal.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.NormalWithSoftplusScale` {#NormalWithSoftplusScale}
-
-Normal with softplus applied to `scale`.
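-
-Conceptually, this wrapper lets `scale` be parameterized by an unconstrained
-value; a sketch of the intended equivalence (not the class's literal
-implementation):
-
-```python
-import tensorflow as tf
-
-raw_scale = tf.constant(-1.0)  # unconstrained; softplus maps it into (0, inf)
-dist = tf.contrib.distributions.NormalWithSoftplusScale(loc=0., scale=raw_scale)
-# behaves like:
-equiv = tf.contrib.distributions.Normal(loc=0., scale=tf.nn.softplus(raw_scale))
-```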
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='NormalWithSoftplusScale')` {#NormalWithSoftplusScale.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.allow_nan_stats` {#NormalWithSoftplusScale.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.batch_shape` {#NormalWithSoftplusScale.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.batch_shape_tensor(name='batch_shape_tensor')` {#NormalWithSoftplusScale.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.cdf(value, name='cdf')` {#NormalWithSoftplusScale.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.copy(**override_parameters_kwargs)` {#NormalWithSoftplusScale.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
-    of `self.parameters` and `override_parameters_kwargs`, i.e.,
-    `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.covariance(name='covariance')` {#NormalWithSoftplusScale.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.dtype` {#NormalWithSoftplusScale.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.entropy(name='entropy')` {#NormalWithSoftplusScale.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.event_shape` {#NormalWithSoftplusScale.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.event_shape_tensor(name='event_shape_tensor')` {#NormalWithSoftplusScale.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.is_continuous` {#NormalWithSoftplusScale.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.is_scalar_batch(name='is_scalar_batch')` {#NormalWithSoftplusScale.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.is_scalar_event(name='is_scalar_event')` {#NormalWithSoftplusScale.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.loc` {#NormalWithSoftplusScale.loc}
-
-Distribution parameter for the mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.log_cdf(value, name='log_cdf')` {#NormalWithSoftplusScale.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.log_prob(value, name='log_prob')` {#NormalWithSoftplusScale.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.log_survival_function(value, name='log_survival_function')` {#NormalWithSoftplusScale.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.mean(name='mean')` {#NormalWithSoftplusScale.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.mode(name='mode')` {#NormalWithSoftplusScale.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.name` {#NormalWithSoftplusScale.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#NormalWithSoftplusScale.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.param_static_shapes(cls, sample_shape)` {#NormalWithSoftplusScale.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.parameters` {#NormalWithSoftplusScale.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.prob(value, name='prob')` {#NormalWithSoftplusScale.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.reparameterization_type` {#NormalWithSoftplusScale.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.sample(sample_shape=(), seed=None, name='sample')` {#NormalWithSoftplusScale.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.scale` {#NormalWithSoftplusScale.scale}
-
-Distribution parameter for standard deviation.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.stddev(name='stddev')` {#NormalWithSoftplusScale.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.survival_function(value, name='survival_function')` {#NormalWithSoftplusScale.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.validate_args` {#NormalWithSoftplusScale.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.variance(name='variance')` {#NormalWithSoftplusScale.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Poisson` {#Poisson}
-
-Poisson distribution.
-
-The Poisson distribution is parameterized by an event `rate` parameter.
-
-#### Mathematical Details
-
-The probability mass function (pmf) is,
-
-```none
-pmf(k; lambda, k >= 0) = (lambda^k / k!) / Z
-Z = exp(lambda).
-```
-
-where `rate = lambda` and `Z` is the normalizing constant.
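-
-A short usage sketch (rates and evaluation points are hypothetical):
-
-```python
-import tensorflow as tf
-
-# A batch of two Poisson distributions with different rates.
-dist = tf.contrib.distributions.Poisson(rate=[1., 5.])
-
-# pmf evaluated elementwise: P[X = 2 | rate=1] and P[X = 4 | rate=5].
-probs = dist.prob([2., 4.])
-
-with tf.Session() as sess:
-  print(sess.run(probs))  # approx [0.184, 0.175]
-```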
-- - -
-
-#### `tf.contrib.distributions.Poisson.__init__(rate, validate_args=False, allow_nan_stats=True, name='Poisson')` {#Poisson.__init__}
-
-Initialize a batch of Poisson distributions.
-
-##### Args:
-
-
-* <b>`rate`</b>: Floating point tensor, the rate parameter of the
- distribution(s). `rate` must be positive.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.allow_nan_stats` {#Poisson.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.batch_shape` {#Poisson.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.batch_shape_tensor(name='batch_shape_tensor')` {#Poisson.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.cdf(value, name='cdf')` {#Poisson.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-
-Additional documentation from `Poisson`:
-
-Note that the input value must be a non-negative floating point tensor with
-dtype `dtype` and whose shape can be broadcast with `self.rate`. `x` is only
-legal if it is non-negative and its components are equal to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.copy(**override_parameters_kwargs)` {#Poisson.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
-    of `self.parameters` and `override_parameters_kwargs`, i.e.,
-    `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.covariance(name='covariance')` {#Poisson.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.dtype` {#Poisson.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.entropy(name='entropy')` {#Poisson.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.event_shape` {#Poisson.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.event_shape_tensor(name='event_shape_tensor')` {#Poisson.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.is_continuous` {#Poisson.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.is_scalar_batch(name='is_scalar_batch')` {#Poisson.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.is_scalar_event(name='is_scalar_event')` {#Poisson.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.log_cdf(value, name='log_cdf')` {#Poisson.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-
-Additional documentation from `Poisson`:
-
-Note that the input value must be a non-negative floating point tensor with
-dtype `dtype` and whose shape can be broadcast with `self.rate`. `x` is only
-legal if it is non-negative and its components are equal to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.log_prob(value, name='log_prob')` {#Poisson.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Poisson`:
-
-Note that the input value must be a non-negative floating point tensor with
-dtype `dtype` and whose shape can be broadcast with `self.rate`. `x` is only
-legal if it is non-negative and its components are equal to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.log_survival_function(value, name='log_survival_function')` {#Poisson.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.mean(name='mean')` {#Poisson.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.mode(name='mode')` {#Poisson.mode}
-
-Mode.
-
-Additional documentation from `Poisson`:
-
-Note: when `rate` is an integer, there are actually two modes: `rate`
-and `rate - 1`. In this case we return the larger, i.e., `rate`.
-
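-A small check of the tie-breaking (an illustrative sketch):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Poisson(rate=3.)
-with tf.Session() as sess:
-  # pmf(2) == pmf(3) == 4.5 * exp(-3) here; mode() returns the larger, 3.
-  print(sess.run(dist.mode()))  # 3.0
-```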
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.name` {#Poisson.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Poisson.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.param_static_shapes(cls, sample_shape)` {#Poisson.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.parameters` {#Poisson.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.prob(value, name='prob')` {#Poisson.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Poisson`:
-
-Note that the input value must be a non-negative floating point tensor with
-dtype `dtype` and whose shape can be broadcast with `self.rate`. `x` is only
-legal if it is non-negative and its components are equal to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.rate` {#Poisson.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.reparameterization_type` {#Poisson.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.sample(sample_shape=(), seed=None, name='sample')` {#Poisson.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.stddev(name='stddev')` {#Poisson.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.survival_function(value, name='survival_function')` {#Poisson.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.validate_args` {#Poisson.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.variance(name='variance')` {#Poisson.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.StudentT` {#StudentT}
-
-Student's t-distribution with degrees of freedom `df`, location `loc`, and `scale` parameters.
-
-#### Mathematical details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; df, mu, sigma) = (1 + y**2 / df)**(-0.5 (df + 1)) / Z
-where,
-y = (x - mu) / sigma
-Z = abs(sigma) sqrt(df pi) Gamma(0.5 df) / Gamma(0.5 (df + 1))
-```
-
-where:
-* `loc = mu`,
-* `scale = sigma`,
-* `Z` is the normalization constant, and
-* `Gamma` is the [gamma function](
-  https://en.wikipedia.org/wiki/Gamma_function).
-
-The StudentT distribution is a member of the [location-scale family](
-https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ StudentT(df, loc=0, scale=1)
-Y = loc + scale * X
-```
-
-Notice that `scale` has semantics more similar to standard deviation than
-variance. However, it is not actually the standard deviation; the Student's
-t-distribution standard deviation is `scale sqrt(df / (df - 2))` when `df > 2`.
-
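-A quick numerical check of that relationship (df and scale are hypothetical):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.StudentT(df=5., loc=0., scale=2.)
-with tf.Session() as sess:
-  # scale * sqrt(df / (df - 2)) = 2 * sqrt(5 / 3) ~= 2.582
-  print(sess.run(dist.stddev()))
-```
-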
-#### Examples
-
-Examples of initialization of one or a batch of distributions.
-
-```python
-# Define a single scalar Student t distribution.
-single_dist = tf.contrib.distributions.StudentT(df=3)
-
-# Evaluate the pdf at 1, returning a scalar Tensor.
-single_dist.prob(1.)
-
-# Define a batch of two scalar valued Student t's.
-# The first has degrees of freedom 2, mean 1, and scale 11.
-# The second has degrees of freedom 3, mean 2, and scale 22.
-multi_dist = tf.contrib.distributions.StudentT(df=[2, 3],
- loc=[1, 2.],
- scale=[11, 22.])
-
-# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
-# returning a length two tensor.
-multi_dist.prob([0, 1.5])
-
-# Get 3 samples, returning a 3 x 2 tensor.
-multi_dist.sample(3)
-```
-
-Arguments are broadcast when possible.
-
-```python
-# Define a batch of two Student's t distributions.
-# Both have df 2 and mean 1, but different scales.
-dist = tf.contrib.distributions.StudentT(df=2, loc=1, scale=[11, 22.])
-
-# Evaluate the pdf of both distributions on the same point, 3.0,
-# returning a length 2 tensor.
-dist.prob(3.0)
-```
-- - -
-
-#### `tf.contrib.distributions.StudentT.__init__(df, loc, scale, validate_args=False, allow_nan_stats=True, name='StudentT')` {#StudentT.__init__}
-
-Construct Student's t distributions.
-
-The distributions have degrees of freedom `df`, mean `loc`, and scale
-`scale`.
-
-The parameters `df`, `loc`, and `scale` must be shaped in a way that
-supports broadcasting (e.g. `df + loc + scale` is a valid operation).
-
-##### Args:
-
-
-* <b>`df`</b>: Floating-point `Tensor`. The degrees of freedom of the
- distribution(s). `df` must contain only positive values.
-* <b>`loc`</b>: Floating-point `Tensor`. The mean(s) of the distribution(s).
-* <b>`scale`</b>: Floating-point `Tensor`. The scaling factor(s) for the
- distribution(s). Note that `scale` is not technically the standard
- deviation of this distribution but has semantics more similar to
- standard deviation than variance.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if loc and scale are different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.allow_nan_stats` {#StudentT.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.batch_shape` {#StudentT.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.batch_shape_tensor(name='batch_shape_tensor')` {#StudentT.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.cdf(value, name='cdf')` {#StudentT.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.copy(**override_parameters_kwargs)` {#StudentT.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
-    of `self.parameters` and `override_parameters_kwargs`, i.e.,
-    `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.covariance(name='covariance')` {#StudentT.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.df` {#StudentT.df}
-
-Degrees of freedom in these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.dtype` {#StudentT.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.entropy(name='entropy')` {#StudentT.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.event_shape` {#StudentT.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.event_shape_tensor(name='event_shape_tensor')` {#StudentT.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.is_continuous` {#StudentT.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.is_scalar_batch(name='is_scalar_batch')` {#StudentT.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.is_scalar_event(name='is_scalar_event')` {#StudentT.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.loc` {#StudentT.loc}
-
-Locations of these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.log_cdf(value, name='log_cdf')` {#StudentT.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.log_prob(value, name='log_prob')` {#StudentT.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.log_survival_function(value, name='log_survival_function')` {#StudentT.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.mean(name='mean')` {#StudentT.mean}
-
-Mean.
-
-Additional documentation from `StudentT`:
-
-The mean of Student's T equals `loc` if `df > 1`, otherwise it is
-`NaN`. If `self.allow_nan_stats=False`, then an exception will be raised
-rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.mode(name='mode')` {#StudentT.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.name` {#StudentT.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#StudentT.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.param_static_shapes(cls, sample_shape)` {#StudentT.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.parameters` {#StudentT.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.prob(value, name='prob')` {#StudentT.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.reparameterization_type` {#StudentT.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.sample(sample_shape=(), seed=None, name='sample')` {#StudentT.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.scale` {#StudentT.scale}
-
-Scaling factors of these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.stddev(name='stddev')` {#StudentT.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.survival_function(value, name='survival_function')` {#StudentT.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.validate_args` {#StudentT.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.variance(name='variance')` {#StudentT.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-
-Additional documentation from `StudentT`:
-
-The variance for Student's T equals
-
-```
-scale**2 * df / (df - 2), when df > 2
-infinity, when 1 < df <= 2
-NaN, when df <= 1
-```
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.StudentTWithAbsDfSoftplusScale` {#StudentTWithAbsDfSoftplusScale}
-
-StudentT with `df = floor(abs(df))` and `scale = softplus(scale)`.
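-
-A sketch of the intended equivalence (not the class's literal implementation):
-
-```python
-import tensorflow as tf
-
-raw_df, raw_scale = tf.constant(-3.7), tf.constant(0.)
-dist = tf.contrib.distributions.StudentTWithAbsDfSoftplusScale(
-    df=raw_df, loc=0., scale=raw_scale)
-# behaves like:
-equiv = tf.contrib.distributions.StudentT(
-    df=tf.floor(tf.abs(raw_df)),      # floor(abs(-3.7)) = 3.0
-    loc=0.,
-    scale=tf.nn.softplus(raw_scale))  # softplus(0.) = log(2) ~= 0.693
-```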
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.__init__(df, loc, scale, validate_args=False, allow_nan_stats=True, name='StudentTWithAbsDfSoftplusScale')` {#StudentTWithAbsDfSoftplusScale.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.allow_nan_stats` {#StudentTWithAbsDfSoftplusScale.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.batch_shape` {#StudentTWithAbsDfSoftplusScale.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.batch_shape_tensor(name='batch_shape_tensor')` {#StudentTWithAbsDfSoftplusScale.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.cdf(value, name='cdf')` {#StudentTWithAbsDfSoftplusScale.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.copy(**override_parameters_kwargs)` {#StudentTWithAbsDfSoftplusScale.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.covariance(name='covariance')` {#StudentTWithAbsDfSoftplusScale.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.df` {#StudentTWithAbsDfSoftplusScale.df}
-
-Degrees of freedom in these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.dtype` {#StudentTWithAbsDfSoftplusScale.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.entropy(name='entropy')` {#StudentTWithAbsDfSoftplusScale.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.event_shape` {#StudentTWithAbsDfSoftplusScale.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.event_shape_tensor(name='event_shape_tensor')` {#StudentTWithAbsDfSoftplusScale.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.is_continuous` {#StudentTWithAbsDfSoftplusScale.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.is_scalar_batch(name='is_scalar_batch')` {#StudentTWithAbsDfSoftplusScale.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.is_scalar_event(name='is_scalar_event')` {#StudentTWithAbsDfSoftplusScale.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.loc` {#StudentTWithAbsDfSoftplusScale.loc}
-
-Locations of these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.log_cdf(value, name='log_cdf')` {#StudentTWithAbsDfSoftplusScale.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.log_prob(value, name='log_prob')` {#StudentTWithAbsDfSoftplusScale.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.log_survival_function(value, name='log_survival_function')` {#StudentTWithAbsDfSoftplusScale.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.mean(name='mean')` {#StudentTWithAbsDfSoftplusScale.mean}
-
-Mean.
-
-Additional documentation from `StudentT`:
-
-The mean of Student's T equals `loc` if `df > 1`, otherwise it is
-`NaN`. If `self.allow_nan_stats=False`, then an exception will be raised
-rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.mode(name='mode')` {#StudentTWithAbsDfSoftplusScale.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.name` {#StudentTWithAbsDfSoftplusScale.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#StudentTWithAbsDfSoftplusScale.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.param_static_shapes(cls, sample_shape)` {#StudentTWithAbsDfSoftplusScale.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.parameters` {#StudentTWithAbsDfSoftplusScale.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.prob(value, name='prob')` {#StudentTWithAbsDfSoftplusScale.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.reparameterization_type` {#StudentTWithAbsDfSoftplusScale.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.sample(sample_shape=(), seed=None, name='sample')` {#StudentTWithAbsDfSoftplusScale.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.scale` {#StudentTWithAbsDfSoftplusScale.scale}
-
-Scaling factors of these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.stddev(name='stddev')` {#StudentTWithAbsDfSoftplusScale.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.survival_function(value, name='survival_function')` {#StudentTWithAbsDfSoftplusScale.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.validate_args` {#StudentTWithAbsDfSoftplusScale.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.variance(name='variance')` {#StudentTWithAbsDfSoftplusScale.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-
-Additional documentation from `StudentT`:
-
-The variance for Student's T equals
-
-```
-df / (df - 2), when df > 2
-infinity, when 1 < df <= 2
-NaN, when df <= 1
-```
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Uniform` {#Uniform}
-
-Uniform distribution with `low` and `high` parameters.
-
-### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; a, b) = I[a <= x < b] / Z
-Z = b - a
-```
-
-where:
-
-* `low = a`,
-* `high = b`,
-* `Z` is the normalizing constant, and,
-* `I[predicate]` is the [indicator function](
-  https://en.wikipedia.org/wiki/Indicator_function) for `predicate`.
-
-The parameters `low` and `high` must be shaped in a way that supports
-broadcasting (e.g., `high - low` is a valid operation).
-
-### Examples
-
-```python
-# Without broadcasting:
-u1 = Uniform(low=3.0, high=4.0) # a single uniform distribution [3, 4]
-u2 = Uniform(low=[1.0, 2.0],
- high=[3.0, 4.0]) # 2 distributions [1, 3], [2, 4]
-u3 = Uniform(low=[[1.0, 2.0],
- [3.0, 4.0]],
- high=[[1.5, 2.5],
- [3.5, 4.5]]) # 4 distributions
-```
-
-```python
-# With broadcasting:
-u1 = Uniform(low=3.0, high=[5.0, 6.0, 7.0]) # 3 distributions
-```
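-
-A minimal sketch of how batch shape propagates through sampling and
-statistics, in the same session-style usage as above:
-
-```python
-ds = tf.contrib.distributions
-
-u = ds.Uniform(low=[1.0, 2.0], high=[3.0, 4.0])  # batch_shape: [2]
-
-# `sample(n)` prepends the sample shape to batch_shape + event_shape.
-x = u.sample(5)  # shape: [5, 2]
-
-u.mean().eval()   # ==> [2., 3.], i.e., (low + high) / 2
-u.range().eval()  # ==> [2., 2.], i.e., high - low
-```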
-- - -
-
-#### `tf.contrib.distributions.Uniform.__init__(low=0.0, high=1.0, validate_args=False, allow_nan_stats=True, name='Uniform')` {#Uniform.__init__}
-
-Initialize a batch of Uniform distributions.
-
-##### Args:
-
-
-* <b>`low`</b>: Floating point tensor, lower boundary of the output interval. Must
- have `low < high`.
-* <b>`high`</b>: Floating point tensor, upper boundary of the output interval. Must
- have `low < high`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True`, distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: if `low >= high` and `validate_args=True`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.allow_nan_stats` {#Uniform.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined; e.g., if a distribution's pdf does not achieve a maximum within
-its support, the mode is undefined. If the mean is undefined, then by
-definition the variance is undefined. E.g., the mean of Student's T with
-df = 1 is undefined (there is no clear way to say it is either + or -
-infinity), so the variance `E[(X - mean)**2]` is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.batch_shape` {#Uniform.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.batch_shape_tensor(name='batch_shape_tensor')` {#Uniform.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.cdf(value, name='cdf')` {#Uniform.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
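-
-For `Uniform`, the cdf has the closed form
-`cdf(x) = clip((x - low) / (high - low), 0, 1)` (a standard identity, not
-stated elsewhere in this document); a minimal sketch:
-
-```python
-ds = tf.contrib.distributions
-
-u = ds.Uniform(low=0., high=4.)
-u.cdf([-1., 1., 2., 5.]).eval()
-# ==> [0., 0.25, 0.5, 1.]
-```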
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.copy(**override_parameters_kwargs)` {#Uniform.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
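-
-A minimal usage sketch: `copy` re-instantiates the class from
-`self.parameters` with the keyword overrides applied on top:
-
-```python
-ds = tf.contrib.distributions
-
-u = ds.Uniform(low=0., high=1.)
-u2 = u.copy(high=2.)  # same `low`, overridden `high`
-```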
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.covariance(name='covariance')` {#Uniform.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.dtype` {#Uniform.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.entropy(name='entropy')` {#Uniform.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.event_shape` {#Uniform.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.event_shape_tensor(name='event_shape_tensor')` {#Uniform.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.high` {#Uniform.high}
-
-Upper boundary of the output interval.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.is_continuous` {#Uniform.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.is_scalar_batch(name='is_scalar_batch')` {#Uniform.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.is_scalar_event(name='is_scalar_event')` {#Uniform.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.log_cdf(value, name='log_cdf')` {#Uniform.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.log_prob(value, name='log_prob')` {#Uniform.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.log_survival_function(value, name='log_survival_function')` {#Uniform.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.low` {#Uniform.low}
-
-Lower boundary of the output interval.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.mean(name='mean')` {#Uniform.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.mode(name='mode')` {#Uniform.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.name` {#Uniform.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Uniform.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.param_static_shapes(cls, sample_shape)` {#Uniform.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
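-
-A minimal sketch, using `ds.Normal` (a distribution documented elsewhere in
-this file) as an example of a class whose `_param_shapes` is implemented;
-the shapes below are what that implementation is expected to return:
-
-```python
-ds = tf.contrib.distributions
-
-ds.Normal.param_static_shapes([100])
-# ==> {'loc': TensorShape([100]), 'scale': TensorShape([100])}
-```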
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.parameters` {#Uniform.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.prob(value, name='prob')` {#Uniform.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.range(name='range')` {#Uniform.range}
-
-`high - low`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.reparameterization_type` {#Uniform.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
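-
-A minimal sketch of how this property is typically consumed, assuming
-`FULLY_REPARAMETERIZED` is exported from the same namespace as the
-distributions (as the description above suggests):
-
-```python
-ds = tf.contrib.distributions
-
-dist = ds.Uniform(low=0., high=1.)
-if dist.reparameterization_type == ds.FULLY_REPARAMETERIZED:
-  # Pathwise gradients may flow through `sample()` to the parameters.
-  loss = tf.reduce_mean(tf.square(dist.sample(10)))
-```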
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.sample(sample_shape=(), seed=None, name='sample')` {#Uniform.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.stddev(name='stddev')` {#Uniform.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.survival_function(value, name='survival_function')` {#Uniform.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.validate_args` {#Uniform.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.variance(name='variance')` {#Uniform.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-
-- - -
-
-### `class tf.contrib.distributions.MultivariateNormalDiag` {#MultivariateNormalDiag}
-
-The multivariate normal distribution on `R^k`.
-
-The Multivariate Normal distribution is defined over `R^k` and parameterized
-by a (batch of) length-`k` `loc` vector (aka "mu") and a (batch of) `k x k`
-`scale` matrix; `covariance = scale @ scale.T` where `@` denotes
-matrix-multiplication.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; loc, scale) = exp(-0.5 ||y||**2) / Z,
-y = inv(scale) @ (x - loc),
-Z = (2 pi)**(0.5 k) |det(scale)|,
-```
-
-where:
-
-* `loc` is a vector in `R^k`,
-* `scale` is a linear operator in `R^{k x k}`, `cov = scale @ scale.T`,
-* `Z` denotes the normalization constant, and,
-* `||y||**2` denotes the squared Euclidean norm of `y`.
-
-A (non-batch) `scale` matrix is:
-
-```none
-scale = diag(scale_diag + scale_identity_multiplier * ones(k))
-```
-
-where:
-
-* `scale_diag.shape = [k]`, and,
-* `scale_identity_multiplier.shape = []`.
-
-Additional leading dimensions (if any) will index batches.
-
-If both `scale_diag` and `scale_identity_multiplier` are `None`, then
-`scale` is the Identity matrix.
-
-The MultivariateNormal distribution is a member of the [location-scale
-family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ MultivariateNormal(loc=0, scale=1) # Identity scale, zero shift.
-Y = scale @ X + loc
-```
-
-#### Examples
-
-```python
-ds = tf.contrib.distributions
-
-# Initialize a single 2-variate Gaussian.
-mvn = ds.MultivariateNormalDiag(
- loc=[1., -1],
- scale_diag=[1, 2.])
-
-mvn.mean().eval()
-# ==> [1., -1]
-
-mvn.stddev().eval()
-# ==> [1., 2]
-
-# Evaluate this on an observation in `R^2`, returning a scalar.
-mvn.prob([-1., 0]).eval() # shape: []
-
-# Initialize a 3-batch, 2-variate scaled-identity Gaussian.
-mvn = ds.MultivariateNormalDiag(
- loc=[1., -1],
- scale_identity_multiplier=[1, 2., 3])
-
-mvn.mean().eval() # shape: [3, 2]
-# ==> [[1., -1]
-# [1, -1],
-# [1, -1]]
-
-mvn.stddev().eval() # shape: [3, 2]
-# ==> [[1., 1],
-# [2, 2],
-# [3, 3]]
-
-# Evaluate this on an observation in `R^2`, returning a length-3 vector.
-mvn.prob([-1., 0]).eval() # shape: [3]
-
-# Initialize a 2-batch of 3-variate Gaussians.
-mvn = ds.MultivariateNormalDiag(
- loc=[[1., 2, 3],
-    [11, 22, 33]], # shape: [2, 3]
- scale_diag=[[1., 2, 3],
- [0.5, 1, 1.5]]) # shape: [2, 3]
-
-# Evaluate this on two observations, each in `R^3`, returning a length-2
-# vector.
-x = [[-1., 0, 1],
- [-11, 0, 11.]] # shape: [2, 3].
-mvn.prob(x).eval() # shape: [2]
-```
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.__init__(loc=None, scale_diag=None, scale_identity_multiplier=None, validate_args=False, allow_nan_stats=True, name='MultivariateNormalDiag')` {#MultivariateNormalDiag.__init__}
-
-Construct Multivariate Normal distribution on `R^k`.
-
-The `batch_shape` is the broadcast shape between `loc` and `scale`
-arguments.
-
-The `event_shape` is given by the last dimension of `loc` or the last
-dimension of the matrix implied by `scale`.
-
-Recall that `covariance = scale @ scale.T`. A (non-batch) `scale` matrix is:
-
-```none
-scale = diag(scale_diag + scale_identity_multiplier * ones(k))
-```
-
-where:
-
-* `scale_diag.shape = [k]`, and,
-* `scale_identity_multiplier.shape = []`.
-
-Additional leading dimensions (if any) will index batches.
-
-If both `scale_diag` and `scale_identity_multiplier` are `None`, then
-`scale` is the Identity matrix.
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating-point `Tensor`. If this is set to `None`, `loc` is
- implicitly `0`. When specified, may have shape `[B1, ..., Bb, k]` where
- `b >= 0` and `k` is the event size.
-* <b>`scale_diag`</b>: Non-zero, floating-point `Tensor` representing a diagonal
- matrix added to `scale`. May have shape `[B1, ..., Bb, k]`, `b >= 0`,
- and characterizes `b`-batches of `k x k` diagonal matrices added to
- `scale`. When both `scale_identity_multiplier` and `scale_diag` are
- `None` then `scale` is the `Identity`.
-* <b>`scale_identity_multiplier`</b>: Non-zero, floating-point `Tensor` representing
- a scaled-identity-matrix added to `scale`. May have shape
- `[B1, ..., Bb]`, `b >= 0`, and characterizes `b`-batches of scaled
- `k x k` identity matrices added to `scale`. When both
- `scale_identity_multiplier` and `scale_diag` are `None` then `scale` is
- the `Identity`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True`, distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if at most `scale_identity_multiplier` is specified,
-  i.e., if `loc` and `scale_diag` are both `None`, so the event size `k`
-  cannot be inferred.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.allow_nan_stats` {#MultivariateNormalDiag.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined; e.g., if a distribution's pdf does not achieve a maximum within
-its support, the mode is undefined. If the mean is undefined, then by
-definition the variance is undefined. E.g., the mean of Student's T with
-df = 1 is undefined (there is no clear way to say it is either + or -
-infinity), so the variance `E[(X - mean)**2]` is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.batch_shape` {#MultivariateNormalDiag.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.batch_shape_tensor(name='batch_shape_tensor')` {#MultivariateNormalDiag.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.bijector` {#MultivariateNormalDiag.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.cdf(value, name='cdf')` {#MultivariateNormalDiag.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.copy(**override_parameters_kwargs)` {#MultivariateNormalDiag.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.covariance(name='covariance')` {#MultivariateNormalDiag.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
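-
-For `MultivariateNormalDiag` specifically, `scale` is diagonal, so
-`covariance = scale @ scale.T = diag(scale_diag**2)`; a minimal sketch:
-
-```python
-ds = tf.contrib.distributions
-
-mvn = ds.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 2.])
-mvn.covariance().eval()
-# ==> [[1., 0.],
-#      [0., 4.]]
-```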
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.det_covariance(name='det_covariance')` {#MultivariateNormalDiag.det_covariance}
-
-Determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.distribution` {#MultivariateNormalDiag.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.dtype` {#MultivariateNormalDiag.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.entropy(name='entropy')` {#MultivariateNormalDiag.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.event_shape` {#MultivariateNormalDiag.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.event_shape_tensor(name='event_shape_tensor')` {#MultivariateNormalDiag.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.is_continuous` {#MultivariateNormalDiag.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.is_scalar_batch(name='is_scalar_batch')` {#MultivariateNormalDiag.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.is_scalar_event(name='is_scalar_event')` {#MultivariateNormalDiag.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.loc` {#MultivariateNormalDiag.loc}
-
-The `loc` `Tensor` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.log_cdf(value, name='log_cdf')` {#MultivariateNormalDiag.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.log_det_covariance(name='log_det_covariance')` {#MultivariateNormalDiag.log_det_covariance}
-
-Log of determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.log_prob(value, name='log_prob')` {#MultivariateNormalDiag.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
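-
-A minimal sketch of the two compatible shapes, reusing the diag
-parameterization from the examples above:
-
-```python
-ds = tf.contrib.distributions
-
-# batch_shape: [2], event_shape: [3]
-mvn = ds.MultivariateNormalDiag(
-    loc=[[0., 0, 0],
-         [1., 1, 1]],
-    scale_diag=[[1., 1, 1],
-                [2., 2, 2]])
-
-mvn.log_prob([0., 0, 0])           # broadcasts against the batch; shape: [2]
-mvn.log_prob(tf.zeros([4, 2, 3]))  # [M1] + batch + event; shape: [4, 2]
-```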
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.log_survival_function(value, name='log_survival_function')` {#MultivariateNormalDiag.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.mean(name='mean')` {#MultivariateNormalDiag.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.mode(name='mode')` {#MultivariateNormalDiag.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.name` {#MultivariateNormalDiag.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#MultivariateNormalDiag.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.param_static_shapes(cls, sample_shape)` {#MultivariateNormalDiag.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.parameters` {#MultivariateNormalDiag.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.prob(value, name='prob')` {#MultivariateNormalDiag.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.reparameterization_type` {#MultivariateNormalDiag.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.sample(sample_shape=(), seed=None, name='sample')` {#MultivariateNormalDiag.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.scale` {#MultivariateNormalDiag.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.stddev(name='stddev')` {#MultivariateNormalDiag.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.survival_function(value, name='survival_function')` {#MultivariateNormalDiag.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.validate_args` {#MultivariateNormalDiag.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.variance(name='variance')` {#MultivariateNormalDiag.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.MultivariateNormalTriL` {#MultivariateNormalTriL}
-
-The multivariate normal distribution on `R^k`.
-
-The Multivariate Normal distribution is defined over `R^k` and parameterized
-by a (batch of) length-`k` `loc` vector (aka "mu") and a (batch of) `k x k`
-`scale` matrix; `covariance = scale @ scale.T` where `@` denotes
-matrix-multiplication.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; loc, scale) = exp(-0.5 ||y||**2) / Z,
-y = inv(scale) @ (x - loc),
-Z = (2 pi)**(0.5 k) |det(scale)|,
-```
-
-where:
-
-* `loc` is a vector in `R^k`,
-* `scale` is a linear operator in `R^{k x k}`, `cov = scale @ scale.T`,
-* `Z` denotes the normalization constant, and,
-* `||y||**2` denotes the squared Euclidean norm of `y`.
-
-A (non-batch) `scale` matrix is:
-
-```none
-scale = scale_tril
-```
-
-where `scale_tril` is a lower-triangular `k x k` matrix with non-zero diagonal,
-i.e., `tf.diag_part(scale_tril) != 0`.
-
-Additional leading dimensions (if any) will index batches.
-
-The MultivariateNormal distribution is a member of the [location-scale
-family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ MultivariateNormal(loc=0, scale=1) # Identity scale, zero shift.
-Y = scale @ X + loc
-```
-
-Trainable (batch) lower-triangular matrices can be created with
-`ds.matrix_diag_transform()` and/or `ds.fill_lower_triangular()`, as
-sketched below.
-
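-A minimal sketch, assuming `tf.matrix_band_part` and `tf.nn.softplus` for
-the lower-triangle mask and the positive diagonal, respectively:
-
-```python
-ds = tf.contrib.distributions
-
-# Unconstrained variable -> valid `scale_tril`: keep the lower triangle,
-# then force a positive (hence non-zero) diagonal.
-raw = tf.Variable(tf.random_normal([3, 3]))
-scale_tril = ds.matrix_diag_transform(tf.matrix_band_part(raw, -1, 0),
-                                      transform=tf.nn.softplus)
-```
-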
-#### Examples
-
-```python
-ds = tf.contrib.distributions
-
-# Initialize a single 3-variate Gaussian.
-mu = [1., 2, 3]
-cov = [[ 0.36, 0.12, 0.06],
- [ 0.12, 0.29, -0.13],
- [ 0.06, -0.13, 0.26]]
-scale = tf.cholesky(cov)
-# ==> [[ 0.6, 0. , 0. ],
-# [ 0.2, 0.5, 0. ],
-# [ 0.1, -0.3, 0.4]])
-mvn = ds.MultivariateNormalTriL(
- loc=mu,
- scale_tril=scale)
-
-mvn.mean().eval()
-# ==> [1., 2, 3]
-
-# Covariance agrees with cholesky(cov) parameterization.
-mvn.covariance().eval()
-# ==> [[ 0.36, 0.12, 0.06],
-# [ 0.12, 0.29, -0.13],
-# [ 0.06, -0.13, 0.26]]
-
-# Compute the pdf of an observation in `R^3`; return a scalar.
-mvn.prob([-1., 0, 1]).eval() # shape: []
-
-# Initialize a 2-batch of 3-variate Gaussians.
-mu = [[1., 2, 3],
- [11, 22, 33]] # shape: [2, 3]
-tril = ... # shape: [2, 3, 3], lower triangular, non-zero diagonal.
-mvn = ds.MultivariateNormalTriL(
- loc=mu,
- scale_tril=tril)
-
-# Compute the pdf of two `R^3` observations; return a length-2 vector.
-x = [[-0.9, 0, 0.1],
- [-10, 0, 9]] # shape: [2, 3]
-mvn.prob(x).eval() # shape: [2]
-```
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.__init__(loc=None, scale_tril=None, validate_args=False, allow_nan_stats=True, name='MultivariateNormalTriL')` {#MultivariateNormalTriL.__init__}
-
-Construct Multivariate Normal distribution on `R^k`.
-
-The `batch_shape` is the broadcast shape between `loc` and `scale`
-arguments.
-
-The `event_shape` is given by the last dimension of `loc` or the last
-dimension of the matrix implied by `scale`.
-
-Recall that `covariance = scale @ scale.T`. A (non-batch) `scale` matrix is:
-
-```none
-scale = scale_tril
-```
-
-where `scale_tril` is a lower-triangular `k x k` matrix with non-zero
-diagonal, i.e., `tf.diag_part(scale_tril) != 0`.
-
-Additional leading dimensions (if any) will index batches.
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating-point `Tensor`. If this is set to `None`, `loc` is
- implicitly `0`. When specified, may have shape `[B1, ..., Bb, k]` where
- `b >= 0` and `k` is the event size.
-* <b>`scale_tril`</b>: Floating-point, lower-triangular `Tensor` with non-zero
- diagonal elements. `scale_tril` has shape `[B1, ..., Bb, k, k]` where
- `b >= 0` and `k` is the event size.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True`, distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if neither `loc` nor `scale_tril` is specified.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.allow_nan_stats` {#MultivariateNormalTriL.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined; e.g., if a distribution's pdf does not achieve a maximum within
-its support, the mode is undefined. If the mean is undefined, then by
-definition the variance is undefined. E.g., the mean of Student's T with
-df = 1 is undefined (there is no clear way to say it is either + or -
-infinity), so the variance `E[(X - mean)**2]` is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.batch_shape` {#MultivariateNormalTriL.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.batch_shape_tensor(name='batch_shape_tensor')` {#MultivariateNormalTriL.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.bijector` {#MultivariateNormalTriL.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.cdf(value, name='cdf')` {#MultivariateNormalTriL.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.copy(**override_parameters_kwargs)` {#MultivariateNormalTriL.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.covariance(name='covariance')` {#MultivariateNormalTriL.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.det_covariance(name='det_covariance')` {#MultivariateNormalTriL.det_covariance}
-
-Determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.distribution` {#MultivariateNormalTriL.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.dtype` {#MultivariateNormalTriL.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.entropy(name='entropy')` {#MultivariateNormalTriL.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.event_shape` {#MultivariateNormalTriL.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.event_shape_tensor(name='event_shape_tensor')` {#MultivariateNormalTriL.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.is_continuous` {#MultivariateNormalTriL.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.is_scalar_batch(name='is_scalar_batch')` {#MultivariateNormalTriL.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.is_scalar_event(name='is_scalar_event')` {#MultivariateNormalTriL.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.loc` {#MultivariateNormalTriL.loc}
-
-The `loc` `Tensor` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.log_cdf(value, name='log_cdf')` {#MultivariateNormalTriL.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.log_det_covariance(name='log_det_covariance')` {#MultivariateNormalTriL.log_det_covariance}
-
-Log of determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.log_prob(value, name='log_prob')` {#MultivariateNormalTriL.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
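-A sketch of the two broadcast cases described above (single distribution,
-values assumed):
-
-```python
-ds = tf.contrib.distributions
-mvn = ds.MultivariateNormalTriL(
-    loc=[0., 0],
-    scale_tril=[[1., 0],
-                [0.5, 1]])
-# Shape [2] broadcasts to batch_shape + event_shape = [] + [2].
-mvn.log_prob([1., -1])     # shape: []
-# Shape [3, 2] is [M1] + batch_shape + event_shape with M1 = 3.
-mvn.log_prob([[1., -1],
-              [0., 0],
-              [2., 1]])    # shape: [3]
-```
-
-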
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.log_survival_function(value, name='log_survival_function')` {#MultivariateNormalTriL.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.mean(name='mean')` {#MultivariateNormalTriL.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.mode(name='mode')` {#MultivariateNormalTriL.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.name` {#MultivariateNormalTriL.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#MultivariateNormalTriL.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
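-Illustrated below with the scalar `Normal` distribution, which overrides
-`_param_shapes`; not every distribution (including this one) necessarily does:
-
-```python
-ds = tf.contrib.distributions
-shapes = ds.Normal.param_shapes([100, 2])
-# ==> {'loc': <Tensor: [100, 2]>, 'scale': <Tensor: [100, 2]>}
-```
-
-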
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.param_static_shapes(cls, sample_shape)` {#MultivariateNormalTriL.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.parameters` {#MultivariateNormalTriL.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.prob(value, name='prob')` {#MultivariateNormalTriL.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.reparameterization_type` {#MultivariateNormalTriL.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
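-For instance, a sketch of gating the reparameterization trick on this
-property (values assumed):
-
-```python
-ds = tf.contrib.distributions
-mvn = ds.MultivariateNormalTriL(loc=[0., 0], scale_tril=[[1., 0], [0.5, 1]])
-if mvn.reparameterization_type == ds.FULLY_REPARAMETERIZED:
-  # Pathwise gradients can flow from samples back to `loc` and `scale`.
-  loss = tf.reduce_mean(mvn.sample(10))
-```
-
-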
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.sample(sample_shape=(), seed=None, name='sample')` {#MultivariateNormalTriL.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
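-A sketch (distribution values assumed):
-
-```python
-ds = tf.contrib.distributions
-mvn = ds.MultivariateNormalTriL(
-    loc=[1., 2, 3],
-    scale_tril=[[1., 0, 0],
-                [2., 3, 0],
-                [4., 5, 6]])
-samples = mvn.sample([4, 5], seed=42)
-# samples.shape: sample_shape + batch_shape + event_shape = [4, 5, 3]
-```
-
-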
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.scale` {#MultivariateNormalTriL.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.stddev(name='stddev')` {#MultivariateNormalTriL.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
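-Per the definitions here and under `variance`, `stddev` is the elementwise
-square root of `variance`; a quick (hypothetical) consistency check:
-
-```python
-ds = tf.contrib.distributions
-mvn = ds.MultivariateNormalTriL(loc=[0., 0], scale_tril=[[2., 0], [1., 3]])
-sd = mvn.stddev()
-root_var = tf.sqrt(mvn.variance())
-# sd and root_var should agree elementwise (up to numerical error).
-```
-
-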
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.survival_function(value, name='survival_function')` {#MultivariateNormalTriL.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.validate_args` {#MultivariateNormalTriL.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.variance(name='variance')` {#MultivariateNormalTriL.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.MultivariateNormalDiagPlusLowRank` {#MultivariateNormalDiagPlusLowRank}
-
-The multivariate normal distribution on `R^k`.
-
-The Multivariate Normal distribution is defined over `R^k` and parameterized
-by a (batch of) length-`k` `loc` vector (aka "mu") and a (batch of) `k x k`
-`scale` matrix; `covariance = scale @ scale.T` where `@` denotes
-matrix-multiplication.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; loc, scale) = exp(-0.5 ||y||**2) / Z,
-y = inv(scale) @ (x - loc),
-Z = (2 pi)**(0.5 k) |det(scale)|,
-```
-
-where:
-
-* `loc` is a vector in `R^k`,
-* `scale` is a linear operator in `R^{k x k}`, `cov = scale @ scale.T`,
-* `Z` denotes the normalization constant, and,
-* `||y||**2` denotes the squared Euclidean norm of `y`.
-
-A (non-batch) `scale` matrix is:
-
-```none
-scale = diag(scale_diag + scale_identity_multiplier ones(k)) +
- scale_perturb_factor @ diag(scale_perturb_diag) @ scale_perturb_factor.T
-```
-
-where:
-
-* `scale_diag.shape = [k]`,
-* `scale_identity_multiplier.shape = []`,
-* `scale_perturb_factor.shape = [k, r]`, typically `k >> r`, and,
-* `scale_perturb_diag.shape = [r]`.
-
-Additional leading dimensions (if any) will index batches.
-
-If both `scale_diag` and `scale_identity_multiplier` are `None`, then
-`scale` is the Identity matrix.
-
-The MultivariateNormal distribution is a member of the [location-scale
-family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ MultivariateNormal(loc=0, scale=1) # Identity scale, zero shift.
-Y = scale @ X + loc
-```
-
-#### Examples
-
-```python
-ds = tf.contrib.distributions
-
-# Initialize a single 3-variate Gaussian with covariance `cov = S @ S.T`,
-# `S = diag(d) + U @ diag(m) @ U.T`. The perturbation, `U @ diag(m) @ U.T`, is
-# a rank-2 update.
-mu = [-0.5, 0, 0.5] # shape: [3]
-d = [1.5, 0.5, 2] # shape: [3]
-U = [[1., 2],
- [-1, 1],
- [2, -0.5]] # shape: [3, 2]
-m = [4., 5] # shape: [2]
-mvn = ds.MultivariateNormalDiagPlusLowRank(
-    loc=mu,
-    scale_diag=d,
-    scale_perturb_factor=U,
-    scale_perturb_diag=m)
-
-# Evaluate this on an observation in `R^3`, returning a scalar.
-mvn.prob([-1, 0, 1]).eval() # shape: []
-
-# Initialize a 2-batch of 3-variate Gaussians; `S = diag(d) + U @ U.T`.
-mu = [[1., 2, 3],
- [11, 22, 33]] # shape: [b, k] = [2, 3]
-U = [[[1., 2],
- [3, 4],
- [5, 6]],
- [[0.5, 0.75],
- [1.0, 0.25],
- [1.5, 1.25]]] # shape: [b, k, r] = [2, 3, 2]
-m = [[0.1, 0.2],
- [0.4, 0.5]] # shape: [b, r] = [2, 2]
-
-mvn = ds.MultivariateNormalDiagPlusLowRank(
- loc=mu,
- scale_perturb_factor=U,
- scale_perturb_diag=m)
-
-mvn.covariance().eval() # shape: [2, 3, 3]
-# ==> [[[ 15.63 31.57 48.51]
-# [ 31.57 69.31 105.05]
-# [ 48.51 105.05 162.59]]
-#
-# [[ 2.59 1.41 3.35]
-# [ 1.41 2.71 3.34]
-# [ 3.35 3.34 8.35]]]
-
-# Compute the pdf of two `R^3` observations (one from each batch);
-# return a length-2 vector.
-x = [[-0.9, 0, 0.1],
- [-10, 0, 9]] # shape: [2, 3]
-mvn.prob(x).eval() # shape: [2]
-```
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.__init__(loc=None, scale_diag=None, scale_identity_multiplier=None, scale_perturb_factor=None, scale_perturb_diag=None, validate_args=False, allow_nan_stats=True, name='MultivariateNormalDiagPlusLowRank')` {#MultivariateNormalDiagPlusLowRank.__init__}
-
-Construct Multivariate Normal distribution on `R^k`.
-
-The `batch_shape` is the broadcast shape between `loc` and `scale`
-arguments.
-
-The `event_shape` is given by the last dimension of `loc` or the last
-dimension of the matrix implied by `scale`.
-
-Recall that `covariance = scale @ scale.T`. A (non-batch) `scale` matrix is:
-
-```none
-scale = diag(scale_diag + scale_identity_multiplier ones(k)) +
- scale_perturb_factor @ diag(scale_perturb_diag) @ scale_perturb_factor.T
-```
-
-where:
-
-* `scale_diag.shape = [k]`,
-* `scale_identity_multiplier.shape = []`,
-* `scale_perturb_factor.shape = [k, r]`, typically `k >> r`, and,
-* `scale_perturb_diag.shape = [r]`.
-
-Additional leading dimensions (if any) will index batches.
-
-If both `scale_diag` and `scale_identity_multiplier` are `None`, then
-`scale` is the Identity matrix.
-
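-As a concrete check of the construction above, the (hypothetical) values
-below assemble `scale` by hand in NumPy:
-
-```python
-import numpy as np
-
-k, r = 3, 2
-scale_diag = np.array([1.5, 0.5, 2.])          # shape: [k]
-scale_identity_multiplier = 0.                 # shape: []
-scale_perturb_factor = np.array([[1., 2],
-                                 [-1, 1],
-                                 [2, -0.5]])   # shape: [k, r]
-scale_perturb_diag = np.array([4., 5])         # shape: [r]
-
-scale = (np.diag(scale_diag + scale_identity_multiplier * np.ones(k)) +
-         scale_perturb_factor.dot(np.diag(scale_perturb_diag))
-                             .dot(scale_perturb_factor.T))
-cov = scale.dot(scale.T)  # covariance = scale @ scale.T
-```
-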
-##### Args:
-
-
-* <b>`loc`</b>: Floating-point `Tensor`. If this is set to `None`, `loc` is
- implicitly `0`. When specified, may have shape `[B1, ..., Bb, k]` where
- `b >= 0` and `k` is the event size.
-* <b>`scale_diag`</b>: Non-zero, floating-point `Tensor` representing a diagonal
- matrix added to `scale`. May have shape `[B1, ..., Bb, k]`, `b >= 0`,
- and characterizes `b`-batches of `k x k` diagonal matrices added to
- `scale`. When both `scale_identity_multiplier` and `scale_diag` are
- `None` then `scale` is the `Identity`.
-* <b>`scale_identity_multiplier`</b>: Non-zero, floating-point `Tensor` representing
- a scaled-identity-matrix added to `scale`. May have shape
- `[B1, ..., Bb]`, `b >= 0`, and characterizes `b`-batches of scaled
- `k x k` identity matrices added to `scale`. When both
- `scale_identity_multiplier` and `scale_diag` are `None` then `scale` is
- the `Identity`.
-* <b>`scale_perturb_factor`</b>: Floating-point `Tensor` representing a rank-`r`
- perturbation added to `scale`. May have shape `[B1, ..., Bb, k, r]`,
- `b >= 0`, and characterizes `b`-batches of rank-`r` updates to `scale`.
- When `None`, no rank-`r` update is added to `scale`.
-* <b>`scale_perturb_diag`</b>: Floating-point `Tensor` representing a diagonal matrix
- inside the rank-`r` perturbation added to `scale`. May have shape
- `[B1, ..., Bb, r]`, `b >= 0`, and characterizes `b`-batches of `r x r`
- diagonal matrices inside the perturbation added to `scale`. When
- `None`, an identity matrix is used inside the perturbation. Can only be
- specified if `scale_perturb_factor` is also specified.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if at most `scale_identity_multiplier` is specified,
-  i.e., none of the other scale arguments is given.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.allow_nan_stats` {#MultivariateNormalDiagPlusLowRank.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.batch_shape` {#MultivariateNormalDiagPlusLowRank.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.batch_shape_tensor(name='batch_shape_tensor')` {#MultivariateNormalDiagPlusLowRank.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.bijector` {#MultivariateNormalDiagPlusLowRank.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.cdf(value, name='cdf')` {#MultivariateNormalDiagPlusLowRank.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.copy(**override_parameters_kwargs)` {#MultivariateNormalDiagPlusLowRank.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.covariance(name='covariance')` {#MultivariateNormalDiagPlusLowRank.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.det_covariance(name='det_covariance')` {#MultivariateNormalDiagPlusLowRank.det_covariance}
-
-Determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.distribution` {#MultivariateNormalDiagPlusLowRank.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.dtype` {#MultivariateNormalDiagPlusLowRank.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.entropy(name='entropy')` {#MultivariateNormalDiagPlusLowRank.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.event_shape` {#MultivariateNormalDiagPlusLowRank.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.event_shape_tensor(name='event_shape_tensor')` {#MultivariateNormalDiagPlusLowRank.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.is_continuous` {#MultivariateNormalDiagPlusLowRank.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.is_scalar_batch(name='is_scalar_batch')` {#MultivariateNormalDiagPlusLowRank.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.is_scalar_event(name='is_scalar_event')` {#MultivariateNormalDiagPlusLowRank.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.loc` {#MultivariateNormalDiagPlusLowRank.loc}
-
-The `loc` `Tensor` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.log_cdf(value, name='log_cdf')` {#MultivariateNormalDiagPlusLowRank.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.log_det_covariance(name='log_det_covariance')` {#MultivariateNormalDiagPlusLowRank.log_det_covariance}
-
-Log of determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.log_prob(value, name='log_prob')` {#MultivariateNormalDiagPlusLowRank.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.log_survival_function(value, name='log_survival_function')` {#MultivariateNormalDiagPlusLowRank.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.mean(name='mean')` {#MultivariateNormalDiagPlusLowRank.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.mode(name='mode')` {#MultivariateNormalDiagPlusLowRank.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.name` {#MultivariateNormalDiagPlusLowRank.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#MultivariateNormalDiagPlusLowRank.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.param_static_shapes(cls, sample_shape)` {#MultivariateNormalDiagPlusLowRank.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.parameters` {#MultivariateNormalDiagPlusLowRank.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.prob(value, name='prob')` {#MultivariateNormalDiagPlusLowRank.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.reparameterization_type` {#MultivariateNormalDiagPlusLowRank.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.sample(sample_shape=(), seed=None, name='sample')` {#MultivariateNormalDiagPlusLowRank.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.scale` {#MultivariateNormalDiagPlusLowRank.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.stddev(name='stddev')` {#MultivariateNormalDiagPlusLowRank.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.survival_function(value, name='survival_function')` {#MultivariateNormalDiagPlusLowRank.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.validate_args` {#MultivariateNormalDiagPlusLowRank.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.variance(name='variance')` {#MultivariateNormalDiagPlusLowRank.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale` {#MultivariateNormalDiagWithSoftplusScale}
-
-MultivariateNormalDiag where the diagonal scale is `softplus(scale_diag)`.
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.__init__(loc, scale_diag, validate_args=False, allow_nan_stats=True, name='MultivariateNormalDiagWithSoftplusScale')` {#MultivariateNormalDiagWithSoftplusScale.__init__}
-
-
-
-
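-The signature mirrors `MultivariateNormalDiag`, but the diagonal scale is
-passed through `softplus`. A minimal sketch (values assumed):
-
-```python
-ds = tf.contrib.distributions
-# Entries of `scale_diag` may be any real numbers; the effective diagonal
-# stddevs are softplus(scale_diag) = log(1 + exp(scale_diag)) > 0.
-mvn = ds.MultivariateNormalDiagWithSoftplusScale(
-    loc=[1., -1],
-    scale_diag=[-1., 2])
-```
-
-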
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.allow_nan_stats` {#MultivariateNormalDiagWithSoftplusScale.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.batch_shape` {#MultivariateNormalDiagWithSoftplusScale.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.batch_shape_tensor(name='batch_shape_tensor')` {#MultivariateNormalDiagWithSoftplusScale.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.bijector` {#MultivariateNormalDiagWithSoftplusScale.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.cdf(value, name='cdf')` {#MultivariateNormalDiagWithSoftplusScale.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.copy(**override_parameters_kwargs)` {#MultivariateNormalDiagWithSoftplusScale.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.covariance(name='covariance')` {#MultivariateNormalDiagWithSoftplusScale.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.det_covariance(name='det_covariance')` {#MultivariateNormalDiagWithSoftplusScale.det_covariance}
-
-Determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.distribution` {#MultivariateNormalDiagWithSoftplusScale.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.dtype` {#MultivariateNormalDiagWithSoftplusScale.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.entropy(name='entropy')` {#MultivariateNormalDiagWithSoftplusScale.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.event_shape` {#MultivariateNormalDiagWithSoftplusScale.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.event_shape_tensor(name='event_shape_tensor')` {#MultivariateNormalDiagWithSoftplusScale.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.is_continuous` {#MultivariateNormalDiagWithSoftplusScale.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.is_scalar_batch(name='is_scalar_batch')` {#MultivariateNormalDiagWithSoftplusScale.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.is_scalar_event(name='is_scalar_event')` {#MultivariateNormalDiagWithSoftplusScale.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.loc` {#MultivariateNormalDiagWithSoftplusScale.loc}
-
-The `loc` `Tensor` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.log_cdf(value, name='log_cdf')` {#MultivariateNormalDiagWithSoftplusScale.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.log_det_covariance(name='log_det_covariance')` {#MultivariateNormalDiagWithSoftplusScale.log_det_covariance}
-
-Log of determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.log_prob(value, name='log_prob')` {#MultivariateNormalDiagWithSoftplusScale.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.log_survival_function(value, name='log_survival_function')` {#MultivariateNormalDiagWithSoftplusScale.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.mean(name='mean')` {#MultivariateNormalDiagWithSoftplusScale.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.mode(name='mode')` {#MultivariateNormalDiagWithSoftplusScale.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.name` {#MultivariateNormalDiagWithSoftplusScale.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#MultivariateNormalDiagWithSoftplusScale.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.param_static_shapes(cls, sample_shape)` {#MultivariateNormalDiagWithSoftplusScale.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.parameters` {#MultivariateNormalDiagWithSoftplusScale.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.prob(value, name='prob')` {#MultivariateNormalDiagWithSoftplusScale.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.reparameterization_type` {#MultivariateNormalDiagWithSoftplusScale.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.sample(sample_shape=(), seed=None, name='sample')` {#MultivariateNormalDiagWithSoftplusScale.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.scale` {#MultivariateNormalDiagWithSoftplusScale.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.stddev(name='stddev')` {#MultivariateNormalDiagWithSoftplusScale.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.survival_function(value, name='survival_function')` {#MultivariateNormalDiagWithSoftplusScale.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.validate_args` {#MultivariateNormalDiagWithSoftplusScale.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.variance(name='variance')` {#MultivariateNormalDiagWithSoftplusScale.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Dirichlet` {#Dirichlet}
-
-Dirichlet distribution.
-
-The Dirichlet distribution is defined over the
-[`(k-1)`-simplex](https://en.wikipedia.org/wiki/Simplex) using a positive,
-length-`k` vector `concentration` (`k > 1`). The Dirichlet is identically the
-Beta distribution when `k = 2`.
-
-#### Mathematical Details
-
-The Dirichlet is a distribution over the open `(k-1)`-simplex, i.e.,
-
-```none
-S^{k-1} = { (x_0, ..., x_{k-1}) in R^k : sum_j x_j = 1 and all_j x_j > 0 }.
-```
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; alpha) = prod_j x_j**(alpha_j - 1) / Z
-Z = prod_j Gamma(alpha_j) / Gamma(sum_j alpha_j)
-```
-
-where:
-
-* `x in S^{k-1}`, i.e., the `(k-1)`-simplex,
-* `concentration = alpha = [alpha_0, ..., alpha_{k-1}]`, `alpha_j > 0`,
-* `Z` is the normalization constant aka the [multivariate beta function](
- https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function),
- and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The `concentration` represents mean total counts of class occurrence, i.e.,
-
-```none
-concentration = alpha = mean * total_concentration
-```
-
-where `mean` in `S^{k-1}` and `total_concentration` is a positive real number
-representing a mean total count.
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-#### Examples
-
-```python
-# Create a single trivariate Dirichlet, with the 3rd class being three times
-# more frequent than the first. I.e., batch_shape=[], event_shape=[3].
-alpha = [1., 2, 3]
-dist = Dirichlet(alpha)
-
-dist.sample([4, 5]) # shape: [4, 5, 3]
-
-# x has one sample, one batch, three classes:
-x = [.2, .3, .5] # shape: [3]
-dist.prob(x) # shape: []
-
-# x has two samples from one batch:
-x = [[.1, .4, .5],
- [.2, .3, .5]]
-dist.prob(x) # shape: [2]
-
-# alpha will be broadcast to shape [5, 7, 3] to match x.
-x = [[...]] # shape: [5, 7, 3]
-dist.prob(x) # shape: [5, 7]
-```
-
-```python
-# Create batch_shape=[2], event_shape=[3]:
-alpha = [[1., 2, 3],
- [4, 5, 6]] # shape: [2, 3]
-dist = Dirichlet(alpha)
-
-dist.sample([4, 5]) # shape: [4, 5, 2, 3]
-
-x = [.2, .3, .5]
-# x will be broadcast as [[.2, .3, .5],
-#                         [.2, .3, .5]],
-# thus matching the `[2, 3]` shape of `alpha` (batch_shape + event_shape).
-dist.prob(x) # shape: [2]
-```
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.__init__(concentration, validate_args=False, allow_nan_stats=True, name='Dirichlet')` {#Dirichlet.__init__}
-
-Initialize a batch of Dirichlet distributions.
-
-##### Args:
-
-
-* <b>`concentration`</b>: Positive floating-point `Tensor` indicating mean number
- of class occurrences; aka "alpha". Implies `self.dtype`, and
- `self.batch_shape`, `self.event_shape`, i.e., if
- `concentration.shape = [N1, N2, ..., Nm, k]` then
- `batch_shape = [N1, N2, ..., Nm]` and
- `event_shape = [k]`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.allow_nan_stats` {#Dirichlet.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.batch_shape` {#Dirichlet.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.batch_shape_tensor(name='batch_shape_tensor')` {#Dirichlet.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
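-For instance (a sketch; concentration values assumed):
-
-```python
-dist = Dirichlet([[1., 2, 3],
-                  [4, 5, 6]])   # batch_shape: [2], event_shape: [3]
-dist.batch_shape           # ==> TensorShape([2])
-dist.batch_shape_tensor()  # ==> 1-D int32 Tensor: [2]
-```
-
-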
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.cdf(value, name='cdf')` {#Dirichlet.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.concentration` {#Dirichlet.concentration}
-
-Concentration parameter; expected counts for each coordinate.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.copy(**override_parameters_kwargs)` {#Dirichlet.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.covariance(name='covariance')` {#Dirichlet.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.dtype` {#Dirichlet.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.entropy(name='entropy')` {#Dirichlet.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.event_shape` {#Dirichlet.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.event_shape_tensor(name='event_shape_tensor')` {#Dirichlet.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.is_continuous` {#Dirichlet.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.is_scalar_batch(name='is_scalar_batch')` {#Dirichlet.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.is_scalar_event(name='is_scalar_event')` {#Dirichlet.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.log_cdf(value, name='log_cdf')` {#Dirichlet.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.log_prob(value, name='log_prob')` {#Dirichlet.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Dirichlet`:
-
-Note: `value` must be a non-negative tensor with
-dtype `self.dtype` and be in the `(self.event_shape() - 1)`-simplex, i.e.,
-`tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with
-`self.batch_shape() + self.event_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
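-For example, evaluating a point on the simplex under a 3-class Dirichlet (a
-minimal sketch; values are arbitrary):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Dirichlet(concentration=[1., 2., 3.])
-x = [0.2, 0.3, 0.5]    # Non-negative entries summing to 1.
-lp = dist.log_prob(x)  # Scalar Tensor; shape [].
-```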
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.log_survival_function(value, name='log_survival_function')` {#Dirichlet.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.mean(name='mean')` {#Dirichlet.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.mode(name='mode')` {#Dirichlet.mode}
-
-Mode.
-
-Additional documentation from `Dirichlet`:
-
-Note: The mode is undefined when any `concentration <= 1`. If
-`self.allow_nan_stats` is `True`, `NaN` is used for undefined modes. If
-`self.allow_nan_stats` is `False` an exception is raised when one or more
-modes are undefined.
-
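-A minimal sketch of the `allow_nan_stats` behavior described above (values
-are arbitrary):
-
-```python
-import tensorflow as tf
-
-# Mode is undefined because one concentration component is <= 1.
-dist = tf.contrib.distributions.Dirichlet(
-    concentration=[0.5, 2., 3.], allow_nan_stats=True)
-dist.mode()  # Contains NaN; the mode is undefined for this batch member.
-# With allow_nan_stats=False, evaluating mode() raises instead.
-```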
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.name` {#Dirichlet.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Dirichlet.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
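-As a hedged illustration, assuming a subclass that overrides `_param_shapes`
-(e.g. `Normal` from this module), the returned `dict` maps constructor
-argument names to shape `Tensor`s:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-shapes = ds.Normal.param_shapes([100, 5])
-# Assumed result: {'loc': <Tensor [100, 5]>, 'scale': <Tensor [100, 5]>}
-```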
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.param_static_shapes(cls, sample_shape)` {#Dirichlet.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.parameters` {#Dirichlet.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.prob(value, name='prob')` {#Dirichlet.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Dirichlet`:
-
-Note: `value` must be a non-negative tensor with
-dtype `self.dtype` and be in the `(self.event_shape() - 1)`-simplex, i.e.,
-`tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with
-`self.batch_shape() + self.event_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.reparameterization_type` {#Dirichlet.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
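-A sketch of the typical guard before relying on pathwise (reparameterization)
-gradients through samples:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-dist = ds.Dirichlet(concentration=[1., 2., 3.])
-if dist.reparameterization_type == ds.FULLY_REPARAMETERIZED:
-  pass  # Gradients may flow through dist.sample().
-else:
-  pass  # Fall back to a score-function (REINFORCE-style) estimator.
-```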
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.sample(sample_shape=(), seed=None, name='sample')` {#Dirichlet.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
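-For example, prepending a `sample_shape` of `[4, 5]` to a 3-class Dirichlet
-(a minimal sketch; values are arbitrary):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Dirichlet(concentration=[1., 2., 3.])
-samples = dist.sample([4, 5], seed=42)  # Shape [4, 5, 3].
-single = dist.sample()                  # Shape [3]; a single sample.
-```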
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.stddev(name='stddev')` {#Dirichlet.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.survival_function(value, name='survival_function')` {#Dirichlet.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.total_concentration` {#Dirichlet.total_concentration}
-
-Sum of last dim of concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.validate_args` {#Dirichlet.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.variance(name='variance')` {#Dirichlet.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.DirichletMultinomial` {#DirichletMultinomial}
-
-Dirichlet-Multinomial compound distribution.
-
-The Dirichlet-Multinomial distribution is parameterized by a (batch of)
-length-`k` `concentration` vector (`k > 1`) and a `total_count` number of
-trials, i.e., the number of trials per draw from the DirichletMultinomial. It
-is defined over a (batch of) length-`k` vector `counts` such that
-`tf.reduce_sum(counts, -1) = total_count`. The Dirichlet-Multinomial is
-identically the Beta-Binomial distribution when `k = 2`.
-
-#### Mathematical Details
-
-The Dirichlet-Multinomial is a distribution over `k`-class counts, i.e., a
-length-`k` vector of non-negative integer `counts = n = [n_0, ..., n_{k-1}]`.
-
-The probability mass function (pmf) is,
-
-```none
-pmf(n; alpha, N) = Beta(alpha + n) / (prod_j n_j!) / Z
-Z = Beta(alpha) / N!
-```
-
-where:
-
-* `concentration = alpha = [alpha_0, ..., alpha_{k-1}]`, `alpha_j > 0`,
-* `total_count = N`, `N` a positive integer,
-* `N!` is `N` factorial, and,
-* `Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j)` is the
- [multivariate beta function](
- https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function),
- and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-Dirichlet-Multinomial is a [compound distribution](
-https://en.wikipedia.org/wiki/Compound_probability_distribution), i.e., its
-samples are generated as follows.
-
- 1. Choose class probabilities:
- `probs = [p_0,...,p_{k-1}] ~ Dir(concentration)`
- 2. Draw integers:
- `counts = [n_0,...,n_{k-1}] ~ Multinomial(total_count, probs)`
-
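-A minimal sketch of this two-stage sampling using this module's `Dirichlet`
-and `Multinomial` (assuming both implement `sample`; values are arbitrary):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-probs = ds.Dirichlet(concentration=[1., 2., 3.]).sample()       # Step 1.
-counts = ds.Multinomial(total_count=10., probs=probs).sample()  # Step 2.
-```
-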
-The last `concentration` dimension parametrizes a single Dirichlet-Multinomial
-distribution. When calling distribution functions (e.g., `dist.prob(counts)`),
-`concentration`, `total_count` and `counts` are broadcast to the same shape.
-The last dimension of `counts` corresponds to a single Dirichlet-Multinomial
-distribution.
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-#### Examples
-
-```python
-alpha = [1, 2, 3]
-n = 2
-dist = DirichletMultinomial(n, alpha)
-```
-
-Creates a 3-class distribution in which the 3rd class is the most likely to be drawn.
-The distribution functions can be evaluated on counts.
-
-```python
-# counts same shape as alpha.
-counts = [0, 0, 2]
-dist.prob(counts) # Shape []
-
-# alpha will be broadcast to [[1, 2, 3], [1, 2, 3]] to match counts.
-counts = [[1, 1, 0], [1, 0, 1]]
-dist.prob(counts) # Shape [2]
-
-# alpha will be broadcast to shape [5, 7, 3] to match counts.
-counts = [[...]] # Shape [5, 7, 3]
-dist.prob(counts) # Shape [5, 7]
-```
-
-Creates a 2-batch of 3-class distributions.
-
-```python
-alpha = [[1, 2, 3], [4, 5, 6]] # Shape [2, 3]
-n = [3, 3]
-dist = DirichletMultinomial(n, alpha)
-
-# counts will be broadcast to [[2, 1, 0], [2, 1, 0]] to match alpha.
-counts = [2, 1, 0]
-dist.prob(counts) # Shape [2]
-```
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.__init__(total_count, concentration, validate_args=False, allow_nan_stats=True, name='DirichletMultinomial')` {#DirichletMultinomial.__init__}
-
-Initialize a batch of DirichletMultinomial distributions.
-
-##### Args:
-
-
-* <b>`total_count`</b>: Non-negative floating point tensor, whose dtype is the same
- as `concentration`. The shape is broadcastable to `[N1,..., Nm]` with
- `m >= 0`. Defines this as a batch of `N1 x ... x Nm` different
- Dirichlet multinomial distributions. Its components should be equal to
- integer values.
-* <b>`concentration`</b>: Positive floating point tensor, whose dtype is the
-  same as `total_count`, with shape broadcastable to `[N1,..., Nm, k]`, `m >= 0`.
- Defines this as a batch of `N1 x ... x Nm` different `k` class Dirichlet
- multinomial distributions.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.allow_nan_stats` {#DirichletMultinomial.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.batch_shape` {#DirichletMultinomial.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.batch_shape_tensor(name='batch_shape_tensor')` {#DirichletMultinomial.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.cdf(value, name='cdf')` {#DirichletMultinomial.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.concentration` {#DirichletMultinomial.concentration}
-
-Concentration parameter; expected prior counts for that coordinate.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.copy(**override_parameters_kwargs)` {#DirichletMultinomial.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.covariance(name='covariance')` {#DirichletMultinomial.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-
-Additional documentation from `DirichletMultinomial`:
-
-The covariance for each batch member is defined as the following:
-
-```none
-Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) *
-(n + alpha_0) / (1 + alpha_0)
-```
-
-where `concentration = alpha` and
-`total_concentration = alpha_0 = sum_j alpha_j`.
-
-The covariance between elements in a batch is defined as:
-
-```none
-Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0 ** 2 *
-(n + alpha_0) / (1 + alpha_0)
-```
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
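-A hedged numeric illustration of the `Var(X_j)` formula above, transcribed
-into plain NumPy (values are arbitrary):
-
-```python
-import numpy as np
-
-alpha = np.array([1., 2., 3.])  # concentration
-n = 10.                         # total_count
-alpha_0 = alpha.sum()           # total_concentration
-
-# Diagonal of the covariance for one batch member.
-var = (n * alpha / alpha_0 * (1. - alpha / alpha_0)
-       * (n + alpha_0) / (1. + alpha_0))
-```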
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.dtype` {#DirichletMultinomial.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.entropy(name='entropy')` {#DirichletMultinomial.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.event_shape` {#DirichletMultinomial.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.event_shape_tensor(name='event_shape_tensor')` {#DirichletMultinomial.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.is_continuous` {#DirichletMultinomial.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.is_scalar_batch(name='is_scalar_batch')` {#DirichletMultinomial.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.is_scalar_event(name='is_scalar_event')` {#DirichletMultinomial.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.log_cdf(value, name='log_cdf')` {#DirichletMultinomial.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.log_prob(value, name='log_prob')` {#DirichletMultinomial.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `DirichletMultinomial`:
-
-For each batch of counts,
-`value = [n_0, ..., n_{k-1}]`, `P[value]` is the probability that after
-sampling `self.total_count` draws from this Dirichlet-Multinomial distribution,
-the number of draws falling in class `j` is `n_j`. Since this definition is
-[exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables),
-different sequences have the same counts, so the probability includes a
-combinatorial coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no
-fractional components, and such that
-`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable
-with `self.concentration` and `self.total_count`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
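-For example, integer-valued counts summing to `total_count` (a minimal
-sketch; values are arbitrary):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.DirichletMultinomial(
-    total_count=4., concentration=[1., 2., 3.])
-counts = [1., 0., 3.]       # Non-negative, integer-valued, sums to 4.
-lp = dist.log_prob(counts)  # Scalar Tensor; shape [].
-```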
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.log_survival_function(value, name='log_survival_function')` {#DirichletMultinomial.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.mean(name='mean')` {#DirichletMultinomial.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.mode(name='mode')` {#DirichletMultinomial.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.name` {#DirichletMultinomial.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#DirichletMultinomial.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.param_static_shapes(cls, sample_shape)` {#DirichletMultinomial.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.parameters` {#DirichletMultinomial.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.prob(value, name='prob')` {#DirichletMultinomial.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `DirichletMultinomial`:
-
-For each batch of counts,
-`value = [n_0, ..., n_{k-1}]`, `P[value]` is the probability that after
-sampling `self.total_count` draws from this Dirichlet-Multinomial distribution,
-the number of draws falling in class `j` is `n_j`. Since this definition is
-[exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables),
-different sequences have the same counts, so the probability includes a
-combinatorial coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no
-fractional components, and such that
-`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable
-with `self.concentration` and `self.total_count`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.reparameterization_type` {#DirichletMultinomial.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.sample(sample_shape=(), seed=None, name='sample')` {#DirichletMultinomial.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.stddev(name='stddev')` {#DirichletMultinomial.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.survival_function(value, name='survival_function')` {#DirichletMultinomial.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.total_concentration` {#DirichletMultinomial.total_concentration}
-
-Sum of last dim of concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.total_count` {#DirichletMultinomial.total_count}
-
-Number of trials used to construct a sample.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.validate_args` {#DirichletMultinomial.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.variance(name='variance')` {#DirichletMultinomial.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Multinomial` {#Multinomial}
-
-Multinomial distribution.
-
-This Multinomial distribution is parameterized by a (batch of) length-`k`
-probability vector `probs` (`k > 1`) such that
-`tf.reduce_sum(probs, -1) = 1`, and a `total_count` number of trials, i.e.,
-the number of trials per draw from the Multinomial. It is defined over a
-(batch of) length-`k` vector `counts` such that
-`tf.reduce_sum(counts, -1) = total_count`. The Multinomial is identically the
-Binomial distribution when `k = 2`.
-
-#### Mathematical Details
-
-The Multinomial is a distribution over `k`-class counts, i.e., a length-`k`
-vector of non-negative integer `counts = n = [n_0, ..., n_{k-1}]`.
-
-The probability mass function (pmf) is,
-
-```none
-pmf(n; pi, N) = prod_j (pi_j)**n_j / Z
-Z = (prod_j n_j!) / N!
-```
-
-where:
-
-* `probs = pi = [pi_0, ..., pi_{k-1}]`, `pi_j > 0`, `sum_j pi_j = 1`,
-* `total_count = N`, `N` a positive integer,
-* `Z` is the normalization constant, and,
-* `N!` denotes `N` factorial.
-
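-A worked instance of this pmf in plain Python (values are arbitrary): for
-`pi = [.2, .3, .5]`, `N = 4`, and `n = [1, 0, 3]`,
-
-```python
-from math import factorial
-
-pi = [.2, .3, .5]
-n = [1, 0, 3]
-N = 4  # == sum(n)
-
-coef = factorial(N)  # N! / prod_j n_j! counts the orderings with these counts.
-for n_j in n:
-  coef //= factorial(n_j)
-
-pmf = float(coef)
-for pi_j, n_j in zip(pi, n):
-  pmf *= pi_j ** n_j
-# pmf == 4 * 0.2 * 0.125 == 0.1
-```
-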
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-#### Examples
-
-Create a 3-class distribution in which the 3rd class is the most likely to be
-drawn, using logits.
-
-```python
-logits = [-50., -43, 0]
-dist = Multinomial(total_count=4., logits=logits)
-```
-
-Create a 3-class distribution in which the 3rd class is the most likely to be drawn.
-
-```python
-p = [.2, .3, .5]
-dist = Multinomial(total_count=4., probs=p)
-```
-
-The distribution functions can be evaluated on counts.
-
-```python
-# counts same shape as p.
-counts = [1., 0, 3]
-dist.prob(counts) # Shape []
-
-# p will be broadcast to [[.2, .3, .5], [.2, .3, .5]] to match counts.
-counts = [[1., 2, 1], [2, 2, 0]]
-dist.prob(counts) # Shape [2]
-
-# p will be broadcast to shape [5, 7, 3] to match counts.
-counts = [[...]] # Shape [5, 7, 3]
-dist.prob(counts) # Shape [5, 7]
-```
-
-Create a 2-batch of 3-class distributions.
-
-```python
-p = [[.1, .2, .7], [.3, .3, .4]] # Shape [2, 3]
-dist = Multinomial(total_count=[4., 5], probs=p)
-
-counts = [[2., 1, 1], [3, 1, 1]]
-dist.prob(counts) # Shape [2]
-```
-- - -
-
-#### `tf.contrib.distributions.Multinomial.__init__(total_count, logits=None, probs=None, validate_args=False, allow_nan_stats=True, name='Multinomial')` {#Multinomial.__init__}
-
-Initialize a batch of Multinomial distributions.
-
-##### Args:
-
-
-* <b>`total_count`</b>: Non-negative floating point tensor with shape broadcastable
- to `[N1,..., Nm]` with `m >= 0`. Defines this as a batch of
- `N1 x ... x Nm` different Multinomial distributions. Its components
- should be equal to integer values.
-* <b>`logits`</b>: Floating point tensor representing the log-odds of a
- positive event with shape broadcastable to `[N1,..., Nm, k], m >= 0`,
- and the same dtype as `total_count`. Defines this as a batch of
- `N1 x ... x Nm` different `k` class Multinomial distributions. Only one
- of `logits` or `probs` should be passed in.
-* <b>`probs`</b>: Positive floating point tensor with shape broadcastable to
- `[N1,..., Nm, k]` `m >= 0` and same dtype as `total_count`. Defines
- this as a batch of `N1 x ... x Nm` different `k` class Multinomial
- distributions. `probs`'s components in the last portion of its shape
- should sum to `1`. Only one of `logits` or `probs` should be passed in.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.allow_nan_stats` {#Multinomial.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.batch_shape` {#Multinomial.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.batch_shape_tensor(name='batch_shape_tensor')` {#Multinomial.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.cdf(value, name='cdf')` {#Multinomial.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.copy(**override_parameters_kwargs)` {#Multinomial.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.covariance(name='covariance')` {#Multinomial.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.dtype` {#Multinomial.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.entropy(name='entropy')` {#Multinomial.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.event_shape` {#Multinomial.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.event_shape_tensor(name='event_shape_tensor')` {#Multinomial.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.is_continuous` {#Multinomial.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.is_scalar_batch(name='is_scalar_batch')` {#Multinomial.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.is_scalar_event(name='is_scalar_event')` {#Multinomial.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.log_cdf(value, name='log_cdf')` {#Multinomial.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.log_prob(value, name='log_prob')` {#Multinomial.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Multinomial`:
-
-For each batch of counts, `value = [n_0, ..., n_{k-1}]`, `P[value]` is the
-probability that after sampling `self.total_count` draws from this Multinomial
-distribution, the number of draws falling in class `j` is `n_j`. Since this
-definition is [exchangeable](
-https://en.wikipedia.org/wiki/Exchangeable_random_variables), different
-sequences have the same counts, so the probability includes a combinatorial
-coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no
-fractional components, and such that
-`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable
-with `self.probs` and `self.total_count`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.log_survival_function(value, name='log_survival_function')` {#Multinomial.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.logits` {#Multinomial.logits}
-
-Vector of coordinatewise logits.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.mean(name='mean')` {#Multinomial.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.mode(name='mode')` {#Multinomial.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.name` {#Multinomial.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Multinomial.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.param_static_shapes(cls, sample_shape)` {#Multinomial.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.parameters` {#Multinomial.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.prob(value, name='prob')` {#Multinomial.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Multinomial`:
-
-For each batch of counts, `value = [n_0, ..., n_{k-1}]`, `P[value]` is the
-probability that after sampling `self.total_count` draws from this Multinomial
-distribution, the number of draws falling in class `j` is `n_j`. Since this
-definition is [exchangeable](
-https://en.wikipedia.org/wiki/Exchangeable_random_variables), different
-sequences have the same counts, so the probability includes a combinatorial
-coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no
-fractional components, and such that
-`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable
-with `self.probs` and `self.total_count`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.probs` {#Multinomial.probs}
-
-Probability of drawing a `1` in that coordinate.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.reparameterization_type` {#Multinomial.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.sample(sample_shape=(), seed=None, name='sample')` {#Multinomial.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.stddev(name='stddev')` {#Multinomial.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.survival_function(value, name='survival_function')` {#Multinomial.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.total_count` {#Multinomial.total_count}
-
-Number of trials used to construct a sample.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.validate_args` {#Multinomial.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.variance(name='variance')` {#Multinomial.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.WishartCholesky` {#WishartCholesky}
-
-The matrix Wishart distribution on positive definite matrices.
-
-This distribution is defined by a scalar degrees-of-freedom parameter `df` and
-a lower-triangular Cholesky factor which characterizes the scale matrix.
-
-Using WishartCholesky is a constant-factor improvement over WishartFull: it
-saves an O(nbk^3) operation, i.e., a matrix-product operation for sampling
-and a Cholesky factorization in log_prob. For most use cases it often saves
-another O(nbk^3) operation, since most uses of Wishart will also use the
-Cholesky factorization.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(X; df, scale) = det(X)**(0.5 (df-k-1)) exp(-0.5 tr[inv(scale) X]) / Z
-Z = 2**(0.5 df k) |det(scale)|**(0.5 df) Gamma_k(0.5 df)
-```
-
-where:
-
-* `df >= k` denotes the degrees of freedom,
-* `scale` is a symmetric, positive definite, `k x k` matrix,
-* `Z` is the normalizing constant, and,
-* `Gamma_k` is the [multivariate Gamma function](
- https://en.wikipedia.org/wiki/Multivariate_gamma_function).
-
-
-#### Examples
-
-```python
-# Initialize a single 3x3 Wishart with Cholesky factored scale matrix and 5
-# degrees-of-freedom.(*)
-df = 5
-chol_scale = tf.cholesky(...) # Shape is [3, 3].
-dist = tf.contrib.distributions.WishartCholesky(df=df, scale=chol_scale)
-
-# Evaluate this on an observation in R^{3x3}, returning a scalar.
-x = ... # A 3x3 positive definite matrix.
-dist.prob(x) # Shape is [], a scalar.
-
-# Evaluate this on two observations, each in R^{3x3}, returning a length-two
-# Tensor.
-x = [x0, x1] # Shape is [2, 3, 3].
-dist.prob(x) # Shape is [2].
-
-# Initialize two 3x3 Wisharts with Cholesky factored scale matrices.
-df = [5, 4]
-chol_scale = tf.cholesky(...) # Shape is [2, 3, 3].
-dist = tf.contrib.distributions.WishartCholesky(df=df, scale=chol_scale)
-
-# Evaluate this on four observations.
-x = [[x0, x1], [x2, x3]] # Shape is [2, 2, 3, 3].
-dist.prob(x) # Shape is [2, 2].
-
-# (*) - To efficiently create a trainable covariance matrix, see the example
-# in tf.contrib.distributions.matrix_diag_transform.
-```
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.__init__(df, scale, cholesky_input_output_matrices=False, validate_args=False, allow_nan_stats=True, name='WishartCholesky')` {#WishartCholesky.__init__}
-
-Construct Wishart distributions.
-
-##### Args:
-
-
-* <b>`df`</b>: `float` or `double` `Tensor`. Degrees of freedom, must be greater than
- or equal to dimension of the scale matrix.
-* <b>`scale`</b>: `float` or `double` `Tensor`. The Cholesky factorization of
- the symmetric positive definite scale matrix of the distribution.
-* <b>`cholesky_input_output_matrices`</b>: Python `bool`. Any function whose
-  input or output is a matrix assumes the input is a Cholesky factor and
-  returns a Cholesky-factored matrix. For example, when
-  `cholesky_input_output_matrices=True`, `log_prob` takes a Cholesky factor
-  as input and `sample_n` returns a Cholesky factor.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
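-A sketch of the flag's effect (a hedged example; the matrices are arbitrary):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-scale = [[2., 0.3], [0.3, 1.]]  # Symmetric positive definite scale matrix.
-dist = ds.WishartCholesky(
-    df=3., scale=tf.cholesky(scale), cholesky_input_output_matrices=True)
-
-x = [[1., 0.1], [0.1, 1.]]     # A positive definite observation.
-dist.log_prob(tf.cholesky(x))  # Input is a Cholesky factor.
-dist.sample()                  # Output is a Cholesky factor.
-```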
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.allow_nan_stats` {#WishartCholesky.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.batch_shape` {#WishartCholesky.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.batch_shape_tensor(name='batch_shape_tensor')` {#WishartCholesky.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.cdf(value, name='cdf')` {#WishartCholesky.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.cholesky_input_output_matrices` {#WishartCholesky.cholesky_input_output_matrices}
-
-Boolean indicating if `Tensor` input/outputs are Cholesky factorized.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.copy(**override_parameters_kwargs)` {#WishartCholesky.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.covariance(name='covariance')` {#WishartCholesky.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.df` {#WishartCholesky.df}
-
-Wishart distribution degree(s) of freedom.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.dimension` {#WishartCholesky.dimension}
-
-Dimension of underlying vector space. The `p` in `R^(p*p)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.dtype` {#WishartCholesky.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.entropy(name='entropy')` {#WishartCholesky.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.event_shape` {#WishartCholesky.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.event_shape_tensor(name='event_shape_tensor')` {#WishartCholesky.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.is_continuous` {#WishartCholesky.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.is_scalar_batch(name='is_scalar_batch')` {#WishartCholesky.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.is_scalar_event(name='is_scalar_event')` {#WishartCholesky.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.log_cdf(value, name='log_cdf')` {#WishartCholesky.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.log_normalization(name='log_normalization')` {#WishartCholesky.log_normalization}
-
-Computes the log normalizing constant, log(Z).
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.log_prob(value, name='log_prob')` {#WishartCholesky.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.log_survival_function(value, name='log_survival_function')` {#WishartCholesky.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.mean(name='mean')` {#WishartCholesky.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.mean_log_det(name='mean_log_det')` {#WishartCholesky.mean_log_det}
-
-Computes E[log(det(X))] under this Wishart distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.mode(name='mode')` {#WishartCholesky.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.name` {#WishartCholesky.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#WishartCholesky.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
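-A hedged sketch using `Normal`, whose parameters are `loc` and `scale`
-(`param_shapes` is inherited from `Distribution` and may not be implemented
-for every subclass):
-
-```python
-ds = tf.contrib.distributions
-shapes = ds.Normal.param_shapes([100])
-# A dict of `Tensor` shapes, conceptually {'loc': [100], 'scale': [100]}:
-# scalar parameters batched to [100] make `sample()` return shape [100].
-```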
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.param_static_shapes(cls, sample_shape)` {#WishartCholesky.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.parameters` {#WishartCholesky.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.prob(value, name='prob')` {#WishartCholesky.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.reparameterization_type` {#WishartCholesky.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
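-A minimal sketch of checking the flag, using `Normal` for concreteness:
-
-```python
-ds = tf.contrib.distributions
-dist = ds.Normal(loc=0., scale=1.)
-# True when gradients can flow through samples w.r.t. parameters.
-can_backprop = dist.reparameterization_type == ds.FULLY_REPARAMETERIZED
-```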
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.sample(sample_shape=(), seed=None, name='sample')` {#WishartCholesky.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
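-A hedged sketch of the shape contract; `chol_scale` is a hypothetical
-3 x 3 Cholesky factor:
-
-```python
-ds = tf.contrib.distributions
-dist = ds.WishartCholesky(df=5., scale=chol_scale)
-samples = dist.sample([4, 2], seed=42)
-# sample_shape [4, 2] is prepended to batch and event shape:
-# `samples` has shape [4, 2, 3, 3].
-```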
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.scale()` {#WishartCholesky.scale}
-
-Wishart distribution scale matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.scale_operator_pd` {#WishartCholesky.scale_operator_pd}
-
-Wishart distribution scale matrix as an OperatorPD.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.stddev(name='stddev')` {#WishartCholesky.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.survival_function(value, name='survival_function')` {#WishartCholesky.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.validate_args` {#WishartCholesky.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.variance(name='variance')` {#WishartCholesky.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.WishartFull` {#WishartFull}
-
-The matrix Wishart distribution on positive definite matrices.
-
-This distribution is defined by a scalar degrees of freedom `df` and a
-symmetric, positive definite scale matrix.
-
-Sampling and evaluation of the pdf and determinant are all `O(k^3)`
-operations, where `(k, k)` is the event space shape.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(X; df, scale) = det(X)**(0.5 (df-k-1)) exp(-0.5 tr[inv(scale) X]) / Z
-Z = 2**(0.5 df k) |det(scale)|**(0.5 df) Gamma_k(0.5 df)
-```
-
-where:
-* `df >= k` denotes the degrees of freedom,
-* `scale` is a symmetric, positive definite, `k x k` matrix,
-* `Z` is the normalizing constant, and,
-* `Gamma_k` is the [multivariate Gamma function](
- https://en.wikipedia.org/wiki/Multivariate_gamma_function).
-
-#### Examples
-
-```python
-# Initialize a single 3x3 Wishart with a full (symmetric positive definite)
-# scale matrix and 5 degrees of freedom.(*)
-df = 5
-scale = ... # Shape is [3, 3]; positive definite.
-dist = tf.contrib.distributions.WishartFull(df=df, scale=scale)
-
-# Evaluate this on an observation in R^{3x3}, returning a scalar.
-x = ... # A 3x3 positive definite matrix.
-dist.prob(x) # Shape is [], a scalar.
-
-# Evaluate this on two observations, each in R^{3x3}, returning a length-two
-# Tensor.
-x = [x0, x1] # Shape is [2, 3, 3].
-dist.prob(x) # Shape is [2].
-
-# Initialize two 3x3 Wisharts with full scale matrices.
-df = [5, 4]
-scale = ... # Shape is [2, 3, 3].
-dist = tf.contrib.distributions.WishartFull(df=df, scale=scale)
-
-# Evaluate this on four observations.
-x = [[x0, x1], [x2, x3]] # Shape is [2, 2, 3, 3]; xi is positive definite.
-dist.prob(x) # Shape is [2, 2].
-
-# (*) - To efficiently create a trainable covariance matrix, see the example
-# in tf.contrib.distributions.matrix_diag_transform.
-```
-- - -
-
-#### `tf.contrib.distributions.WishartFull.__init__(df, scale, cholesky_input_output_matrices=False, validate_args=False, allow_nan_stats=True, name='WishartFull')` {#WishartFull.__init__}
-
-Construct Wishart distributions.
-
-##### Args:
-
-
-* <b>`df`</b>: `float` or `double` `Tensor`. Degrees of freedom, must be greater than
- or equal to dimension of the scale matrix.
-* <b>`scale`</b>: `float` or `double` `Tensor`. The symmetric positive definite
- scale matrix of the distribution.
-* <b>`cholesky_input_output_matrices`</b>: Python `bool`. Any function whose
-  input or output is a matrix assumes the input is Cholesky factored and
-  returns a Cholesky factored matrix. For example, `log_prob` takes a
-  Cholesky-factored input and `sample_n` returns a Cholesky factor when
-  `cholesky_input_output_matrices=True`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
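-A minimal sketch of the `cholesky_input_output_matrices` flag; `scale` and
-`x` are hypothetical 3 x 3 positive definite matrices:
-
-```python
-ds = tf.contrib.distributions
-dist = ds.WishartFull(
-    df=5., scale=scale, cholesky_input_output_matrices=True)
-# With the flag set, matrix-valued inputs/outputs are Cholesky factors.
-chol_x = tf.cholesky(x)
-dist.log_prob(chol_x)  # Expects the Cholesky factor of `x`.
-dist.sample(seed=42)   # Returns a Cholesky factor, not the full matrix.
-```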
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.allow_nan_stats` {#WishartFull.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.batch_shape` {#WishartFull.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.batch_shape_tensor(name='batch_shape_tensor')` {#WishartFull.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.cdf(value, name='cdf')` {#WishartFull.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.cholesky_input_output_matrices` {#WishartFull.cholesky_input_output_matrices}
-
-Boolean indicating if `Tensor` input/outputs are Cholesky factorized.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.copy(**override_parameters_kwargs)` {#WishartFull.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.covariance(name='covariance')` {#WishartFull.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.df` {#WishartFull.df}
-
-Wishart distribution degree(s) of freedom.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.dimension` {#WishartFull.dimension}
-
-Dimension of underlying vector space. The `p` in `R^(p*p)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.dtype` {#WishartFull.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.entropy(name='entropy')` {#WishartFull.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.event_shape` {#WishartFull.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.event_shape_tensor(name='event_shape_tensor')` {#WishartFull.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.is_continuous` {#WishartFull.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.is_scalar_batch(name='is_scalar_batch')` {#WishartFull.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.is_scalar_event(name='is_scalar_event')` {#WishartFull.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.log_cdf(value, name='log_cdf')` {#WishartFull.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.log_normalization(name='log_normalization')` {#WishartFull.log_normalization}
-
-Computes the log normalizing constant, log(Z).
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.log_prob(value, name='log_prob')` {#WishartFull.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.log_survival_function(value, name='log_survival_function')` {#WishartFull.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.mean(name='mean')` {#WishartFull.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.mean_log_det(name='mean_log_det')` {#WishartFull.mean_log_det}
-
-Computes E[log(det(X))] under this Wishart distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.mode(name='mode')` {#WishartFull.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.name` {#WishartFull.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#WishartFull.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.param_static_shapes(cls, sample_shape)` {#WishartFull.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.parameters` {#WishartFull.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.prob(value, name='prob')` {#WishartFull.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.reparameterization_type` {#WishartFull.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.sample(sample_shape=(), seed=None, name='sample')` {#WishartFull.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.scale()` {#WishartFull.scale}
-
-Wishart distribution scale matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.scale_operator_pd` {#WishartFull.scale_operator_pd}
-
-Wishart distribution scale matrix as an OperatorPD.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.stddev(name='stddev')` {#WishartFull.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.survival_function(value, name='survival_function')` {#WishartFull.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.validate_args` {#WishartFull.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.variance(name='variance')` {#WishartFull.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-
-- - -
-
-### `tf.contrib.distributions.matrix_diag_transform(matrix, transform=None, name=None)` {#matrix_diag_transform}
-
-Transform diagonal of [batch-]matrix, leave rest of matrix unchanged.
-
-Create a trainable covariance defined by a Cholesky factor:
-
-```python
-# Transform network layer into 2 x 2 array.
-matrix_values = tf.contrib.layers.fully_connected(activations, 4)
-matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
-
-# Make the diagonal positive. If the upper triangle were zero, this would be a
-# valid Cholesky factor.
-chol = matrix_diag_transform(matrix, transform=tf.nn.softplus)
-
-# OperatorPDCholesky ignores the upper triangle.
-operator = OperatorPDCholesky(chol)
-```
-
-Example of heteroskedastic 2-D linear regression.
-
-```python
-# Get a trainable Cholesky factor.
-matrix_values = tf.contrib.layers.fully_connected(activations, 4)
-matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
-chol = matrix_diag_transform(matrix, transform=tf.nn.softplus)
-
-# Get a trainable mean.
-mu = tf.contrib.layers.fully_connected(activations, 2)
-
-# This is a fully trainable multivariate normal!
-dist = tf.contrib.distributions.MultivariateNormalTriL(loc=mu, scale_tril=chol)
-
-# Standard log loss. Minimizing this will "train" mu and chol, and then dist
-# will be a distribution predicting labels as multivariate Gaussians.
-loss = -1 * tf.reduce_mean(dist.log_prob(labels))
-```
-
-##### Args:
-
-
-* <b>`matrix`</b>: Rank `R` `Tensor`, `R >= 2`, where the last two dimensions are
- equal.
-* <b>`transform`</b>: Element-wise function mapping `Tensors` to `Tensors`. To
- be applied to the diagonal of `matrix`. If `None`, `matrix` is returned
- unchanged. Defaults to `None`.
-* <b>`name`</b>: A name to give created ops.
- Defaults to "matrix_diag_transform".
-
-##### Returns:
-
- A `Tensor` with same shape and `dtype` as `matrix`.
-
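-A minimal sketch of the `transform` argument:
-
-```python
-m = tf.constant([[1., 2.],
-                 [3., 4.]])
-# `transform=None` returns `m` unchanged.
-same = tf.contrib.distributions.matrix_diag_transform(m, transform=None)
-# `tf.exp` is applied only to the diagonal [1., 4.]; off-diagonals pass
-# through untouched.
-exp_diag = tf.contrib.distributions.matrix_diag_transform(m, transform=tf.exp)
-```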
-
-
-- - -
-
-### `class tf.contrib.distributions.TransformedDistribution` {#TransformedDistribution}
-
-A Transformed Distribution.
-
-A `TransformedDistribution` models `p(y)` given a base distribution `p(x)`,
-and a deterministic, invertible, differentiable transform, `Y = g(X)`. The
-transform is typically an instance of the `Bijector` class and the base
-distribution is typically an instance of the `Distribution` class.
-
-A `Bijector` is expected to implement the following functions:
-- `forward`,
-- `inverse`,
-- `inverse_log_det_jacobian`.
-The semantics of these functions are outlined in the `Bijector` documentation.
-
-We now describe how a `TransformedDistribution` alters the input/outputs of a
-`Distribution` associated with a random variable (rv) `X`.
-
-Write `cdf(Y=y)` for an absolutely continuous cumulative distribution function
-of random variable `Y`; write the probability density function `pdf(Y=y) :=
-d^k / (dy_1,...,dy_k) cdf(Y=y)` for its derivative with respect to `Y`,
-evaluated at `y`. Assume that `Y = g(X)` where `g` is a deterministic diffeomorphism,
-i.e., a non-random, continuous, differentiable, and invertible function.
-Write the inverse of `g` as `X = g^{-1}(Y)` and `(J o g)(x)` for the Jacobian
-of `g` evaluated at `x`.
-
-A `TransformedDistribution` implements the following operations:
-
- * `sample`:
-
- Mathematically:
-
- ```none
- Y = g(X)
- ```
-
- Programmatically:
-
- ```python
- return bijector.forward(distribution.sample(...))
- ```
-
- * `log_prob`:
-
- Mathematically:
-
- ```none
- (log o pdf)(Y=y) = (log o pdf o g^{-1})(y) +
- (log o abs o det o J o g^{-1})(y)
- ```
-
- Programmatically:
-
- ```python
- return (distribution.log_prob(bijector.inverse(y)) +
- bijector.inverse_log_det_jacobian(y))
- ```
-
- * `log_cdf`:
-
- Mathematically:
-
- ```none
- (log o cdf)(Y=y) = (log o cdf o g^{-1})(y)
- ```
-
- Programmatically:
-
- ```python
- return distribution.log_cdf(bijector.inverse(y))
- ```
-
- * and similarly for: `cdf`, `prob`, `log_survival_function`,
- `survival_function`.
-
-A simple example constructing a Log-Normal distribution from a Normal
-distribution:
-
-```python
-ds = tf.contrib.distributions
-log_normal = ds.TransformedDistribution(
- distribution=ds.Normal(loc=mu, scale=sigma),
- bijector=ds.bijector.Exp(),
- name="LogNormalTransformedDistribution")
-```
-
-A `LogNormal` made from callables:
-
-```python
-ds = tf.contrib.distributions
-log_normal = ds.TransformedDistribution(
- distribution=ds.Normal(loc=mu, scale=sigma),
- bijector=ds.bijector.Inline(
- forward_fn=tf.exp,
- inverse_fn=tf.log,
- inverse_log_det_jacobian_fn=(
- lambda y: -tf.reduce_sum(tf.log(y), axis=-1))),
- name="LogNormalTransformedDistribution")
-```
-
-Another example constructing a Normal from a StandardNormal:
-
-```python
-ds = tf.contrib.distributions
-normal = ds.TransformedDistribution(
- distribution=ds.Normal(loc=0, scale=1),
- bijector=ds.bijector.ScaleAndShift(loc=mu, scale=sigma, event_ndims=0),
- name="NormalTransformedDistribution")
-```
-
-A `TransformedDistribution`'s batch- and event-shape are implied by the base
-distribution unless explicitly overridden by `batch_shape` or `event_shape`
-arguments. Specifying an overriding `batch_shape` (`event_shape`) is
-permitted only if the base distribution has scalar batch-shape (event-shape).
-The bijector is applied to the distribution as if the distribution possessed
-the overridden shape(s). The following example demonstrates how to construct a
-multivariate Normal as a `TransformedDistribution`.
-
-```python
-bs = tf.contrib.distributions.bijector
-ds = tf.contrib.distributions
-# We will create two MVNs with batch_shape = event_shape = 2.
-mean = [[-1., 0], # batch:0
- [0., 1]] # batch:1
-chol_cov = [[[1., 0],
- [0, 1]], # batch:0
- [[1, 0],
- [2, 2]]] # batch:1
-mvn1 = ds.TransformedDistribution(
- distribution=ds.Normal(loc=0., scale=1.),
- bijector=bs.Affine(shift=mean, tril=chol_cov),
- batch_shape=[2], # Valid because base_distribution.batch_shape == [].
- event_shape=[2]) # Valid because base_distribution.event_shape == [].
-mvn2 = ds.MultivariateNormalTriL(loc=mean, scale_tril=chol_cov)
-# mvn1.log_prob(x) == mvn2.log_prob(x)
-```
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.__init__(distribution, bijector=None, batch_shape=None, event_shape=None, validate_args=False, name=None)` {#TransformedDistribution.__init__}
-
-Construct a Transformed Distribution.
-
-##### Args:
-
-
-* <b>`distribution`</b>: The base distribution instance to transform. Typically an
- instance of `Distribution`.
-* <b>`bijector`</b>: The object responsible for calculating the transformation.
- Typically an instance of `Bijector`. `None` means `Identity()`.
-* <b>`batch_shape`</b>: `integer` vector `Tensor` which overrides `distribution`
- `batch_shape`; valid only if `distribution.is_scalar_batch()`.
-* <b>`event_shape`</b>: `integer` vector `Tensor` which overrides `distribution`
- `event_shape`; valid only if `distribution.is_scalar_event()`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class. Default:
- `bijector.name + distribution.name`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.allow_nan_stats` {#TransformedDistribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.batch_shape` {#TransformedDistribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.batch_shape_tensor(name='batch_shape_tensor')` {#TransformedDistribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.bijector` {#TransformedDistribution.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.cdf(value, name='cdf')` {#TransformedDistribution.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.copy(**override_parameters_kwargs)` {#TransformedDistribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.covariance(name='covariance')` {#TransformedDistribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.distribution` {#TransformedDistribution.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.dtype` {#TransformedDistribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.entropy(name='entropy')` {#TransformedDistribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.event_shape` {#TransformedDistribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.event_shape_tensor(name='event_shape_tensor')` {#TransformedDistribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.is_continuous` {#TransformedDistribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.is_scalar_batch(name='is_scalar_batch')` {#TransformedDistribution.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.is_scalar_event(name='is_scalar_event')` {#TransformedDistribution.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.log_cdf(value, name='log_cdf')` {#TransformedDistribution.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.log_prob(value, name='log_prob')` {#TransformedDistribution.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.log_survival_function(value, name='log_survival_function')` {#TransformedDistribution.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.mean(name='mean')` {#TransformedDistribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.mode(name='mode')` {#TransformedDistribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.name` {#TransformedDistribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#TransformedDistribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.param_static_shapes(cls, sample_shape)` {#TransformedDistribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.parameters` {#TransformedDistribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.prob(value, name='prob')` {#TransformedDistribution.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.reparameterization_type` {#TransformedDistribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.sample(sample_shape=(), seed=None, name='sample')` {#TransformedDistribution.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.stddev(name='stddev')` {#TransformedDistribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.survival_function(value, name='survival_function')` {#TransformedDistribution.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.validate_args` {#TransformedDistribution.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.variance(name='variance')` {#TransformedDistribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.QuantizedDistribution` {#QuantizedDistribution}
-
-Distribution representing the quantization `Y = ceiling(X)`.
-
-#### Definition in terms of sampling.
-
-```
-1. Draw X
-2. Set Y <-- ceiling(X)
-3. If Y < low, reset Y <-- low
-4. If Y > high, reset Y <-- high
-5. Return Y
-```
-
-#### Definition in terms of the probability mass function.
-
-Given scalar random variable `X`, we define a discrete random variable `Y`
-supported on the integers as follows:
-
-```
-P[Y = j] := P[X <= low], if j == low,
-         := P[X > high - 1], if j == high,
-         := 0, if j < low or j > high,
-         := P[j - 1 < X <= j], all other j.
-```
-
-Conceptually, without cutoffs, the quantization process partitions the real
-line `R` into half open intervals, and identifies an integer `j` with the
-right endpoints:
-
-```
-R = ... (-2, -1](-1, 0](0, 1](1, 2](2, 3](3, 4] ...
-j = ... -1 0 1 2 3 4 ...
-```
-
-`P[Y = j]` is the mass of `X` within the `jth` interval.
-If `low = 0`, and `high = 2`, then the intervals are redrawn
-and `j` is re-assigned:
-
-```
-R = (-infty, 0](0, 1](1, infty)
-j = 0 1 2
-```
-
-`P[Y = j]` is still the mass of `X` within the `jth` interval.
-
-#### Caveats
-
-Since evaluation of each `P[Y = j]` involves a cdf evaluation (rather than
-a closed form function such as for a Poisson), computations such as mean and
-entropy are better done with samples or approximations, and are not
-implemented by this class.
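-
-A minimal sketch, quantizing a `Normal` onto the integers `{0, ..., 10}`:
-
-```python
-ds = tf.contrib.distributions
-q = ds.QuantizedDistribution(
-    distribution=ds.Normal(loc=5., scale=2.),
-    low=0., high=10.)
-# Mass of `Y = ceiling(X)` (clipped to [low, high]) at whole numbers.
-pmf = q.prob([3., 4., 5.])
-```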
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.__init__(distribution, low=None, high=None, validate_args=False, name='QuantizedDistribution')` {#QuantizedDistribution.__init__}
-
-Construct a Quantized Distribution representing `Y = ceiling(X)`.
-
-Some properties are inherited from the distribution defining `X`. Example:
-`allow_nan_stats` is determined for this `QuantizedDistribution` by reading
-the `distribution`.
-
-##### Args:
-
-
-* <b>`distribution`</b>: The base distribution instance to transform. Typically an
-  instance of `Distribution`.
-* <b>`low`</b>: `Tensor` with same `dtype` as this distribution and shape
- able to be added to samples. Should be a whole number. Default `None`.
- If provided, base distribution's `prob` should be defined at
- `low`.
-* <b>`high`</b>: `Tensor` with same `dtype` as this distribution and shape
- able to be added to samples. Should be a whole number. Default `None`.
- If provided, base distribution's `prob` should be defined at
- `high - 1`.
- `high` must be strictly greater than `low`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `distribution` is not an instance of
-    `Distribution`, or if it is not continuous.
-* <b>`NotImplementedError`</b>: If the base distribution does not implement `cdf`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.allow_nan_stats` {#QuantizedDistribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.batch_shape` {#QuantizedDistribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.batch_shape_tensor(name='batch_shape_tensor')` {#QuantizedDistribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.cdf(value, name='cdf')` {#QuantizedDistribution.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-cdf(y) := P[Y <= y]
- = 1, if y >= high,
- = 0, if y < low,
- = P[X <= y], otherwise.
-```
-
-Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
-This dictates that fractional `y` are first floored to a whole number, and
-then the above definition applies.
-
-The base distribution's `cdf` method must be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.copy(**override_parameters_kwargs)` {#QuantizedDistribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.covariance(name='covariance')` {#QuantizedDistribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.distribution` {#QuantizedDistribution.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.dtype` {#QuantizedDistribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.entropy(name='entropy')` {#QuantizedDistribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.event_shape` {#QuantizedDistribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.event_shape_tensor(name='event_shape_tensor')` {#QuantizedDistribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.is_continuous` {#QuantizedDistribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.is_scalar_batch(name='is_scalar_batch')` {#QuantizedDistribution.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.is_scalar_event(name='is_scalar_event')` {#QuantizedDistribution.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.log_cdf(value, name='log_cdf')` {#QuantizedDistribution.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-cdf(y) := P[Y <= y]
- = 1, if y >= high,
- = 0, if y < low,
- = P[X <= y], otherwise.
-```
-
-Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
-This dictates that fractional `y` are first floored to a whole number, and
-then the above definition applies.
-
-The base distribution's `log_cdf` method must be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.log_prob(value, name='log_prob')` {#QuantizedDistribution.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-P[Y = y] := P[X <= low], if y == low,
-         := P[X > high - 1], if y == high,
-         := 0, if y < low or y > high,
-         := P[y - 1 < X <= y], all other y.
-```
-
-
-The base distribution's `log_cdf` method must be defined on `y - 1`. If the
-base distribution has a `log_survival_function` method, results will be more
-accurate for large values of `y`, and in this case the `log_survival_function`
-must also be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
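-As a concrete illustration of the mass function above, here is a minimal
-sketch (assuming the TF 1.x contrib API and the `low`/`high` constructor
-arguments from the notes):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# Y is a Normal X rounded up to whole numbers and cut off at [0, 10].
-qdist = ds.QuantizedDistribution(
-    distribution=ds.Normal(loc=5., scale=2.),
-    low=0.,
-    high=10.)
-
-# For an interior whole number y, P[Y = y] = P[y - 1 < X <= y],
-# i.e. cdf(y) - cdf(y - 1) of the base Normal.
-log_mass = qdist.log_prob(4.)
-```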
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.log_survival_function(value, name='log_survival_function')` {#QuantizedDistribution.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
-                         = Log[ 1 - P[X <= x] ]
-                         = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-survival_function(y) := P[Y > y]
-                      = 0, if y >= high,
-                      = 1, if y < low,
-                      = P[X > y], otherwise.
-```
-
-Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
-This dictates that fractional `y` are first floored to a whole number, and
-then the above definition applies.
-
-The base distribution's `log_cdf` method must be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.mean(name='mean')` {#QuantizedDistribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.mode(name='mode')` {#QuantizedDistribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.name` {#QuantizedDistribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#QuantizedDistribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.param_static_shapes(cls, sample_shape)` {#QuantizedDistribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.parameters` {#QuantizedDistribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.prob(value, name='prob')` {#QuantizedDistribution.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-P[Y = y] := P[X <= low], if y == low,
-         := P[X > high - 1], if y == high,
-         := 0, if y < low or y > high,
-         := P[y - 1 < X <= y], all other y.
-```
-
-
-The base distribution's `cdf` method must be defined on `y - 1`. If the
-base distribution has a `survival_function` method, results will be more
-accurate for large values of `y`, and in this case the `survival_function` must
-also be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.reparameterization_type` {#QuantizedDistribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.sample(sample_shape=(), seed=None, name='sample')` {#QuantizedDistribution.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.stddev(name='stddev')` {#QuantizedDistribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.survival_function(value, name='survival_function')` {#QuantizedDistribution.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
-                     = 1 - P[X <= x]
-                     = 1 - cdf(x).
-```
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-survival_function(y) := P[Y > y]
-                      = 0, if y >= high,
-                      = 1, if y < low,
-                      = P[X > y], otherwise.
-```
-
-Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
-This dictates that fractional `y` are first floored to a whole number, and
-then the above definition applies.
-
-The base distribution's `cdf` method must be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.validate_args` {#QuantizedDistribution.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.variance(name='variance')` {#QuantizedDistribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-
-- - -
-
-### `class tf.contrib.distributions.Mixture` {#Mixture}
-
-Mixture distribution.
-
-The `Mixture` object implements batched mixture distributions.
-The mixture model is defined by a `Categorical` distribution (the mixture)
-and a python list of `Distribution` objects.
-
-Methods supported include `log_prob`, `prob`, `mean`, `sample`, and
-`entropy_lower_bound`.
-- - -
-
-#### `tf.contrib.distributions.Mixture.__init__(cat, components, validate_args=False, allow_nan_stats=True, name='Mixture')` {#Mixture.__init__}
-
-Initialize a Mixture distribution.
-
-A `Mixture` is defined by a `Categorical` (`cat`, representing the
-mixture probabilities) and a list of `Distribution` objects
-all having matching dtype, batch shape, event shape, and continuity
-properties (the components).
-
-The `num_classes` of `cat` must be inferable at graph construction time and
-must match `len(components)`.
-
-##### Args:
-
-
-* <b>`cat`</b>: A `Categorical` distribution instance, representing the probabilities
- of `distributions`.
-* <b>`components`</b>: A list or tuple of `Distribution` instances.
- Each instance must have the same type, be defined on the same domain,
- and have matching `event_shape` and `batch_shape`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. If `True`, raise a runtime
- error if batch or event ranks are inconsistent between cat and any of
- the distributions. This is only checked if the ranks cannot be
- determined statically at graph construction time.
-* <b>`allow_nan_stats`</b>: Boolean, default `True`. If `False`, raise an
- exception if a statistic (e.g. mean/mode/etc...) is undefined for any
- batch member. If `True`, batch members with valid parameters leading to
- undefined statistics will return NaN for this statistic.
-* <b>`name`</b>: A name for this distribution (optional).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If cat is not a `Categorical`, or `components` is not
- a list or tuple, or the elements of `components` are not
- instances of `Distribution`, or do not have matching `dtype`.
-* <b>`ValueError`</b>: If `components` is an empty list or tuple, or its
- elements do not have a statically known event rank.
- If `cat.num_classes` cannot be inferred at graph creation time,
- or the constant value of `cat.num_classes` is not equal to
- `len(components)`, or all `components` and `cat` do not have
- matching static batch shapes, or all components do not
- have matching static event shapes.
-
-
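-A minimal construction sketch (assuming the contrib API documented here):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# Two-component scalar Gaussian mixture with weights [0.3, 0.7].
-gm = ds.Mixture(
-    cat=ds.Categorical(probs=[0.3, 0.7]),
-    components=[
-        ds.Normal(loc=-1., scale=0.1),
-        ds.Normal(loc=1., scale=0.5)])
-
-x = gm.sample(5)      # Five scalar draws from the mixture.
-lp = gm.log_prob(0.)  # log(0.3 * p_0(0) + 0.7 * p_1(0)).
-```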
-- - -
-
-#### `tf.contrib.distributions.Mixture.allow_nan_stats` {#Mixture.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.batch_shape` {#Mixture.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.batch_shape_tensor(name='batch_shape_tensor')` {#Mixture.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.cat` {#Mixture.cat}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.cdf(value, name='cdf')` {#Mixture.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.components` {#Mixture.components}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.copy(**override_parameters_kwargs)` {#Mixture.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.covariance(name='covariance')` {#Mixture.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.dtype` {#Mixture.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.entropy(name='entropy')` {#Mixture.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.entropy_lower_bound(name='entropy_lower_bound')` {#Mixture.entropy_lower_bound}
-
-A lower bound on the entropy of this mixture model.
-
-The bound below is not always very tight, and its usefulness depends
-on the mixture probabilities and the components in use.
-
-A lower bound is useful for ELBO when the `Mixture` is the variational
-distribution:
-
-\\(
-\log p(x) \geq ELBO = \int q(z) \log p(x, z) dz + H[q]
-\\)
-
-where \\( p \\) is the prior distribution, \\( q \\) is the variational,
-and \\( H[q] \\) is the entropy of \\( q \\). If there is a lower bound
-\\( G[q] \\) such that \\( H[q] \geq G[q] \\) then it can be used in
-place of \\( H[q] \\).
-
-For a mixture of distributions \\( q(Z) = \sum_i c_i q_i(Z) \\) with
-\\( \sum_i c_i = 1 \\), by the concavity of \\( f(x) = -x \log x \\), a
-simple lower bound is:
-
-\\(
-\begin{align}
-H[q] & = - \int q(z) \log q(z) dz \\\
- & = - \int (\sum_i c_i q_i(z)) \log(\sum_i c_i q_i(z)) dz \\\
- & \geq - \sum_i c_i \int q_i(z) \log q_i(z) dz \\\
- & = \sum_i c_i H[q_i]
-\end{align}
-\\)
-
-This is the term we calculate below for \\( G[q] \\).
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A lower bound on the Mixture's entropy.
-
-
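-The bound \\( G[q] = \sum_i c_i H[q_i] \\) can also be assembled by hand,
-which makes a useful sanity check. A minimal sketch (assuming the contrib
-API documented here):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-probs = tf.constant([0.3, 0.7])
-components = [ds.Normal(loc=-1., scale=1.), ds.Normal(loc=1., scale=2.)]
-mix = ds.Mixture(cat=ds.Categorical(probs=probs), components=components)
-
-# G[q] = sum_i c_i H[q_i], built manually from the component entropies...
-manual_bound = tf.reduce_sum(
-    probs * tf.stack([c.entropy() for c in components]))
-# ...should agree with the built-in lower bound.
-bound = mix.entropy_lower_bound()
-```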
-- - -
-
-#### `tf.contrib.distributions.Mixture.event_shape` {#Mixture.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.event_shape_tensor(name='event_shape_tensor')` {#Mixture.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.is_continuous` {#Mixture.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.is_scalar_batch(name='is_scalar_batch')` {#Mixture.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.is_scalar_event(name='is_scalar_event')` {#Mixture.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.log_cdf(value, name='log_cdf')` {#Mixture.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.log_prob(value, name='log_prob')` {#Mixture.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.log_survival_function(value, name='log_survival_function')` {#Mixture.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
-                         = Log[ 1 - P[X <= x] ]
-                         = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.mean(name='mean')` {#Mixture.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.mode(name='mode')` {#Mixture.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.name` {#Mixture.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.num_components` {#Mixture.num_components}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Mixture.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.param_static_shapes(cls, sample_shape)` {#Mixture.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.parameters` {#Mixture.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.prob(value, name='prob')` {#Mixture.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.reparameterization_type` {#Mixture.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.sample(sample_shape=(), seed=None, name='sample')` {#Mixture.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.stddev(name='stddev')` {#Mixture.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.survival_function(value, name='survival_function')` {#Mixture.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
-                     = 1 - P[X <= x]
-                     = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.validate_args` {#Mixture.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.variance(name='variance')` {#Mixture.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-
-- - -
-
-### `tf.contrib.distributions.normal_conjugates_known_scale_posterior(prior, scale, s, n)` {#normal_conjugates_known_scale_posterior}
-
-Posterior Normal distribution with conjugate prior on the mean.
-
-This model assumes that `n` observations (with sum `s`) come from a
-Normal with unknown mean `loc` (described by the Normal `prior`)
-and known variance `scale**2`. The "known scale posterior" is
-the distribution of the unknown `loc`.
-
-Accepts a prior Normal distribution object, having parameters
-`loc0` and `scale0`, as well as known `scale` values of the predictive
-distribution(s) (also assumed Normal),
-and statistical estimates `s` (the sum(s) of the observations) and
-`n` (the number(s) of observations).
-
-Returns a posterior (also Normal) distribution object, with parameters
-`(loc', scale'**2)`, where (writing `mu` for `loc` and `sigma` for `scale`):
-
-```
-mu ~ N(mu', sigma'**2)
-sigma'**2 = 1/(1/sigma0**2 + n/sigma**2),
-mu' = (mu0/sigma0**2 + s/sigma**2) * sigma'**2.
-```
-
-Distribution parameters from `prior`, as well as `scale`, `s`, and `n`,
-will broadcast in the case of multidimensional sets of parameters.
-
-##### Args:
-
-
-* <b>`prior`</b>: `Normal` object of type `dtype`:
- the prior distribution having parameters `(loc0, scale0)`.
-* <b>`scale`</b>: tensor of type `dtype`, taking values `scale > 0`.
- The known stddev parameter(s).
-* <b>`s`</b>: Tensor of type `dtype`. The sum(s) of observations.
-* <b>`n`</b>: Tensor of type `int`. The number(s) of observations.
-
-##### Returns:
-
- A new Normal posterior distribution object for the unknown observation
- mean `loc`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if dtype of `s` does not match `dtype`, or `prior` is not a
- Normal object.
-
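-A minimal usage sketch (assuming the contrib API documented here):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# n = 10 observations summing to s = 25, known observation scale 2,
-# and an N(0, 3**2) prior on the unknown mean.
-prior = ds.Normal(loc=0., scale=3.)
-posterior = ds.normal_conjugates_known_scale_posterior(
-    prior=prior, scale=2., s=25., n=10)
-# posterior is a Normal over the unknown `loc`; e.g. posterior.mean().
-```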
-
-- - -
-
-### `tf.contrib.distributions.normal_conjugates_known_scale_predictive(prior, scale, s, n)` {#normal_conjugates_known_scale_predictive}
-
-Posterior predictive Normal distribution with conjugate prior on the mean.
-
-This model assumes that `n` observations (with sum `s`) come from a
-Normal with unknown mean `loc` (described by the Normal `prior`)
-and known variance `scale**2`. The "known scale predictive"
-is the distribution of new observations, conditioned on the existing
-observations and our prior.
-
-Accepts a prior Normal distribution object, having parameters
-`loc0` and `scale0`, as well as known `scale` values of the predictive
-distribution(s) (also assumed Normal),
-and statistical estimates `s` (the sum(s) of the observations) and
-`n` (the number(s) of observations).
-
-Calculates the Normal distribution(s) `p(x | sigma**2)`:
-
-```
-p(x | sigma**2) = int N(x | mu, sigma**2)N(mu | prior.loc, prior.scale**2) dmu
-                = N(x | prior.loc, sigma**2 + prior.scale**2)
-```
-
-Returns the predictive posterior distribution object, with parameters
-`(loc', scale'**2)`, where (in the same `mu`/`sigma` notation):
-
-```
-sigma_n**2 = 1/(1/sigma0**2 + n/sigma**2),
-mu' = (mu0/sigma0**2 + s/sigma**2) * sigma_n**2,
-sigma'**2 = sigma_n**2 + sigma**2.
-```
-
-Distribution parameters from `prior`, as well as `scale`, `s`, and `n`,
-will broadcast in the case of multidimensional sets of parameters.
-
-##### Args:
-
-
-* <b>`prior`</b>: `Normal` object of type `dtype`:
- the prior distribution having parameters `(loc0, scale0)`.
-* <b>`scale`</b>: tensor of type `dtype`, taking values `scale > 0`.
- The known stddev parameter(s).
-* <b>`s`</b>: Tensor of type `dtype`. The sum(s) of observations.
-* <b>`n`</b>: Tensor of type `int`. The number(s) of observations.
-
-##### Returns:
-
- A new Normal predictive distribution object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if dtype of `s` does not match `dtype`, or `prior` is not a
- Normal object.
-
-
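-A minimal usage sketch (same prior and sufficient statistics as in the
-posterior example above):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-predictive = ds.normal_conjugates_known_scale_predictive(
-    prior=ds.Normal(loc=0., scale=3.), scale=2., s=25., n=10)
-new_x = predictive.sample(3)  # Hypothetical future observations.
-```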
-
-- - -
-
-### `tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None)` {#kl}
-
-Get the KL-divergence KL(dist_a || dist_b).
-
-If there is no KL method registered specifically for `type(dist_a)` and
-`type(dist_b)`, then the class hierarchies of these types are searched.
-
-If one KL method is registered between any pairs of classes in these two
-parent hierarchies, it is used.
-
-If more than one such registered method exists, the method whose registered
-classes have the shortest sum MRO paths to the input types is used.
-
-If more than one such shortest path exists, the first method
-identified in the search is used (favoring a shorter MRO distance to
-`type(dist_a)`).
-
-##### Args:
-
-
-* <b>`dist_a`</b>: The first distribution.
-* <b>`dist_b`</b>: The second distribution.
-* <b>`allow_nan`</b>: If `False` (default), a runtime error is raised
- if the KL returns NaN values for any batch entry of the given
- distributions. If `True`, the KL may return a NaN for the given entry.
-* <b>`name`</b>: (optional) Name scope to use for created operations.
-
-##### Returns:
-
- A Tensor with the batchwise KL-divergence between dist_a and dist_b.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If no KL method is defined for distribution types
- of dist_a and dist_b.
-
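-A minimal sketch (Normal/Normal has a registered closed-form KL, so this
-returns an analytic `Tensor` rather than raising `NotImplementedError`):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-kl_val = ds.kl(ds.Normal(loc=0., scale=1.), ds.Normal(loc=1., scale=2.))
-```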
-
-- - -
-
-### `class tf.contrib.distributions.RegisterKL` {#RegisterKL}
-
-Decorator to register a KL divergence implementation function.
-
-Usage:
-
-@distributions.RegisterKL(distributions.Normal, distributions.Normal)
-def _kl_normal_mvn(norm_a, norm_b):
-  # Return KL(norm_a || norm_b)
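-
-Below is a fuller, self-contained sketch. `MyNormal` is a hypothetical
-user-defined subclass (an assumption for illustration); Normal/Normal itself
-is already registered in the library, so re-registering that exact pair
-would raise `ValueError`:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-class MyNormal(ds.Normal):
-  """Hypothetical user-defined scalar Gaussian."""
-
-@ds.RegisterKL(MyNormal, MyNormal)
-def _kl_mynormal_mynormal(a, b, name=None):
-  # Closed-form KL between scalar Gaussians:
-  #   log(s_b/s_a) + (s_a**2 + (m_a - m_b)**2) / (2 * s_b**2) - 1/2.
-  with tf.name_scope(name, "kl_mynormal_mynormal", [a.loc, b.loc]):
-    var_ratio = tf.square(a.scale / b.scale)
-    return 0.5 * (var_ratio + tf.square((a.loc - b.loc) / b.scale)
-                  - 1. - tf.log(var_ratio))
-
-# `kl` now dispatches to the registered function for MyNormal pairs.
-kl_val = ds.kl(MyNormal(loc=0., scale=1.), MyNormal(loc=1., scale=2.))
-```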
-- - -
-
-#### `tf.contrib.distributions.RegisterKL.__call__(kl_fn)` {#RegisterKL.__call__}
-
-Perform the KL registration.
-
-##### Args:
-
-
-* <b>`kl_fn`</b>: The function to use for the KL divergence.
-
-##### Returns:
-
- kl_fn
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if kl_fn is not a callable.
-* <b>`ValueError`</b>: if a KL divergence function has already been registered for
- the given argument classes.
-
-
-- - -
-
-#### `tf.contrib.distributions.RegisterKL.__init__(dist_cls_a, dist_cls_b)` {#RegisterKL.__init__}
-
-Initialize the KL registrar.
-
-##### Args:
-
-
-* <b>`dist_cls_a`</b>: the class of the first argument of the KL divergence.
-* <b>`dist_cls_b`</b>: the class of the second argument of the KL divergence.
-
-
-
-
-- - -
-
-### `tf.contrib.distributions.softplus_inverse(x, name=None)` {#softplus_inverse}
-
-Computes the inverse softplus, i.e., x = softplus_inverse(softplus(x)).
-
-Mathematically this op is equivalent to:
-
-```none
-softplus_inverse = log(exp(x) - 1.)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. Non-negative (not enforced), floating-point.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `Tensor`. Has the same type/shape as input `x`.
-
-
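-A minimal numerical check of the round-trip identity:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-x = tf.constant([0.5, 1.0, 2.0])
-# softplus_inverse(softplus(x)) ~= x, up to floating-point error.
-y = ds.softplus_inverse(tf.nn.softplus(x))
-```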
-
-- - -
-
-### `class tf.contrib.distributions.ExpRelaxedOneHotCategorical` {#ExpRelaxedOneHotCategorical}
-
-ExpRelaxedOneHotCategorical distribution with temperature and logits.
-
-An ExpRelaxedOneHotCategorical distribution is a log-transformed
-RelaxedOneHotCategorical distribution. The RelaxedOneHotCategorical is a
-distribution over random probability vectors, vectors of positive real
-values that sum to one, which continuously approximates a OneHotCategorical.
-The degree of approximation is controlled by a temperature: as the temperature
-goes to 0, the RelaxedOneHotCategorical becomes discrete with a distribution
-described by the logits; as the temperature goes to infinity, the
-RelaxedOneHotCategorical becomes the constant distribution that is identically
-the constant vector of (1/event_size, ..., 1/event_size).
-
-Because computing log-probabilities of the RelaxedOneHotCategorical can
-suffer from underflow issues, this class is one solution for loss
-functions that depend on log-probabilities, such as the KL Divergence found
-in the variational autoencoder loss. The KL divergence between two
-distributions is invariant under invertible transformations, so evaluating
-KL divergences of ExpRelaxedOneHotCategorical samples, which are always
-followed by a `tf.exp` op, is equivalent to evaluating KL divergences of
-RelaxedOneHotCategorical samples. See the appendix of Maddison et al., 2016
-for more mathematical details, where this distribution is called the
-ExpConcrete.
-
-#### Examples
-
-Creates a continuous distribution, whose exp approximates a 3-class one-hot
-categorical distribution. The 2nd class is the most likely to be the
-largest component in samples drawn from this distribution. If those samples
-are followed by a `tf.exp` op, then they are distributed as a relaxed one-hot
-categorical.
-
-```python
-temperature = 0.5
-p = [0.1, 0.5, 0.4]
-dist = ExpRelaxedOneHotCategorical(temperature, probs=p)
-samples = dist.sample()
-exp_samples = tf.exp(samples)
-# exp_samples has the same distribution as samples from
-# RelaxedOneHotCategorical(temperature, probs=p)
-```
-
-Creates a continuous distribution, whose exp approximates a 3-class one-hot
-categorical distribution. The 2nd class is the most likely to be the
-largest component in samples drawn from this distribution.
-
-```python
-temperature = 0.5
-logits = [-2, 2, 0]
-dist = ExpRelaxedOneHotCategorical(temperature, logits=logits)
-samples = dist.sample()
-exp_samples = tf.exp(samples)
-# exp_samples has the same distribution as samples from
-# RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Creates a continuous distribution, whose exp approximates a 3-class one-hot
-categorical distribution. Because the temperature is very low, samples from
-this distribution are almost discrete, with one component almost 0 and the
-others very negative. The 2nd class is the most likely to be the largest
-component in samples drawn from this distribution.
-
-```python
-temperature = 1e-5
-logits = [-2, 2, 0]
-dist = ExpRelaxedOneHotCategorical(temperature, logits=logits)
-samples = dist.sample()
-exp_samples = tf.exp(samples)
-# exp_samples has the same distribution as samples from
-# RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Creates a continuous distribution, whose exp approximates a 3-class one-hot
-categorical distribution. Because the temperature is very high, samples from
-this distribution are usually close to the (-log(3), -log(3), -log(3)) vector.
-The 2nd class is still the most likely to be the largest component
-in samples drawn from this distribution.
-
-```python
-temperature = 10
-logits = [-2, 2, 0]
-dist = ExpRelaxedOneHotCategorical(temperature, logits=logits)
-samples = dist.sample()
-exp_samples = tf.exp(samples)
-# exp_samples has the same distribution as samples from
-# RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution:
-A Continuous Relaxation of Discrete Random Variables. 2016.
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.__init__(temperature, logits=None, probs=None, dtype=tf.float32, validate_args=False, allow_nan_stats=True, name='ExpRelaxedOneHotCategorical')` {#ExpRelaxedOneHotCategorical.__init__}
-
-Initialize ExpRelaxedOneHotCategorical using class log-probabilities.
-
-##### Args:
-
-
-* <b>`temperature`</b>: A 0-D `Tensor`, representing the temperature
- of a set of ExpRelaxedCategorical distributions. The temperature should
- be positive.
-* <b>`logits`</b>: An N-D `Tensor`, `N >= 1`, representing the log probabilities
- of a set of ExpRelaxedCategorical distributions. The first
- `N - 1` dimensions index into a batch of independent distributions and
- the last dimension represents a vector of logits for each class. Only
- one of `logits` or `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor`, `N >= 1`, representing the probabilities
- of a set of ExpRelaxedCategorical distributions. The first
- `N - 1` dimensions index into a batch of independent distributions and
- the last dimension represents a vector of probabilities for each
- class. Only one of `logits` or `probs` should be passed in.
-* <b>`dtype`</b>: The type of the event samples (default: float32).
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.allow_nan_stats` {#ExpRelaxedOneHotCategorical.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.batch_shape` {#ExpRelaxedOneHotCategorical.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.batch_shape_tensor(name='batch_shape_tensor')` {#ExpRelaxedOneHotCategorical.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.cdf(value, name='cdf')` {#ExpRelaxedOneHotCategorical.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.copy(**override_parameters_kwargs)` {#ExpRelaxedOneHotCategorical.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.covariance(name='covariance')` {#ExpRelaxedOneHotCategorical.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.dtype` {#ExpRelaxedOneHotCategorical.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.entropy(name='entropy')` {#ExpRelaxedOneHotCategorical.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.event_shape` {#ExpRelaxedOneHotCategorical.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.event_shape_tensor(name='event_shape_tensor')` {#ExpRelaxedOneHotCategorical.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.event_size` {#ExpRelaxedOneHotCategorical.event_size}
-
-Scalar `int32` tensor: the number of classes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.is_continuous` {#ExpRelaxedOneHotCategorical.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.is_scalar_batch(name='is_scalar_batch')` {#ExpRelaxedOneHotCategorical.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.is_scalar_event(name='is_scalar_event')` {#ExpRelaxedOneHotCategorical.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.log_cdf(value, name='log_cdf')` {#ExpRelaxedOneHotCategorical.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.log_prob(value, name='log_prob')` {#ExpRelaxedOneHotCategorical.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.log_survival_function(value, name='log_survival_function')` {#ExpRelaxedOneHotCategorical.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
-                         = Log[ 1 - P[X <= x] ]
-                         = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.logits` {#ExpRelaxedOneHotCategorical.logits}
-
-Vector of coordinatewise logits.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.mean(name='mean')` {#ExpRelaxedOneHotCategorical.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.mode(name='mode')` {#ExpRelaxedOneHotCategorical.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.name` {#ExpRelaxedOneHotCategorical.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#ExpRelaxedOneHotCategorical.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.param_static_shapes(cls, sample_shape)` {#ExpRelaxedOneHotCategorical.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.parameters` {#ExpRelaxedOneHotCategorical.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.prob(value, name='prob')` {#ExpRelaxedOneHotCategorical.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.probs` {#ExpRelaxedOneHotCategorical.probs}
-
-Vector of probabilities summing to one.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.reparameterization_type` {#ExpRelaxedOneHotCategorical.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.sample(sample_shape=(), seed=None, name='sample')` {#ExpRelaxedOneHotCategorical.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.stddev(name='stddev')` {#ExpRelaxedOneHotCategorical.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.survival_function(value, name='survival_function')` {#ExpRelaxedOneHotCategorical.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
-                     = 1 - P[X <= x]
-                     = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.temperature` {#ExpRelaxedOneHotCategorical.temperature}
-
-Batchwise temperature tensor of a RelaxedCategorical.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.validate_args` {#ExpRelaxedOneHotCategorical.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.variance(name='variance')` {#ExpRelaxedOneHotCategorical.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.OneHotCategorical` {#OneHotCategorical}
-
-OneHotCategorical distribution.
-
-The categorical distribution is parameterized by the log-probabilities
-of a set of classes. The difference between OneHotCategorical and Categorical
-distributions is that OneHotCategorical is a discrete distribution over
-one-hot bit vectors whereas Categorical is a discrete distribution over
-positive integers. OneHotCategorical is equivalent to Categorical except
-Categorical has event_dim=() while OneHotCategorical has event_dim=K, where
-K is the number of classes.
-
-This class provides methods to create indexed batches of OneHotCategorical
-distributions. If the provided `logits` or `probs` is rank 2 or higher, for
-every fixed set of leading dimensions, the last dimension represents one
-single OneHotCategorical distribution. When calling distribution
-functions (e.g. `dist.prob(x)`), `logits` and `x` are broadcast to the
-same shape (if possible). In all cases, the last dimension of `logits` and `x`
-represents a single OneHotCategorical distribution.
-
-#### Examples
-
-Creates a 3-class distribution, with the 2nd class the most likely to be
-drawn from.
-
-```python
-p = [0.1, 0.5, 0.4]
-dist = OneHotCategorical(probs=p)
-```
-
-Creates a 3-class distribution, with the 2nd class the most likely to be
-drawn from, using logits.
-
-```python
-logits = [-2, 2, 0]
-dist = OneHotCategorical(logits=logits)
-```
-
-Creates a 3-class distribution, with the 3rd class the most likely to be drawn.
-
-```python
-# p defines a single 3-class distribution.
-p = [0.1, 0.4, 0.5]
-dist = OneHotCategorical(probs=p)
-dist.prob([0,1,0]) # Shape []
-
-# p will be broadcast to [[0.1, 0.4, 0.5], [0.1, 0.4, 0.5]] to match.
-samples = [[0,1,0], [1,0,0]]
-dist.prob(samples) # Shape [2]
-```
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.__init__(logits=None, probs=None, dtype=tf.int32, validate_args=False, allow_nan_stats=True, name='OneHotCategorical')` {#OneHotCategorical.__init__}
-
-Initialize OneHotCategorical distributions using class log-probabilities.
-
-##### Args:
-
-
-* <b>`logits`</b>: An N-D `Tensor`, `N >= 1`, representing the log probabilities of a
- set of Categorical distributions. The first `N - 1` dimensions index
- into a batch of independent distributions and the last dimension
- represents a vector of logits for each class. Only one of `logits` or
- `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor`, `N >= 1`, representing the probabilities of a set
- of Categorical distributions. The first `N - 1` dimensions index into a
- batch of independent distributions and the last dimension represents a
- vector of probabilities for each class. Only one of `logits` or `probs`
- should be passed in.
-* <b>`dtype`</b>: The type of the event samples (default: int32).
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.allow_nan_stats` {#OneHotCategorical.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.batch_shape` {#OneHotCategorical.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.batch_shape_tensor(name='batch_shape_tensor')` {#OneHotCategorical.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.cdf(value, name='cdf')` {#OneHotCategorical.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.copy(**override_parameters_kwargs)` {#OneHotCategorical.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
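-A short sketch of overriding a single constructor argument on copy (the
-distribution and the overridden argument are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.OneHotCategorical(logits=[-2., 2., 0.])
-# Same logits, but with runtime argument checking enabled.
-dist2 = dist.copy(validate_args=True)
-```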
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.covariance(name='covariance')` {#OneHotCategorical.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
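-For a one-hot categorical the covariance has the closed form
-`diag(p) - outer(p, p)`; a short sketch (the identity is standard, though
-treating `covariance` as implemented for this class is an assumption):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-p = [0.1, 0.5, 0.4]
-dist = ds.OneHotCategorical(probs=p)
-cov = dist.covariance()  # shape [3, 3]; equals diag(p) - outer(p, p)
-```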
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.dtype` {#OneHotCategorical.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.entropy(name='entropy')` {#OneHotCategorical.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.event_shape` {#OneHotCategorical.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.event_shape_tensor(name='event_shape_tensor')` {#OneHotCategorical.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.event_size` {#OneHotCategorical.event_size}
-
-Scalar `int32` tensor: the number of classes.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.is_continuous` {#OneHotCategorical.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.is_scalar_batch(name='is_scalar_batch')` {#OneHotCategorical.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.is_scalar_event(name='is_scalar_event')` {#OneHotCategorical.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.log_cdf(value, name='log_cdf')` {#OneHotCategorical.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.log_prob(value, name='log_prob')` {#OneHotCategorical.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.log_survival_function(value, name='log_survival_function')` {#OneHotCategorical.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.logits` {#OneHotCategorical.logits}
-
-Vector of coordinatewise logits.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.mean(name='mean')` {#OneHotCategorical.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.mode(name='mode')` {#OneHotCategorical.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.name` {#OneHotCategorical.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#OneHotCategorical.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.param_static_shapes(cls, sample_shape)` {#OneHotCategorical.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
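-For intuition, a sketch using the scalar `Normal` distribution (`Normal` is
-chosen here purely for illustration and is an assumption, not part of this
-class's documentation):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-shapes = ds.Normal.param_static_shapes([100])
-# {'loc': TensorShape([100]), 'scale': TensorShape([100])}
-```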
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.parameters` {#OneHotCategorical.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.prob(value, name='prob')` {#OneHotCategorical.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.probs` {#OneHotCategorical.probs}
-
-Vector of coordinatewise probabilities.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.reparameterization_type` {#OneHotCategorical.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.sample(sample_shape=(), seed=None, name='sample')` {#OneHotCategorical.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.stddev(name='stddev')` {#OneHotCategorical.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.survival_function(value, name='survival_function')` {#OneHotCategorical.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.validate_args` {#OneHotCategorical.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.variance(name='variance')` {#OneHotCategorical.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.RelaxedBernoulli` {#RelaxedBernoulli}
-
-RelaxedBernoulli distribution with temperature and logits parameters.
-
-The RelaxedBernoulli is a distribution over the unit interval (0, 1) which
-continuously approximates a Bernoulli. The degree of approximation is
-controlled by a temperature: as the temperature goes to 0 the
-RelaxedBernoulli becomes discrete with a distribution described by the
-`logits` or `probs` parameters; as the temperature goes to infinity the
-RelaxedBernoulli becomes the constant distribution that is identically 0.5.
-
-The RelaxedBernoulli distribution is a reparameterized continuous
-distribution that is the binary special case of the RelaxedOneHotCategorical
-distribution (Maddison et al., 2016; Jang et al., 2016). For details on the
-binary special case see the appendix of Maddison et al. (2016) where it is
-referred to as BinConcrete. If you use this distribution, please cite both
-papers.
-
-Some care needs to be taken for loss functions that depend on the
-log-probability of RelaxedBernoullis, because computing log-probabilities of
-the RelaxedBernoulli can suffer from underflow issues. In many cases, such
-loss functions are invariant under invertible transformations of the random
-variables. The KL divergence, found in the variational autoencoder loss, is
-an example. Because a RelaxedBernoulli is sampled by drawing a Logistic
-random variable and applying a `tf.sigmoid` op, one solution is to treat
-the Logistic as the random variable and `tf.sigmoid` as downstream. The
-KL divergence of two Logistics, each always followed by a `tf.sigmoid`
-op, equals the KL divergence of the corresponding RelaxedBernoullis.
-See Maddison et al. (2016), where this distribution is called the
-BinConcrete, for more details.
-
-An alternative approach is to evaluate Bernoulli log probability or KL
-directly on relaxed samples, as done in Jang et al., 2016. In this case,
-guarantees on the loss are usually violated. For instance, using a Bernoulli
-KL in a relaxed ELBO is no longer a lower bound on the log marginal
-probability of the observation. Thus care and early stopping are important.
-
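-A minimal sketch of this reparameterization of the KL (a Monte Carlo
-estimate; the positional `Logistic(loc, scale)` usage mirrors the example
-below, and the logits and sample count are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-temperature = 0.5
-q = ds.Logistic(2. / temperature, 1. / temperature)  # posterior, logits = 2
-p = ds.Logistic(0. / temperature, 1. / temperature)  # prior, logits = 0
-
-# KL[q || p] between the Logistics equals the KL between the corresponding
-# RelaxedBernoullis, but avoids sigmoid-induced underflow.
-x = q.sample(1000)
-kl_estimate = tf.reduce_mean(q.log_prob(x) - p.log_prob(x), axis=0)
-```
-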
-#### Examples
-
-Creates three continuous distributions, which approximate 3 Bernoullis with
-probabilities (0.1, 0.5, 0.4). Samples from these distributions will be in
-the unit interval (0,1).
-
-```python
-temperature = 0.5
-p = [0.1, 0.5, 0.4]
-dist = RelaxedBernoulli(temperature, probs=p)
-```
-
-Creates three continuous distributions, which approximate 3 Bernoullis with
-logits (-2, 2, 0). Samples from these distributions will be in
-the unit interval (0,1).
-
-```python
-temperature = 0.5
-logits = [-2, 2, 0]
-dist = RelaxedBernoulli(temperature, logits=logits)
-```
-
-Creates three Logistic distributions such that the sigmoid of their samples
-approximates 3 Bernoullis with logits (-2, 2, 0).
-
-```python
-temperature = 0.5
-logits = [-2, 2, 0]
-dist = Logistic(logits/temperature, 1./temperature)
-samples = dist.sample()
-sigmoid_samples = tf.sigmoid(samples)
-# sigmoid_samples has the same distribution as samples from
-# RelaxedBernoulli(temperature, logits=logits)
-```
-
-Creates three continuous distributions, which approximate 3 Bernoullis with
-logits (-2, 2, 0). Samples from these distributions will be in
-the unit interval (0,1). Because the temperature is very low, samples from
-these distributions are almost discrete, usually taking values very close to 0
-or 1.
-
-```python
-temperature = 1e-5
-logits = [-2, 2, 0]
-dist = RelaxedBernoulli(temperature, logits=logits)
-```
-
-Creates three continuous distributions, which approximate 3 Bernoullis with
-logits (-2, 2, 0). Samples from these distributions will be in
-the unit interval (0,1). Because the temperature is very high, samples from
-these distributions are usually close to the (0.5, 0.5, 0.5) vector.
-
-```python
-temperature = 100
-logits = [-2, 2, 0]
-dist = RelaxedBernoulli(temperature, logits=logits)
-```
-
-Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution:
-A Continuous Relaxation of Discrete Random Variables. 2016.
-
-Eric Jang, Shixiang Gu, and Ben Poole. Categorical Reparameterization with
-Gumbel-Softmax. 2016.
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.__init__(temperature, logits=None, probs=None, validate_args=False, allow_nan_stats=True, name='RelaxedBernoulli')` {#RelaxedBernoulli.__init__}
-
-Construct RelaxedBernoulli distributions.
-
-##### Args:
-
-
-* <b>`temperature`</b>: A 0-D `Tensor`, representing the temperature
- of a set of RelaxedBernoulli distributions. The temperature should be
- positive.
-* <b>`logits`</b>: An N-D `Tensor` representing the log-odds
- of a positive event. Each entry in the `Tensor` parametrizes
- an independent RelaxedBernoulli distribution where the probability of an
- event is sigmoid(logits). Only one of `logits` or `probs` should be
- passed in.
-* <b>`probs`</b>: An N-D `Tensor` representing the probability of a positive event.
- Each entry in the `Tensor` parameterizes an independent Bernoulli
- distribution. Only one of `logits` or `probs` should be passed in.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `probs` and `logits` are passed, or if neither.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.allow_nan_stats` {#RelaxedBernoulli.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.batch_shape` {#RelaxedBernoulli.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.batch_shape_tensor(name='batch_shape_tensor')` {#RelaxedBernoulli.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.bijector` {#RelaxedBernoulli.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.cdf(value, name='cdf')` {#RelaxedBernoulli.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.copy(**override_parameters_kwargs)` {#RelaxedBernoulli.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.covariance(name='covariance')` {#RelaxedBernoulli.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.distribution` {#RelaxedBernoulli.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.dtype` {#RelaxedBernoulli.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.entropy(name='entropy')` {#RelaxedBernoulli.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.event_shape` {#RelaxedBernoulli.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.event_shape_tensor(name='event_shape_tensor')` {#RelaxedBernoulli.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.is_continuous` {#RelaxedBernoulli.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.is_scalar_batch(name='is_scalar_batch')` {#RelaxedBernoulli.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.is_scalar_event(name='is_scalar_event')` {#RelaxedBernoulli.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.log_cdf(value, name='log_cdf')` {#RelaxedBernoulli.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.log_prob(value, name='log_prob')` {#RelaxedBernoulli.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.log_survival_function(value, name='log_survival_function')` {#RelaxedBernoulli.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.logits` {#RelaxedBernoulli.logits}
-
-Log-odds of `1`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.mean(name='mean')` {#RelaxedBernoulli.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.mode(name='mode')` {#RelaxedBernoulli.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.name` {#RelaxedBernoulli.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#RelaxedBernoulli.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.param_static_shapes(cls, sample_shape)` {#RelaxedBernoulli.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.parameters` {#RelaxedBernoulli.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.prob(value, name='prob')` {#RelaxedBernoulli.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.probs` {#RelaxedBernoulli.probs}
-
-Probability of `1`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.reparameterization_type` {#RelaxedBernoulli.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
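-Because the RelaxedBernoulli is reparameterized (see the class description
-above), gradients can flow through its samples; a quick sketch:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.RelaxedBernoulli(0.5, logits=[-2., 2., 0.])
-assert dist.reparameterization_type == ds.FULLY_REPARAMETERIZED
-```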
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.sample(sample_shape=(), seed=None, name='sample')` {#RelaxedBernoulli.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.stddev(name='stddev')` {#RelaxedBernoulli.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.survival_function(value, name='survival_function')` {#RelaxedBernoulli.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.temperature` {#RelaxedBernoulli.temperature}
-
-Distribution parameter for the temperature.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.validate_args` {#RelaxedBernoulli.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.variance(name='variance')` {#RelaxedBernoulli.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.RelaxedOneHotCategorical` {#RelaxedOneHotCategorical}
-
-RelaxedOneHotCategorical distribution with temperature and logits.
-
-The RelaxedOneHotCategorical is a distribution over random probability
-vectors (vectors of positive real values that sum to one) which continuously
-approximates a OneHotCategorical. The degree of approximation is controlled
-by a temperature: as the temperature goes to 0 the RelaxedOneHotCategorical
-becomes discrete with a distribution described by the `logits` or `probs`
-parameters; as the temperature goes to infinity the RelaxedOneHotCategorical
-becomes the distribution that is identically the constant vector
-`(1/event_size, ..., 1/event_size)`.
-
-The RelaxedOneHotCategorical distribution was concurrently introduced as the
-Gumbel-Softmax (Jang et al., 2016) and Concrete (Maddison et al., 2016)
-distributions for use as a reparameterized continuous approximation to the
-`Categorical` one-hot distribution. If you use this distribution, please cite
-both papers.
-
-#### Examples
-
-Creates a continuous distribution, which approximates a 3-class one-hot
-categorical distribution. The 2nd class is the most likely to be the
-largest component in samples drawn from this distribution.
-
-```python
-temperature = 0.5
-p = [0.1, 0.5, 0.4]
-dist = RelaxedOneHotCategorical(temperature, probs=p)
-```
-
-Creates a continuous distribution, which approximates a 3-class one-hot
-categorical distribution. The 2nd class is the most likely to be the
-largest component in samples drawn from this distribution.
-
-```python
-temperature = 0.5
-logits = [-2, 2, 0]
-dist = RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Creates a continuous distribution, which approximates a 3-class one-hot
-categorical distribution. Because the temperature is very low, samples from
-this distribution are almost discrete, with one component almost 1 and the
-others nearly 0. The 2nd class is the most likely to be the largest component
-in samples drawn from this distribution.
-
-```python
-temperature = 1e-5
-logits = [-2, 2, 0]
-dist = RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Creates a continuous distribution, which approximates a 3-class one-hot
-categorical distribution. Because the temperature is very high, samples from
-this distribution are usually close to the (1/3, 1/3, 1/3) vector. The 2nd
-class is still the most likely to be the largest component
-in samples drawn from this distribution.
-
-```python
-temperature = 10
-logits = [-2, 2, 0]
-dist = RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
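-Samples from any of the above distributions are probability vectors; a short
-sketch drawing one and scoring it (a usage illustration, not from the
-original text):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.RelaxedOneHotCategorical(0.5, logits=[-2., 2., 0.])
-x = dist.sample()         # shape [3]; positive entries summing to 1
-log_p = dist.log_prob(x)
-```
-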
-Eric Jang, Shixiang Gu, and Ben Poole. Categorical Reparameterization with
-Gumbel-Softmax. 2016.
-
-Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution:
-A Continuous Relaxation of Discrete Random Variables. 2016.
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.__init__(temperature, logits=None, probs=None, dtype=tf.float32, validate_args=False, allow_nan_stats=True, name='RelaxedOneHotCategorical')` {#RelaxedOneHotCategorical.__init__}
-
-Initialize RelaxedOneHotCategorical using class log-probabilities.
-
-##### Args:
-
-
-* <b>`temperature`</b>: A 0-D `Tensor`, representing the temperature
- of a set of RelaxedOneHotCategorical distributions. The temperature
- should be positive.
-* <b>`logits`</b>: An N-D `Tensor`, `N >= 1`, representing the log probabilities
- of a set of RelaxedOneHotCategorical distributions. The first
- `N - 1` dimensions index into a batch of independent distributions and
- the last dimension represents a vector of logits for each class. Only
- one of `logits` or `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor`, `N >= 1`, representing the probabilities
- of a set of RelaxedOneHotCategorical distributions. The first `N - 1`
- dimensions index into a batch of independent distributions and the last
- dimension represents a vector of probabilities for each class. Only one
- of `logits` or `probs` should be passed in.
-* <b>`dtype`</b>: The type of the event samples (default: float32).
-* <b>`validate_args`</b>: Unused in this distribution.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. If `False`, raise an
- exception if a statistic (e.g. mean/mode/etc...) is undefined for any
- batch member. If `True`, batch members with valid parameters leading to
- undefined statistics will return NaN for this statistic.
-* <b>`name`</b>: A name for this distribution (optional).
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.allow_nan_stats` {#RelaxedOneHotCategorical.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.batch_shape` {#RelaxedOneHotCategorical.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.batch_shape_tensor(name='batch_shape_tensor')` {#RelaxedOneHotCategorical.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.bijector` {#RelaxedOneHotCategorical.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.cdf(value, name='cdf')` {#RelaxedOneHotCategorical.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.copy(**override_parameters_kwargs)` {#RelaxedOneHotCategorical.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.covariance(name='covariance')` {#RelaxedOneHotCategorical.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.distribution` {#RelaxedOneHotCategorical.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.dtype` {#RelaxedOneHotCategorical.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.entropy(name='entropy')` {#RelaxedOneHotCategorical.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.event_shape` {#RelaxedOneHotCategorical.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.event_shape_tensor(name='event_shape_tensor')` {#RelaxedOneHotCategorical.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.is_continuous` {#RelaxedOneHotCategorical.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.is_scalar_batch(name='is_scalar_batch')` {#RelaxedOneHotCategorical.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.is_scalar_event(name='is_scalar_event')` {#RelaxedOneHotCategorical.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.log_cdf(value, name='log_cdf')` {#RelaxedOneHotCategorical.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.log_prob(value, name='log_prob')` {#RelaxedOneHotCategorical.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.log_survival_function(value, name='log_survival_function')` {#RelaxedOneHotCategorical.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.mean(name='mean')` {#RelaxedOneHotCategorical.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.mode(name='mode')` {#RelaxedOneHotCategorical.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.name` {#RelaxedOneHotCategorical.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#RelaxedOneHotCategorical.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.param_static_shapes(cls, sample_shape)` {#RelaxedOneHotCategorical.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.parameters` {#RelaxedOneHotCategorical.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.prob(value, name='prob')` {#RelaxedOneHotCategorical.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.reparameterization_type` {#RelaxedOneHotCategorical.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.sample(sample_shape=(), seed=None, name='sample')` {#RelaxedOneHotCategorical.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.stddev(name='stddev')` {#RelaxedOneHotCategorical.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.survival_function(value, name='survival_function')` {#RelaxedOneHotCategorical.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.validate_args` {#RelaxedOneHotCategorical.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.variance(name='variance')` {#RelaxedOneHotCategorical.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-
-## Other Functions and Classes
-- - -
-
-### `class tf.contrib.distributions.ConditionalDistribution` {#ConditionalDistribution}
-
-Distribution that supports intrinsic parameters (local latents).
-
-Subclasses of this distribution may have additional keyword arguments passed
-to their sample-based methods (i.e. `sample`, `log_prob`, etc.).
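-
-A minimal, hypothetical sketch of the mechanism (the subclass, its `shift`
-argument, and its implementations are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-class _ShiftedNormal(ds.ConditionalDistribution):
-  """Unit-scale normal whose location is supplied at call time."""
-
-  def __init__(self):
-    super(_ShiftedNormal, self).__init__(
-        dtype=tf.float32,
-        is_continuous=True,
-        reparameterization_type=ds.FULLY_REPARAMETERIZED,
-        validate_args=False,
-        allow_nan_stats=True,
-        name="ShiftedNormal")
-
-  def _sample_n(self, n, seed=None, shift=0.):
-    return shift + tf.random_normal([n], seed=seed)
-
-  def _log_prob(self, x, shift=0.):
-    return ds.Normal(loc=shift, scale=1.).log_prob(x)
-
-dist = _ShiftedNormal()
-x = dist.sample(5, shift=3.)        # `shift` is forwarded to _sample_n
-log_p = dist.log_prob(x, shift=3.)  # ... and to _log_prob
-```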
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.__init__(dtype, is_continuous, reparameterization_type, validate_args, allow_nan_stats, parameters=None, graph_parents=None, name=None)` {#ConditionalDistribution.__init__}
-
-Constructs the `Distribution`.
-
-**This is a private method for subclass use.**
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of the event samples. `None` implies no type-enforcement.
-* <b>`is_continuous`</b>: Python `bool`. If `True` this `Distribution` is continuous
- over its supported domain.
-* <b>`reparameterization_type`</b>: Instance of `ReparameterizationType`.
- If `distributions.FULLY_REPARAMETERIZED`, this
- `Distribution` can be reparameterized in terms of some standard
- distribution with a function whose Jacobian is constant for the support
- of the standard distribution. If `distributions.NOT_REPARAMETERIZED`,
- then no such reparameterization is available.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`parameters`</b>: Python `dict` of parameters used to instantiate this
- `Distribution`.
-* <b>`graph_parents`</b>: Python `list` of graph prerequisites of this
- `Distribution`.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class. Default:
- subclass name.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any member of graph_parents is `None` or not a `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.allow_nan_stats` {#ConditionalDistribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.batch_shape` {#ConditionalDistribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.batch_shape_tensor(name='batch_shape_tensor')` {#ConditionalDistribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.cdf(*args, **kwargs)` {#ConditionalDistribution.cdf}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.copy(**override_parameters_kwargs)` {#ConditionalDistribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.covariance(name='covariance')` {#ConditionalDistribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.dtype` {#ConditionalDistribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.entropy(name='entropy')` {#ConditionalDistribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.event_shape` {#ConditionalDistribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.event_shape_tensor(name='event_shape_tensor')` {#ConditionalDistribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.is_continuous` {#ConditionalDistribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.is_scalar_batch(name='is_scalar_batch')` {#ConditionalDistribution.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.is_scalar_event(name='is_scalar_event')` {#ConditionalDistribution.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.log_cdf(*args, **kwargs)` {#ConditionalDistribution.log_cdf}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.log_prob(*args, **kwargs)` {#ConditionalDistribution.log_prob}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.log_survival_function(*args, **kwargs)` {#ConditionalDistribution.log_survival_function}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.mean(name='mean')` {#ConditionalDistribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.mode(name='mode')` {#ConditionalDistribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.name` {#ConditionalDistribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#ConditionalDistribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
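-For example (a sketch, assuming the contrib `Normal` distribution whose
-parameters are `loc` and `scale`):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-# Parameter shapes needed so that `sample()` returns a [100]-shaped Tensor.
-shapes = ds.Normal.param_shapes([100])
-# e.g. {'loc': <Tensor shape [100]>, 'scale': <Tensor shape [100]>}
-```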
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.param_static_shapes(cls, sample_shape)` {#ConditionalDistribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.parameters` {#ConditionalDistribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.prob(*args, **kwargs)` {#ConditionalDistribution.prob}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.reparameterization_type` {#ConditionalDistribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.sample(*args, **kwargs)` {#ConditionalDistribution.sample}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.stddev(name='stddev')` {#ConditionalDistribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.survival_function(*args, **kwargs)` {#ConditionalDistribution.survival_function}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.validate_args` {#ConditionalDistribution.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.variance(name='variance')` {#ConditionalDistribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
-- - -
-
-### `class tf.contrib.distributions.ConditionalTransformedDistribution` {#ConditionalTransformedDistribution}
-
-A TransformedDistribution that allows intrinsic conditioning.
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.__init__(distribution, bijector=None, batch_shape=None, event_shape=None, validate_args=False, name=None)` {#ConditionalTransformedDistribution.__init__}
-
-Construct a `TransformedDistribution`.
-
-##### Args:
-
-
-* <b>`distribution`</b>: The base distribution instance to transform. Typically an
- instance of `Distribution`.
-* <b>`bijector`</b>: The object responsible for calculating the transformation.
- Typically an instance of `Bijector`. `None` means `Identity()`.
-* <b>`batch_shape`</b>: `integer` vector `Tensor` which overrides `distribution`
- `batch_shape`; valid only if `distribution.is_scalar_batch()`.
-* <b>`event_shape`</b>: `integer` vector `Tensor` which overrides `distribution`
- `event_shape`; valid only if `distribution.is_scalar_event()`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class. Default:
- `bijector.name + distribution.name`.
-
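-A minimal construction sketch (assuming the contrib `Normal(loc, scale)`
-distribution and `Exp` bijector; with a conditional bijector, extra named
-arguments could be forwarded per call via `bijector_kwargs`):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-log_normal = ds.ConditionalTransformedDistribution(
-    distribution=ds.Normal(loc=0., scale=1.),
-    bijector=ds.bijector.Exp(),
-    name='LogNormalTD')
-# No conditioning in this sketch, so the forwarded kwargs dicts are empty.
-lp = log_normal.log_prob(1., bijector_kwargs={}, distribution_kwargs={})
-```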
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.allow_nan_stats` {#ConditionalTransformedDistribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.batch_shape` {#ConditionalTransformedDistribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.batch_shape_tensor(name='batch_shape_tensor')` {#ConditionalTransformedDistribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.bijector` {#ConditionalTransformedDistribution.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.cdf(*args, **kwargs)` {#ConditionalTransformedDistribution.cdf}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.copy(**override_parameters_kwargs)` {#ConditionalTransformedDistribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.covariance(name='covariance')` {#ConditionalTransformedDistribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrix,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.distribution` {#ConditionalTransformedDistribution.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.dtype` {#ConditionalTransformedDistribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.entropy(name='entropy')` {#ConditionalTransformedDistribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.event_shape` {#ConditionalTransformedDistribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.event_shape_tensor(name='event_shape_tensor')` {#ConditionalTransformedDistribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.is_continuous` {#ConditionalTransformedDistribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.is_scalar_batch(name='is_scalar_batch')` {#ConditionalTransformedDistribution.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.is_scalar_event(name='is_scalar_event')` {#ConditionalTransformedDistribution.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.log_cdf(*args, **kwargs)` {#ConditionalTransformedDistribution.log_cdf}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.log_prob(*args, **kwargs)` {#ConditionalTransformedDistribution.log_prob}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.log_survival_function(*args, **kwargs)` {#ConditionalTransformedDistribution.log_survival_function}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.mean(name='mean')` {#ConditionalTransformedDistribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.mode(name='mode')` {#ConditionalTransformedDistribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.name` {#ConditionalTransformedDistribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#ConditionalTransformedDistribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.param_static_shapes(cls, sample_shape)` {#ConditionalTransformedDistribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.parameters` {#ConditionalTransformedDistribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.prob(*args, **kwargs)` {#ConditionalTransformedDistribution.prob}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.reparameterization_type` {#ConditionalTransformedDistribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.sample(*args, **kwargs)` {#ConditionalTransformedDistribution.sample}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.stddev(name='stddev')` {#ConditionalTransformedDistribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.survival_function(*args, **kwargs)` {#ConditionalTransformedDistribution.survival_function}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.validate_args` {#ConditionalTransformedDistribution.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.variance(name='variance')` {#ConditionalTransformedDistribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md b/tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md
deleted file mode 100644
index e420e4687f..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.ffmpeg.md
+++ /dev/null
@@ -1,61 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# FFmpeg (contrib)
-[TOC]
-
-Working with audio using FFmpeg. See the @{$python/contrib.ffmpeg} guide.
-
-- - -
-
-### `tf.contrib.ffmpeg.decode_audio(contents, file_format=None, samples_per_second=None, channel_count=None)` {#decode_audio}
-
-Create an op that decodes the contents of an audio file.
-
-Note that ffmpeg is free to select the "best" audio track from an mp4.
-https://trac.ffmpeg.org/wiki/Map
-
-##### Args:
-
-
-* <b>`contents`</b>: The binary contents of the audio file to decode. This is a
- scalar.
-* <b>`file_format`</b>: A string specifying which format the contents will conform
- to. This can be mp3, mp4, ogg, or wav.
-* <b>`samples_per_second`</b>: The number of samples per second that is assumed.
- In some cases, resampling will occur to generate the correct sample
- rate.
-* <b>`channel_count`</b>: The number of channels that should be created from the
- audio contents. If the contents have more than this number, then
- some channels will be merged or dropped. If the contents have fewer than
- this, then additional channels will be created from the existing ones.
-
-##### Returns:
-
- A rank 2 tensor that has time along dimension 0 and channels along
- dimension 1. Dimension 0 will be `samples_per_second * length` samples long,
- where `length` is the duration of the audio in seconds, and dimension 1
- will be `channel_count` wide. If ffmpeg fails to decode the
- audio then an empty tensor will be returned.
-
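-A minimal usage sketch (assumes ffmpeg is installed; 'song.mp3' is a
-hypothetical input file):
-
-```python
-import tensorflow as tf
-
-binary = tf.read_file('song.mp3')  # hypothetical input file
-waveform = tf.contrib.ffmpeg.decode_audio(
-    binary, file_format='mp3', samples_per_second=44100, channel_count=2)
-with tf.Session() as sess:
-  samples = sess.run(waveform)  # shape: [time, 2]
-```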
-
-- - -
-
-### `tf.contrib.ffmpeg.encode_audio(audio, file_format=None, samples_per_second=None)` {#encode_audio}
-
-Creates an op that encodes an audio file using sampled audio from a tensor.
-
-##### Args:
-
-
-* <b>`audio`</b>: A rank 2 tensor that has time along dimension 0 and channels along
- dimension 1. Dimension 0 is `samples_per_second * length` samples long,
- where `length` is the duration of the audio in seconds.
-* <b>`file_format`</b>: The type of file to encode. "wav" is the only supported format.
-* <b>`samples_per_second`</b>: The number of samples in the audio tensor per second of
- audio.
-
-##### Returns:
-
- A scalar tensor that contains the encoded audio in the specified file
- format.
-
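-And the reverse direction, sketched with one second of stereo silence:
-
-```python
-import tensorflow as tf
-
-silence = tf.zeros([44100, 2])  # one second of stereo silence
-wav_bytes = tf.contrib.ffmpeg.encode_audio(
-    silence, file_format='wav', samples_per_second=44100)
-```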
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.framework.md b/tensorflow/g3doc/api_docs/python/contrib.framework.md
deleted file mode 100644
index 5b00ee590a..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.framework.md
+++ /dev/null
@@ -1,1205 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Framework (contrib)
-[TOC]
-
-Framework utilities. See the @{$python/contrib.framework} guide.
-
-- - -
-
-### `tf.contrib.framework.assert_same_float_dtype(tensors=None, dtype=None)` {#assert_same_float_dtype}
-
-Validate and return float type based on `tensors` and `dtype`.
-
-For ops such as matrix multiplication, inputs and weights must be of the
-same float type. This function validates that all `tensors` are the same type,
-validates that type is `dtype` (if supplied), and returns the type. Type must
-be `dtypes.float32` or `dtypes.float64`. If neither `tensors` nor
-`dtype` is supplied, default to `dtypes.float32`.
-
-##### Args:
-
-
-* <b>`tensors`</b>: Tensors of input values. Can include `None` elements, which will be
- ignored.
-* <b>`dtype`</b>: Expected type.
-
-##### Returns:
-
- Validated type.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if neither `tensors` nor `dtype` is supplied, or result is not
- float.
-
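-For example (a minimal sketch):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1.0], dtype=tf.float32)
-y = tf.constant([2.0], dtype=tf.float32)
-# Returns tf.float32; would raise ValueError if x and y disagreed.
-dtype = tf.contrib.framework.assert_same_float_dtype([x, y])
-```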
-
-- - -
-
-### `tf.contrib.framework.assert_scalar(tensor, name=None)` {#assert_scalar}
-
-
-
-
-- - -
-
-### `tf.contrib.framework.assert_scalar_int(tensor, name=None)` {#assert_scalar_int}
-
-Assert `tensor` is 0-D, of type `tf.int32` or `tf.int64`.
-
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` to test.
-* <b>`name`</b>: Name of the op and of the new `Tensor` if one is created.
-
-##### Returns:
-
- `tensor`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `tensor` is not 0-D, or is not of type `tf.int32` or `tf.int64`.
-
-
-- - -
-
-### `tf.convert_to_tensor_or_sparse_tensor(value, dtype=None, name=None)` {#convert_to_tensor_or_sparse_tensor}
-
-Converts value to a `SparseTensor` or `Tensor`.
-
-##### Args:
-
-
-* <b>`value`</b>: A `SparseTensor`, `SparseTensorValue`, or an object whose type has a
- registered `Tensor` conversion function.
-* <b>`dtype`</b>: Optional element type for the returned tensor. If missing, the
- type is inferred from the type of `value`.
-* <b>`name`</b>: Optional name to use if a new `Tensor` is created.
-
-##### Returns:
-
- A `SparseTensor` or `Tensor` based on `value`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If result type is incompatible with `dtype`.
-
-
-- - -
-
-### `tf.contrib.framework.get_graph_from_inputs(op_input_list, graph=None)` {#get_graph_from_inputs}
-
-Returns the appropriate graph to use for the given inputs.
-
-1. If `graph` is provided, we validate that all inputs in `op_input_list` are
- from the same graph.
-2. Otherwise, we attempt to select a graph from the first Operation- or
- Tensor-valued input in `op_input_list`, and validate that all other
- such inputs are in the same graph.
-3. If the graph was not specified and it could not be inferred from
- `op_input_list`, we attempt to use the default graph.
-
-##### Args:
-
-
-* <b>`op_input_list`</b>: A list of inputs to an operation, which may include `Tensor`,
- `Operation`, and other objects that may be converted to a graph element.
-* <b>`graph`</b>: (Optional) The explicit graph to use.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_input_list` is not a list or tuple, or if graph is not a
- Graph.
-* <b>`ValueError`</b>: If a graph is explicitly passed and not all inputs are from it,
- or if the inputs are from multiple graphs, or we could not find a graph
- and there was no default graph.
-
-##### Returns:
-
- The appropriate graph to use for the given inputs.
-
-
-- - -
-
-### `tf.is_numeric_tensor(tensor)` {#is_numeric_tensor}
-
-
-
-
-- - -
-
-### `tf.is_non_decreasing(x, name=None)` {#is_non_decreasing}
-
-Returns `True` if `x` is non-decreasing.
-
-Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
-is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.
-If `x` has fewer than two elements, it is trivially non-decreasing.
-
-See also: `is_strictly_increasing`
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "is_non_decreasing"
-
-##### Returns:
-
- Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `x` is not a numeric tensor.
-
-
-- - -
-
-### `tf.is_strictly_increasing(x, name=None)` {#is_strictly_increasing}
-
-Returns `True` if `x` is strictly increasing.
-
-Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
-is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.
-If `x` has fewer than two elements, it is trivially strictly increasing.
-
-See also: `is_non_decreasing`
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`name`</b>: A name for this operation (optional).
- Defaults to "is_strictly_increasing"
-
-##### Returns:
-
- Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `x` is not a numeric tensor.
-
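-A quick sketch of both predicates:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1, 2, 2, 3])
-with tf.Session() as sess:
-  print(sess.run(tf.is_non_decreasing(x)))       # True
-  print(sess.run(tf.is_strictly_increasing(x)))  # False (the 2 repeats)
-```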
-
-- - -
-
-### `tf.contrib.framework.is_tensor(x)` {#is_tensor}
-
-Check for tensor types.
-
-Check whether an object is a tensor. Equivalent to
-`isinstance(x, [tf.Tensor, tf.SparseTensor, tf.Variable])`.
-
-##### Args:
-
-
-* <b>`x`</b>: A Python object to check.
-
-##### Returns:
-
- `True` if `x` is a tensor, `False` if not.
-
-
-- - -
-
-### `tf.contrib.framework.reduce_sum_n(tensors, name=None)` {#reduce_sum_n}
-
-Reduce tensors to a scalar sum.
-
-This reduces each tensor in `tensors` to a scalar via `tf.reduce_sum`, then
-adds them via `tf.add_n`.
-
-##### Args:
-
-
-* <b>`tensors`</b>: List of tensors, all of the same numeric type.
-* <b>`name`</b>: Tensor name, and scope for all other ops.
-
-##### Returns:
-
- Scalar `Tensor` that is the total sum of `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `tensors` is missing or empty.
-
-
-- - -
-
-### `tf.contrib.framework.remove_squeezable_dimensions(predictions, labels, name=None)` {#remove_squeezable_dimensions}
-
-Squeeze last dim if ranks of `predictions` and `labels` differ by 1.
-
-This will use static shape if available. Otherwise, it will add graph
-operations, which could result in a performance hit.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Predicted values, a `Tensor` of arbitrary dimensions.
-* <b>`labels`</b>: Label values, a `Tensor` whose dimensions match `predictions`.
-* <b>`name`</b>: Name of the op.
-
-##### Returns:
-
- Tuple of `predictions` and `labels`, possibly with last dim squeezed.
-
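-For example (sketch):
-
-```python
-import tensorflow as tf
-
-predictions = tf.zeros([4, 1])
-labels = tf.zeros([4])
-predictions, labels = tf.contrib.framework.remove_squeezable_dimensions(
-    predictions, labels)
-# predictions now has shape [4], matching labels.
-```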
-
-- - -
-
-### `tf.contrib.framework.with_shape(expected_shape, tensor)` {#with_shape}
-
-Asserts tensor has expected shape.
-
-If the shapes of `tensor` and `expected_shape` are fully defined, assert that
-they match. Otherwise, add an assert op that will validate the shape when
-`tensor` is evaluated, and set the shape on `tensor`.
-
-##### Args:
-
-
-* <b>`expected_shape`</b>: Expected shape to assert, as a 1D array of ints, or tensor
- of same.
-* <b>`tensor`</b>: Tensor whose shape we're validating.
-
-##### Returns:
-
- tensor, perhaps with a dependent assert operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if tensor has an invalid shape.
-
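-Sketch:
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32)  # shape unknown statically
-x = tf.contrib.framework.with_shape([2, 3], x)
-# x now carries static shape [2, 3]; a runtime assert validates it.
-```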
-
-- - -
-
-### `tf.contrib.framework.with_same_shape(expected_tensor, tensor)` {#with_same_shape}
-
-Assert tensors are the same shape, from the same graph.
-
-##### Args:
-
-
-* <b>`expected_tensor`</b>: Tensor with expected shape.
-* <b>`tensor`</b>: Tensor of actual values.
-
-##### Returns:
-
- Tuple of (actual_tensor, label_tensor), possibly with assert ops added.
-
-
-
-- - -
-
-### `tf.contrib.framework.deprecated(date, instructions)` {#deprecated}
-
-Decorator for marking functions or methods deprecated.
-
-This decorator logs a deprecation warning whenever the decorated function is
-called. It has the following format:
-
- <function> (from <module>) is deprecated and will be removed after <date>.
- Instructions for updating:
- <instructions>
-
-<function> will include the class name if it is a method.
-
-It also edits the docstring of the function: ' (deprecated)' is appended
-to the first line of the docstring and a deprecation notice is prepended
-to the rest of the docstring.
-
-##### Args:
-
-
-* <b>`date`</b>: String. The date the function is scheduled to be removed. Must be
- ISO 8601 (YYYY-MM-DD).
-* <b>`instructions`</b>: String. Instructions on how to update code using the
- deprecated function.
-
-##### Returns:
-
- Decorated function or method.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If date is not in ISO 8601 format, or instructions are empty.
-
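-For example (a sketch with a hypothetical function):
-
-```python
-import tensorflow as tf
-
-@tf.contrib.framework.deprecated('2017-06-30', 'Use add_v2 instead.')
-def add_v1(a, b):
-  return a + b
-
-add_v1(1, 2)  # logs the deprecation warning, then returns 3
-```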
-
-- - -
-
-### `tf.contrib.framework.deprecated_args(date, instructions, *deprecated_arg_names_or_tuples)` {#deprecated_args}
-
-Decorator for marking specific function arguments as deprecated.
-
-This decorator logs a deprecation warning whenever the decorated function is
-called with the deprecated argument. It has the following format:
-
- Calling <function> (from <module>) with <arg> is deprecated and will be
- removed after <date>. Instructions for updating:
- <instructions>
-
-<function> will include the class name if it is a method.
-
-It also edits the docstring of the function: ' (deprecated arguments)' is
-appended to the first line of the docstring and a deprecation notice is
-prepended to the rest of the docstring.
-
-##### Args:
-
-
-* <b>`date`</b>: String. The date the function is scheduled to be removed. Must be
- ISO 8601 (YYYY-MM-DD).
-* <b>`instructions`</b>: String. Instructions on how to update code using the
- deprecated function.
-* <b>`*deprecated_arg_names_or_tuples`</b>: String or 2-tuple (String,
- [ok_vals]). The string is the deprecated argument name.
- Optionally, a list of ok-values may be provided. If the user-provided
- argument equals one of these values, the warning is suppressed.
-
-##### Returns:
-
- Decorated function or method.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If date is not in ISO 8601 format, instructions are
- empty, the deprecated arguments are not present in the function
- signature, or the second element of a deprecated_tuple is not a
- list.
-
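-Sketch (hypothetical function; `keep_prob` always warns, `scale` warns only
-when it differs from the ok-value 1.0):
-
-```python
-import tensorflow as tf
-
-@tf.contrib.framework.deprecated_args(
-    '2017-06-30', 'Pass rate instead.', 'keep_prob', ('scale', [1.0]))
-def dropout_v1(x, rate=None, keep_prob=None, scale=1.0):
-  return x
-
-dropout_v1(1.0, keep_prob=0.5)  # warns about keep_prob
-dropout_v1(1.0, scale=1.0)      # no warning: 1.0 is an ok-value
-```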
-
-- - -
-
-### `tf.contrib.framework.deprecated_arg_values(date, instructions, **deprecated_kwargs)` {#deprecated_arg_values}
-
-Decorator for marking specific function argument values as deprecated.
-
-This decorator logs a deprecation warning whenever the decorated function is
-called with the deprecated argument values. It has the following format:
-
- Calling <function> (from <module>) with <arg>=<value> is deprecated and
- will be removed after <date>. Instructions for updating:
- <instructions>
-
-<function> will include the class name if it is a method.
-
-It also edits the docstring of the function: ' (deprecated arguments)' is
-appended to the first line of the docstring and a deprecation notice is
-prepended to the rest of the docstring.
-
-##### Args:
-
-
-* <b>`date`</b>: String. The date the function is scheduled to be removed. Must be
- ISO 8601 (YYYY-MM-DD).
-* <b>`instructions`</b>: String. Instructions on how to update code using the
- deprecated function.
-* <b>`**deprecated_kwargs`</b>: The deprecated argument values.
-
-##### Returns:
-
- Decorated function or method.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If date is not in ISO 8601 format, or instructions are empty.
-
-
-
-- - -
-
-### `tf.contrib.framework.arg_scope(list_ops_or_scope, **kwargs)` {#arg_scope}
-
-Stores the default arguments for the given set of list_ops.
-
-For usage, please see examples at top of the file.
-
-##### Args:
-
-
-* <b>`list_ops_or_scope`</b>: List or tuple of operations to set argument scope for or
- a dictionary containing the current scope. When list_ops_or_scope is a
- dict, kwargs must be empty. When list_ops_or_scope is a list or tuple,
- then every op in it needs to be decorated with @add_arg_scope to work.
-* <b>`**kwargs`</b>: keyword=value that will define the defaults for each op in
- list_ops. All the ops need to accept the given set of arguments.
-
-##### Yields:
-
- the current_scope, which is a dictionary of {op: {arg: value}}
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if list_ops is not a list or a tuple.
-* <b>`ValueError`</b>: if any op in list_ops has not been decorated with @add_arg_scope.
-
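-For example (a sketch using `tf.contrib.layers.conv2d`, which is decorated
-with `@add_arg_scope`):
-
-```python
-import tensorflow as tf
-
-layers = tf.contrib.layers
-inputs = tf.zeros([1, 28, 28, 3])
-
-with tf.contrib.framework.arg_scope([layers.conv2d], padding='SAME'):
-  net = layers.conv2d(inputs, 64, [3, 3])                # inherits padding
-  net = layers.conv2d(net, 64, [3, 3], padding='VALID')  # overrides it
-```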
-
-- - -
-
-### `tf.contrib.framework.add_arg_scope(func)` {#add_arg_scope}
-
-Decorates a function with args so it can be used within an arg_scope.
-
-##### Args:
-
-
-* <b>`func`</b>: function to decorate.
-
-##### Returns:
-
- The decorated function `func_with_args()`.
-
-
-- - -
-
-### `tf.contrib.framework.has_arg_scope(func)` {#has_arg_scope}
-
-Checks whether a func has been decorated with @add_arg_scope or not.
-
-##### Args:
-
-
-* <b>`func`</b>: function to check.
-
-##### Returns:
-
- a boolean.
-
-
-- - -
-
-### `tf.contrib.framework.arg_scoped_arguments(func)` {#arg_scoped_arguments}
-
-Returns the list of kwargs that arg_scope can set for a func.
-
-##### Args:
-
-
-* <b>`func`</b>: function which has been decorated with @add_arg_scope.
-
-##### Returns:
-
- a list of kwargs names.
-
-
-
-- - -
-
-### `tf.contrib.framework.add_model_variable(var)` {#add_model_variable}
-
-Adds a variable to the `GraphKeys.MODEL_VARIABLES` collection.
-
-##### Args:
-
-
-* <b>`var`</b>: a variable.
-
-
-- - -
-
-### `tf.train.assert_global_step(global_step_tensor)` {#assert_global_step}
-
-Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.
-
-##### Args:
-
-
-* <b>`global_step_tensor`</b>: `Tensor` to test.
-
-
-- - -
-
-### `tf.contrib.framework.assert_or_get_global_step(graph=None, global_step_tensor=None)` {#assert_or_get_global_step}
-
-Verifies that a global step tensor is valid or gets one if None is given.
-
-If `global_step_tensor` is not None, check that it is a valid global step
-tensor (using `assert_global_step`). Otherwise find a global step tensor using
-`get_global_step` and return it.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph to find the global step tensor for.
-* <b>`global_step_tensor`</b>: The tensor to check for suitability as a global step.
- If None is given (the default), find a global step tensor.
-
-##### Returns:
-
- A tensor suitable as a global step, or `None` if none was provided and none
- was found.
-
-
-- - -
-
-### `tf.contrib.framework.assign_from_checkpoint(model_path, var_list)` {#assign_from_checkpoint}
-
-Creates an operation to assign specific variables from a checkpoint.
-
-##### Args:
-
-
-* <b>`model_path`</b>: The full path to the model checkpoint. To get latest checkpoint
- use `model_path = tf.train.latest_checkpoint(checkpoint_dir)`
-* <b>`var_list`</b>: A list of (possibly partitioned) `Variable` objects
- or a dictionary mapping names in the checkpoint to the
- corresponding variables or list of variables to initialize
- from that checkpoint value. For partitioned Variables, the
- name in the checkpoint must be the full variable, not the
- name of the partitioned variable, e.g. "my_var" rather than
- "my_var/part_4". If empty, returns no_op(), {}.
-
-##### Returns:
-
- the restore_op and the feed_dict that need to be run to restore var_list.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the checkpoint specified at `model_path` is missing one of
- the variables in `var_list`.
-
-
-- - -
-
-### `tf.contrib.framework.assign_from_checkpoint_fn(model_path, var_list, ignore_missing_vars=False, reshape_variables=False)` {#assign_from_checkpoint_fn}
-
-Returns a function that assigns specific variables from a checkpoint.
-
-##### Args:
-
-
-* <b>`model_path`</b>: The full path to the model checkpoint. To get latest checkpoint
- use `model_path = tf.train.latest_checkpoint(checkpoint_dir)`
-* <b>`var_list`</b>: A list of `Variable` objects or a dictionary mapping names in the
- checkpoint to the corresponding variables to initialize. If empty or None,
- it returns no_op(), None.
-* <b>`ignore_missing_vars`</b>: Boolean, if True it would ignore variables missing in
- the checkpoint with a warning instead of failing.
-* <b>`reshape_variables`</b>: Boolean, if True it would automatically reshape variables
- which are of a different shape than the ones stored in the checkpoint but
- which have the same number of elements.
-
-##### Returns:
-
- A function that takes a single argument, a `tf.Session`, that applies the
- assignment operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the checkpoint specified at `model_path` is missing one of
- the variables in `var_list`.
-
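-Typical usage sketch ('/tmp/model' is a hypothetical checkpoint directory):
-
-```python
-import tensorflow as tf
-
-variables_to_restore = tf.contrib.framework.get_variables_to_restore()
-init_fn = tf.contrib.framework.assign_from_checkpoint_fn(
-    tf.train.latest_checkpoint('/tmp/model'), variables_to_restore)
-with tf.Session() as sess:
-  init_fn(sess)
-```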
-
-- - -
-
-### `tf.contrib.framework.assign_from_values(var_names_to_values)` {#assign_from_values}
-
-Creates an assignment operation from a given mapping.
-
-This function provides a mechanism for performing assignment of variables
-to values in a way that does not fill the graph with large assignment values.
-
-##### Args:
-
-
-* <b>`var_names_to_values`</b>: A map from variable names to values.
-
-##### Returns:
-
-
-* <b>`assign_op`</b>: An `Operation` that assigns each of the given variables to the
- requested values.
-* <b>`feed_dict`</b>: The feed dictionary to use when evaluating `assign_op`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any of the given variable names were not found.
-
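-Sketch (assumes a variable named 'fc/weights' already exists in the graph):
-
-```python
-import numpy as np
-import tensorflow as tf
-
-assign_op, feed_dict = tf.contrib.framework.assign_from_values(
-    {'fc/weights': np.zeros([10, 10], dtype=np.float32)})
-with tf.Session() as sess:
-  sess.run(assign_op, feed_dict)
-```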
-
-- - -
-
-### `tf.contrib.framework.assign_from_values_fn(var_names_to_values)` {#assign_from_values_fn}
-
-Returns a function that assigns specific variables from the given values.
-
-This function provides a mechanism for performing assignment of variables
-to values in a way that does not fill the graph with large assignment values.
-
-##### Args:
-
-
-* <b>`var_names_to_values`</b>: A map from variable names to values.
-
-##### Returns:
-
- A function that takes a single argument, a `tf.Session`, that applies the
- assignment operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any of the given variable names were not found.
-
-
-- - -
-
-### `tf.contrib.framework.create_global_step(graph=None)` {#create_global_step}
-
-Create global step tensor in graph.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph in which to create the global step. If missing, use default
- graph.
-
-##### Returns:
-
- Global step tensor.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if global step key is already defined.
-
-
-- - -
-
-### `tf.contrib.framework.filter_variables(var_list, include_patterns=None, exclude_patterns=None, reg_search=True)` {#filter_variables}
-
-Filter a list of variables using regular expressions.
-
-First includes variables according to the list of include_patterns.
-Afterwards, eliminates variables according to the list of exclude_patterns.
-
-For example, one can obtain a list of variables with the weights of all
-convolutional layers (depending on the network definition) by:
-
-```python
-variables = tf.contrib.framework.get_model_variables()
-conv_weight_variables = tf.contrib.framework.filter_variables(
- variables,
- include_patterns=['Conv'],
- exclude_patterns=['biases', 'Logits'])
-```
-
-##### Args:
-
-
-* <b>`var_list`</b>: list of variables.
-* <b>`include_patterns`</b>: list of regular expressions to include. Defaults to None,
- which means all variables pass the include filter.
- A variable is included if it matches any of the include_patterns.
-* <b>`exclude_patterns`</b>: list of regular expressions to exclude. Defaults to None,
- which means no variables are excluded.
- A variable is excluded if it matches any of the exclude_patterns.
-* <b>`reg_search`</b>: boolean. If True (default), performs re.search to find matches
- (i.e. pattern can match any substring of the variable name). If False,
- performs re.match (i.e. regexp should match from the beginning of the
- variable name).
-
-##### Returns:
-
- filtered list of variables.
-
-
-- - -
-
-### `tf.train.get_global_step(graph=None)` {#get_global_step}
-
-Get the global step tensor.
-
-The global step tensor must be an integer variable. We first try to find it
-in the collection `GLOBAL_STEP`, or by name `global_step:0`.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph to find the global step in. If missing, use default graph.
-
-##### Returns:
-
- The global step variable, or `None` if none was found.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the global step tensor has a non-integer type, or if it is not
- a `Variable`.
-
-
-- - -
-
-### `tf.contrib.framework.get_or_create_global_step(graph=None)` {#get_or_create_global_step}
-
-Returns, and creates if necessary, the global step variable.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph in which to create the global step. If missing, use default
- graph.
-
-##### Returns:
-
- the tensor representing the global step variable.
-
-
-- - -
-
-### `tf.contrib.framework.get_local_variables(scope=None, suffix=None)` {#get_local_variables}
-
-Gets the list of local variables, filtered by scope and/or suffix.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the variables to return.
-* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
-
-##### Returns:
-
- a list of variables in collection with scope and suffix.
-
-
-- - -
-
-### `tf.contrib.framework.get_model_variables(scope=None, suffix=None)` {#get_model_variables}
-
-Gets the list of model variables, filtered by scope and/or suffix.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the variables to return.
-* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
-
-##### Returns:
-
- a list of variables in collection with scope and suffix.
-
-
-- - -
-
-### `tf.contrib.framework.get_unique_variable(var_op_name)` {#get_unique_variable}
-
-Gets the variable uniquely identified by that var_op_name.
-
-##### Args:
-
-
-* <b>`var_op_name`</b>: the full name of the variable op, including the scope.
-
-##### Returns:
-
- a tensorflow variable.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if no variable uniquely identified by the name exists.
-
-
-- - -
-
-### `tf.contrib.framework.get_variables_by_name(given_name, scope=None)` {#get_variables_by_name}
-
-Gets the list of variables that were given that name.
-
-##### Args:
-
-
-* <b>`given_name`</b>: name given to the variable without any scope.
-* <b>`scope`</b>: an optional scope for filtering the variables to return.
-
-##### Returns:
-
- a copied list of variables with the given name and scope.
-
-
-- - -
-
-### `tf.contrib.framework.get_variables_by_suffix(suffix, scope=None)` {#get_variables_by_suffix}
-
-Gets the list of variables that end with the given suffix.
-
-##### Args:
-
-
-* <b>`suffix`</b>: suffix for filtering the variables to return.
-* <b>`scope`</b>: an optional scope for filtering the variables to return.
-
-##### Returns:
-
- a copied list of variables with the given suffix and scope.
-
-
-- - -
-
-### `tf.contrib.framework.get_variable_full_name(var)` {#get_variable_full_name}
-
-Returns the full name of a variable.
-
-For normal Variables, this is the same as `var.op.name`. For
-sliced or PartitionedVariables, this name is the same for all the
-slices/partitions. In both cases, this is normally the name used in
-a checkpoint file.
-
-##### Args:
-
-
-* <b>`var`</b>: A `Variable` object.
-
-##### Returns:
-
- A string that is the full name.
-
-
-- - -
-
-### `tf.contrib.framework.get_variables_to_restore(include=None, exclude=None)` {#get_variables_to_restore}
-
-Gets the list of the variables to restore.
-
-##### Args:
-
-
-* <b>`include`</b>: an optional list/tuple of scope strings for filtering which
- variables from the VARIABLES collection to include. If None, all
- variables are included.
-* <b>`exclude`</b>: an optional list/tuple of scope strings for filtering which
- variables from the VARIABLES collection to exclude. If None, no
- variables are excluded.
-
-##### Returns:
-
- a list of variables to restore.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: include or exclude is provided but is not a list or a tuple.
-
-
-- - -
-
-### `tf.contrib.framework.get_variables(scope=None, suffix=None, collection='variables')` {#get_variables}
-
-Gets the list of variables, filtered by scope and/or suffix.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the variables to return. Can be a
- variable scope or a string.
-* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
-* <b>`collection`</b>: the collection to search in. Defaults to
- `GraphKeys.GLOBAL_VARIABLES`.
-
-##### Returns:
-
- a list of variables in collection with scope and suffix.
-
-
-- - -
-
-### `tf.contrib.framework.local_variable(initial_value, validate_shape=True, name=None)` {#local_variable}
-
-Create variable and add it to `GraphKeys.LOCAL_VARIABLES` collection.
-
-##### Args:
-
-
-* <b>`initial_value`</b>: See variables.Variable.__init__.
-* <b>`validate_shape`</b>: See variables.Variable.__init__.
-* <b>`name`</b>: See variables.Variable.__init__.
-
-##### Returns:
-
- New variable.
-
-
-- - -
-
-### `tf.contrib.framework.model_variable(*args, **kwargs)` {#model_variable}
-
-Gets an existing model variable with these parameters or creates a new one.
-
-##### Args:
-
-
-* <b>`name`</b>: the name of the new or existing variable.
-* <b>`shape`</b>: shape of the new or existing variable.
-* <b>`dtype`</b>: type of the new or existing variable (defaults to `DT_FLOAT`).
-* <b>`initializer`</b>: initializer for the variable if one is created.
-* <b>`regularizer`</b>: a (Tensor -> Tensor or None) function; the result of
- applying it on a newly created variable will be added to the collection
- GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
-* <b>`trainable`</b>: If `True` also add the variable to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`collections`</b>: A list of collection names to which the Variable will be added.
- Note that the variable is always also added to the
- `GraphKeys.GLOBAL_VARIABLES` and `GraphKeys.MODEL_VARIABLES` collections.
-* <b>`caching_device`</b>: Optional device string or function describing where the
- Variable should be cached for reading. Defaults to the Variable's
- device.
-* <b>`device`</b>: Optional device to place the variable. It can be a string or a
- function that is called to get the device for the variable.
-* <b>`partitioner`</b>: Optional callable that accepts a fully defined `TensorShape`
- and dtype of the `Variable` to be created, and returns a list of
- partitions for each axis (currently only one axis can be partitioned).
-* <b>`custom_getter`</b>: Callable that allows overwriting the internal
- get_variable method and has to have the same signature.
-
-##### Returns:
-
- The created or existing variable.
-
-
-- - -
-
-### `tf.contrib.framework.variable(*args, **kwargs)` {#variable}
-
-Gets an existing variable with these parameters or creates a new one.
-
-##### Args:
-
-
-* <b>`name`</b>: the name of the new or existing variable.
-* <b>`shape`</b>: shape of the new or existing variable.
-* <b>`dtype`</b>: type of the new or existing variable (defaults to `DT_FLOAT`).
-* <b>`initializer`</b>: initializer for the variable if one is created.
-* <b>`regularizer`</b>: a (Tensor -> Tensor or None) function; the result of
- applying it on a newly created variable will be added to the collection
- GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
-* <b>`trainable`</b>: If `True` also add the variable to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`collections`</b>: A list of collection names to which the Variable will be added.
- If None it would default to `tf.GraphKeys.GLOBAL_VARIABLES`.
-* <b>`caching_device`</b>: Optional device string or function describing where the
- Variable should be cached for reading. Defaults to the Variable's
- device.
-* <b>`device`</b>: Optional device to place the variable. It can be a string or a
- function that is called to get the device for the variable.
-* <b>`partitioner`</b>: Optional callable that accepts a fully defined `TensorShape`
- and dtype of the `Variable` to be created, and returns a list of
- partitions for each axis (currently only one axis can be partitioned).
-* <b>`custom_getter`</b>: Callable that allows overwriting the internal
- get_variable method and has to have the same signature.
-
-##### Returns:
-
- The created or existing variable.
-
-
-- - -
-
-### `class tf.contrib.framework.VariableDeviceChooser` {#VariableDeviceChooser}
-
-Device chooser for variables.
-
-When using a parameter server, it assigns variables to the parameter-server
-tasks in a round-robin fashion. When not using a parameter server, it allows
-GPU or CPU placement.
-- - -
-
-#### `tf.contrib.framework.VariableDeviceChooser.__call__(op)` {#VariableDeviceChooser.__call__}
-
-
-
-
-- - -
-
-#### `tf.contrib.framework.VariableDeviceChooser.__init__(num_tasks=0, job_name='ps', device_type='CPU', device_index=0)` {#VariableDeviceChooser.__init__}
-
-Initialize VariableDeviceChooser.
-
-##### Usage:
-
- To use with 2 parameter servers:
- VariableDeviceChooser(2)
-
- To use without parameter servers:
- VariableDeviceChooser()
- VariableDeviceChooser(device_type='GPU') # For GPU placement
-
-##### Args:
-
-
-* <b>`num_tasks`</b>: number of tasks.
-* <b>`job_name`</b>: String, a name for the parameter server job.
-* <b>`device_type`</b>: Optional device type string (e.g. "CPU" or "GPU")
-* <b>`device_index`</b>: int. Optional device index. If left
- unspecified, the chosen device matches any device index.
-
-
-
-- - -
-
-### `tf.contrib.framework.zero_initializer(ref, use_locking=True, name='zero_initializer')` {#zero_initializer}
-
-Initialize `ref` with all zeros. The `ref` tensor must be uninitialized;
-if it is already initialized, a `ValueError` is raised. This op is intended
-to save memory during initialization.
-
-##### Args:
-
-
-* <b>`ref`</b>: the tensor to be zero-initialized; must be uninitialized.
-* <b>`name`</b>: optional name for this operation.
-
-##### Returns:
-
- `ref`, once initialized.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If ref tensor is initialized.
-
-
-
-- - -
-
-### `tf.contrib.framework.load_checkpoint(filepattern)` {#load_checkpoint}
-
-Returns CheckpointReader for latest checkpoint.
-
-##### Args:
-
-
-* <b>`filepattern`</b>: Directory with checkpoints file or path to checkpoint.
-
-##### Returns:
-
- `CheckpointReader` object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `filepattern` doesn't point to a directory with a
- 'checkpoint' file or to a checkpoint file.
-
-
-- - -
-
-### `tf.contrib.framework.list_variables(checkpoint_dir)` {#list_variables}
-
-Returns list of all variables in the latest checkpoint.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory with checkpoints file or path to checkpoint.
-
-##### Returns:
-
- List of tuples `(name, shape)`.
-
-
-- - -
-
-### `tf.contrib.framework.load_variable(checkpoint_dir, name)` {#load_variable}
-
-Returns a Tensor with the contents of the given variable in the checkpoint.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory with checkpoints file or path to checkpoint.
-* <b>`name`</b>: Name of the tensor to return.
-
-##### Returns:
-
- `Tensor` object.
-
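-Sketch combining the two checkpoint-inspection helpers ('/tmp/model' and
-'fc/weights' are hypothetical):
-
-```python
-import tensorflow as tf
-
-ckpt_dir = '/tmp/model'
-for name, shape in tf.contrib.framework.list_variables(ckpt_dir):
-  print(name, shape)
-weights = tf.contrib.framework.load_variable(ckpt_dir, 'fc/weights')
-```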
-
-- - -
-
-### `tf.contrib.framework.init_from_checkpoint(checkpoint_dir, assignment_map)` {#init_from_checkpoint}
-
-Initializes current variables with tensors loaded from the checkpoint,
-using the given assignment map.
-
-Note: This overrides default initialization ops of specified variables and
-redefines dtype.
-
-##### The assignment map supports the following syntax:
-
- `'checkpoint_scope_name/': 'scope_name/'` - will load all variables in
- current `scope_name` from `checkpoint_scope_name` with matching variable
- names.
- `'checkpoint_scope_name/some_other_variable': 'scope_name/variable_name'` -
- will initialize `scope_name/variable_name` variable
- from `checkpoint_scope_name/some_other_variable`.
- `'scope_variable_name': variable` - will initialize given `tf.Variable`
- object with variable from the checkpoint.
- `'scope_variable_name': list(variable)` - will initialize list of
- partitioned variables with variable from the checkpoint.
- `'/': 'scope_name/'` - will load all variables in current `scope_name` from
- checkpoint's root (e.g. no scope).
-
-Supports loading into partitioned variables, which are represented as
-'<variable>/part_<part #>'.
-
-
-##### Example:
-```python
- # Create variables.
- with tf.variable_scope('test'):
- m = tf.get_variable('my_var')
- with tf.variable_scope('test2'):
- var2 = tf.get_variable('my_var')
- var3 = tf.get_variable(name="my1", shape=[100, 100],
- partitioner=lambda shape, dtype: [5, 1])
- ...
- # Specify which variables to initialize from checkpoint.
- init_from_checkpoint(checkpoint_dir, {
- 'some_var': 'test/my_var',
- 'some_scope/': 'test2/'})
- ...
- # Or use `Variable` objects to identify what to initialize.
- init_from_checkpoint(checkpoint_dir, {
- 'some_scope/var2': var2,
- })
- # Initialize partitioned variables
- init_from_checkpoint(checkpoint_dir, {
- 'some_var_from_ckpt': 'part_var',
- })
- # Or specifying the list of `Variable` objects.
- init_from_checkpoint(checkpoint_dir, {
- 'some_var_from_ckpt': var3._get_variable_list(),
- })
- ...
- # Initialize variables as usual.
- session.run(tf.get_all_variables())
-```
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory with checkpoints file or path to checkpoint.
-* <b>`assignment_map`</b>: Dict, where keys are names of the variables in the
- checkpoint and values are current variables or names of current variables
- (in default graph).
-
-##### Raises:
-
-* <b>`tf.errors.OpError`</b>: If missing checkpoints or tensors in checkpoints.
-* <b>`ValueError`</b>: If missing variables in current graph.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.graph_editor.md b/tensorflow/g3doc/api_docs/python/contrib.graph_editor.md
deleted file mode 100644
index 700af31086..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.graph_editor.md
+++ /dev/null
@@ -1,2054 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Graph Editor (contrib)
-[TOC]
-
-TensorFlow Graph Editor. See the @{$python/contrib.graph_editor} guide.
-
-## Other Functions and Classes
-- - -
-
-### `class tf.contrib.graph_editor.ControlOutputs` {#ControlOutputs}
-
-The control outputs topology.
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.__init__(graph)` {#ControlOutputs.__init__}
-
-Create a dictionary of control-output dependencies.
-
-##### Args:
-
-
-* <b>`graph`</b>: a `tf.Graph`.
-
-##### Returns:
-
- A dictionary where a key is a `tf.Operation` instance and the
- corresponding value is a list of all the ops which have the key
- as one of their control-input dependencies.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: graph is not a `tf.Graph`.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.get(op)` {#ControlOutputs.get}
-
-Return the control outputs of `op`.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.get_all()` {#ControlOutputs.get_all}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.graph` {#ControlOutputs.graph}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.update()` {#ControlOutputs.update}
-
-Update the control outputs if the graph has changed.
-
-
-
-- - -
-
-### `class tf.contrib.graph_editor.OpMatcher` {#OpMatcher}
-
-Graph match class.
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.__call__(op)` {#OpMatcher.__call__}
-
-Evaluate if the op matches or not.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.__init__(positive_filter)` {#OpMatcher.__init__}
-
-Graph match constructor.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.control_input_ops(*args)` {#OpMatcher.control_input_ops}
-
-Add control input matches.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.input_ops(*args)` {#OpMatcher.input_ops}
-
-Add input matches.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.output_ops(*args)` {#OpMatcher.output_ops}
-
-Add output matches.
-
-
-
-- - -
-
-### `class tf.contrib.graph_editor.SubGraphView` {#SubGraphView}
-
-A subgraph view on an existing `tf.Graph`.
-
-An instance of this class is a subgraph view on an existing `tf.Graph`.
-"subgraph" means that it can represent part of the whole `tf.Graph`.
-"view" means that it only provides a passive observation and do not to act
-on the `tf.Graph`. Note that in this documentation, the term "subgraph" is
-often used as substitute to "subgraph view".
-
-A subgraph contains:
-
-* a list of input tensors, accessible via the `inputs` property.
-* a list of output tensors, accessible via the `outputs` property.
-* and the operations in between, accessible via the "ops" property.
-
-A subgraph can be seen as a function F(i0, i1, ...) -> o0, o1, ... It is a
-function which takes some input tensors and returns some
-output tensors. The computation that the function performs is encoded in the
-operations of the subgraph.
-
-The tensors (input or output) can be of two kinds:
-
-- connected: a connected tensor connects to at least one operation contained
-in the subgraph. One example is a subgraph representing a single operation
-and its inputs and outputs: all the input and output tensors of the op
-are "connected".
-- passthrough: a passthrough tensor does not connect to any operation
-contained in the subgraph. One example is a subgraph representing a
-single tensor: this tensor is passthrough. By default a passthrough tensor is
-present both in the input and output tensors of the subgraph. It can however
-be remapped to appear as an input (or output) only.
-
-The input and output tensors can be remapped. For instance, some input tensor
-can be omitted: a subgraph representing an operation with two
-inputs can be remapped to only take one input. Note that this does not change
-the underlying `tf.Graph` at all (remember, it is a view). It means that
-the other input is being ignored, or is being treated as "given".
-The analogy with functions can be extended like this: F(x,y) is the original
-function. Remapping the inputs from [x, y] to just [x] means that the subgraph
-now represents the function F_y(x) (y is "given").
-
-The output tensors can also be remapped. For instance, some output tensor can
-be omitted. Other output tensors can be duplicated as well. As mentioned
-before, this does not change the underlying `tf.Graph` at all.
-The analogy with functions can be extended like this: F(...)->x,y is the
-original function. Remapping the outputs from [x, y] to just [y,y] means that
-the subgraph now represents the function M(F(...)) where M is the function
-M(a,b)->b,b.
-
-It is useful to describe five other kinds of tensors:
-
-* internal: an internal tensor is a tensor connecting operations contained
- in the subgraph. One example in the subgraph representing the two
- operations A and B connected sequentially: -> A -> B ->. The middle arrow
- is an internal tensor.
-* actual input: an input tensor of the subgraph, regardless of whether it is
- listed in "inputs" or not (masked-out).
-* actual output: an output tensor of the subgraph, regardless of whether it is
- listed in "outputs" or not (masked-out).
-* hidden input: an actual input which has been masked-out using an
-  input remapping. In other words, a hidden input is a non-internal tensor
-  not listed as an input tensor and one of whose consumers belongs to
-  the subgraph.
-* hidden output: an actual output which has been masked-out using an output
-  remapping. In other words, a hidden output is a non-internal tensor
-  not listed as an output and one of whose generating operations belongs to
-  the subgraph.
-
-Here are some useful guarantees about an instance of a SubGraphView:
-
-* the input (or output) tensors are not internal.
-* the input (or output) tensors are either "connected" or "passthrough".
-* the passthrough tensors are not connected to any of the operations of
-the subgraph.
-
-Note that there is no guarantee that an operation in a subgraph contributes
-at all to its inputs or outputs. For instance, remapping both the inputs and
-outputs to empty lists will produce a subgraph which still contains all the
-original operations. However, the remove_unused_ops function can be used to
-make a new subgraph view whose operations are connected to at least one of
-the input or output tensors.
-
-An instance of this class is meant to be a lightweight object which is not
-modified in-place by the user. Rather, the user can create new modified
-instances of a given subgraph. In that sense, the class SubGraphView is meant
-to be used like an immutable python object.
-
-A common problem when using views is that they can get out-of-sync with the
-data they observe (in this case, a `tf.Graph`). It is up to the user to
-ensure that this doesn't happen. To stay on the safe side, it is recommended
-that the lifetime of subgraph views be kept very short. One way to achieve
-this is to use subgraphs within a "with make_sgv(...) as sgv:" Python context.
-
-To alleviate the out-of-sync problem, some functions are granted the right to
-modify subgraphs in place. This is typically the case of graph manipulation
-functions which, given some subgraphs as arguments, can modify the underlying
-`tf.Graph`. Since this modification is likely to render the subgraph view
-invalid, those functions can modify the argument in place to reflect the
-change. For instance, calling the function swap_inputs(sgv0, sgv1) will modify
-sgv0 and sgv1 in place to reflect the fact that their inputs have now been
-swapped.
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__bool__()` {#SubGraphView.__bool__}
-
-Allows for implicit boolean conversion.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__copy__()` {#SubGraphView.__copy__}
-
-Create a copy of this subgraph.
-
-Note that this class is a "view", so copying it only creates another view and
-does not copy the underlying part of the `tf.Graph`.
-
-##### Returns:
-
- A new identical instance of the original subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__enter__()` {#SubGraphView.__enter__}
-
-Allow a Python context to minimize the lifetime of a subgraph view.
-
-A subgraph view is meant to be a lightweight and transient object. A short
-lifetime will alleviate the "out-of-sync" issue mentioned earlier. For that
-reason, a SubGraphView instance can be used within a Python context. For
-example:
-
-```python
-from tensorflow.contrib import graph_editor as ge
-with ge.make_sgv(...) as sgv:
-  print(sgv)
-```
-
-##### Returns:
-
- Itself.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__exit__(exc_type, exc_value, traceback)` {#SubGraphView.__exit__}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__init__(inside_ops=(), passthrough_ts=())` {#SubGraphView.__init__}
-
-Create a subgraph containing the given ops and the "passthrough" tensors.
-
-##### Args:
-
-
-* <b>`inside_ops`</b>: an object convertible to a list of `tf.Operation`. This list
-  defines all the operations in the subgraph.
-* <b>`passthrough_ts`</b>: an object convertible to a list of `tf.Tensor`. This list
-  defines all the "passthrough" tensors. A passthrough tensor is a tensor
-  which goes directly from the input of the subgraph to its output, without
-  any intermediate operations. All the non-passthrough tensors are
-  silently ignored.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if inside_ops cannot be converted to a list of `tf.Operation`
- or if `passthrough_ts` cannot be converted to a list of `tf.Tensor`.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__nonzero__()` {#SubGraphView.__nonzero__}
-
-Allows for implicit boolean conversion.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__str__()` {#SubGraphView.__str__}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.connected_inputs` {#SubGraphView.connected_inputs}
-
-The connected input tensors of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.connected_outputs` {#SubGraphView.connected_outputs}
-
-The connected output tensors of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.consumers()` {#SubGraphView.consumers}
-
-Return all the consumers of this subgraph view.
-
-A consumer of a subgraph view is a tf.Operation which is a consumer
-of one of the output tensors and is not in the subgraph.
-
-##### Returns:
-
- A list of `tf.Operation` which are the consumers of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.copy()` {#SubGraphView.copy}
-
-Return a copy of itself.
-
-Note that this class is a "view", so copying it only creates another view and
-does not copy the underlying part of the tf.Graph.
-
-##### Returns:
-
- A new instance identical to the original one.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.find_op_by_name(op_name)` {#SubGraphView.find_op_by_name}
-
-Return the op named op_name.
-
-##### Args:
-
-
-* <b>`op_name`</b>: the name to search for
-
-##### Returns:
-
- The op named op_name.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the op_name could not be found.
-* <b>`AssertionError`</b>: if the name was found multiple times.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.graph` {#SubGraphView.graph}
-
-The underlying `tf.Graph`.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.input_index(t)` {#SubGraphView.input_index}
-
-Find the input index corresponding to the given input tensor t.
-
-##### Args:
-
-
-* <b>`t`</b>: the input tensor of this subgraph view.
-
-##### Returns:
-
- The index in the self.inputs list.
-
-##### Raises:
-
-
-* <b>`Error`</b>: if t is not an input tensor.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.inputs` {#SubGraphView.inputs}
-
-The input tensors of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.is_passthrough(t)` {#SubGraphView.is_passthrough}
-
-Check whether a tensor is passthrough.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.op(op_id)` {#SubGraphView.op}
-
-Get an op by its index.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.ops` {#SubGraphView.ops}
-
-The operations in this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.output_index(t)` {#SubGraphView.output_index}
-
-Find the output index corresponding to the given output tensor t.
-
-##### Args:
-
-
-* <b>`t`</b>: the output tensor of this subgraph view.
-
-##### Returns:
-
- The index in the self.outputs list.
-
-##### Raises:
-
-
-* <b>`Error`</b>: if t is not an output tensor.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.outputs` {#SubGraphView.outputs}
-
-The output tensors of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.passthroughs` {#SubGraphView.passthroughs}
-
-The passthrough tensors, going straight from input to output.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap(new_input_indices=None, new_output_indices=None)` {#SubGraphView.remap}
-
-Remap the inputs and outputs of the subgraph.
-
-Note that this is only modifying the view: the underlying tf.Graph is not
-affected.
-
-##### Args:
-
-
-* <b>`new_input_indices`</b>: an iterable of integers or tf.Tensors
-  representing a mapping between the old inputs and the new ones.
-  Integers must be non-negative and smaller than the number of old inputs.
-  tf.Tensors must belong to the old list of inputs.
-  This mapping can be under-complete and must be without repetitions.
-* <b>`new_output_indices`</b>: an iterable of integers or tf.Tensors
-  representing a mapping between the old outputs and the new ones.
-  Integers must be non-negative and smaller than the number of old outputs.
-  tf.Tensors must belong to the old list of outputs.
-  This mapping can be under-complete and can have repetitions.
-
-##### Returns:
-
- A new modified instance of the original subgraph view with remapped
- inputs and outputs.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_default(remove_input_map=True, remove_output_map=True)` {#SubGraphView.remap_default}
-
-Remap the inputs and/or outputs to the default mapping.
-
-##### Args:
-
-
-* <b>`remove_input_map`</b>: if True the input map is reset to the default one.
-* <b>`remove_output_map`</b>: if True the output map is reset to the default one.
-
-##### Returns:
-
- A new modified instance of the original subgraph view with its
- input and/or output mapping reset to the default one.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_inputs(new_input_indices)` {#SubGraphView.remap_inputs}
-
-Remap the inputs of the subgraph.
-
-If the inputs of the original subgraph are [t0, t1, t2], remapping to [2,0]
-will create a new instance whose inputs is [t2, t0].
-
-Note that this is only modifying the view: the underlying `tf.Graph` is not
-affected.
-
-##### Args:
-
-
-* <b>`new_input_indices`</b>: an iterable of integers or tf.Tensors
- representing a mapping between the old inputs and the new ones.
-  Integers must be non-negative and smaller than the number of old inputs.
- tf.Tensors must belong to the old list of inputs.
- This mapping can be under-complete and must be without repetitions.
-
-##### Returns:
-
- A new modified instance of the original subgraph view with remapped
- inputs.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_outputs(new_output_indices)` {#SubGraphView.remap_outputs}
-
-Remap the output of the subgraph.
-
-If the output of the original subgraph are [t0, t1, t2], remapping to
-[1,1,0] will create a new instance whose outputs is [t1, t1, t0].
-
-Note that this is only modifying the view: the underlying tf.Graph is not
-affected.
-
-##### Args:
-
-
-* <b>`new_output_indices`</b>: an iterable of integers or tf.Tensors
- representing a mapping between the old outputs and the new ones.
-  Integers must be non-negative and smaller than the number of old outputs.
- tf.Tensors must belong to the old list of outputs.
- This mapping can be under-complete and can have repetitions.
-
-##### Returns:
-
- A new modified instance of the original subgraph view with remapped
- outputs.
-
-
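-A minimal sketch of remapping (the tiny graph is illustrative, not from the
-original docs):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-x = tf.placeholder(tf.float32, name="x")
-y = tf.square(x, name="y")
-
-sgv = ge.sgv(y.op)                # inputs: [x], outputs: [y]
-sgv2 = sgv.remap_outputs([0, 0])  # new view with outputs [y, y]
-# The underlying tf.Graph is untouched; only the view changed.
-```
-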
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_outputs_make_unique()` {#SubGraphView.remap_outputs_make_unique}
-
-Remap the outputs so that all the tensors appear only once.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_outputs_to_consumers()` {#SubGraphView.remap_outputs_to_consumers}
-
-Remap the outputs to match the number of consumers.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remove_unused_ops(control_inputs=True)` {#SubGraphView.remove_unused_ops}
-
-Remove unused ops.
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: if True, control inputs are used to detect used ops.
-
-##### Returns:
-
- A new subgraph view which only contains used operations.
-
-
-
-- - -
-
-### `class tf.contrib.graph_editor.Transformer` {#Transformer}
-
-Transform a subgraph into another one.
-
-By default, the constructor creates a transform which copies a subgraph and
-replaces its inputs with placeholders. This behavior can be modified by
-changing the handlers.
-- - -
-
-#### `tf.contrib.graph_editor.Transformer.__call__(sgv, dst_graph, dst_scope, src_scope='', reuse_dst_scope=False)` {#Transformer.__call__}
-
-Execute the transformation.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the source subgraph-view.
-* <b>`dst_graph`</b>: the destination graph.
-* <b>`dst_scope`</b>: the destination scope.
-* <b>`src_scope`</b>: the source scope, which specifies the path from which the
-  relative paths of the transformed nodes are computed. For instance, if
-  src_scope is a/ and dst_scope is b/, then the node a/x/y will have a
-  relative path of x/y and will be transformed into b/x/y.
-* <b>`reuse_dst_scope`</b>: if True the dst_scope is re-used if it already exists.
- Otherwise, the scope is given a unique name based on the one given
- by appending an underscore followed by a digit (default).
-
-##### Returns:
-
- A tuple `(sgv, info)` where:
- `sgv` is the transformed subgraph view;
- `info` is an instance of TransformerInfo containing
- information about the transform, including mapping between
- original and transformed tensors and operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the arguments are invalid.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.Transformer.__init__()` {#Transformer.__init__}
-
-Transformer constructor.
-
-The following members can be modified:
-transform_op_handler: handle the transformation of a `tf.Operation`.
- This handler defaults to a simple copy.
-assign_collections_handler: handle the assignment of collections.
- This handler defaults to assigning new collections created under the
- given name-scope.
-transform_external_input_handler: handle the transform of the inputs to
- the given subgraph. This handler defaults to creating placeholders
- instead of the ops just before the input tensors of the subgraph.
-transform_external_hidden_input_handler: handle the transform of the
-  hidden inputs of the subgraph, that is, the inputs which are not listed
-  in sgv.inputs. This handler defaults to a transform which keeps the same
-  input if the source and destination graphs are the same, and otherwise
-  uses placeholders.
-transform_original_op_handler: handle the transform of original_op. This
-  handler defaults to transforming original_op only if it is in the
-  subgraph; otherwise it is ignored.
-
-
-
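-A hedged sketch of the default transform, copying a subgraph into another
-graph (the graph and scope names are illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-graph = tf.Graph()
-with graph.as_default():
-  x = tf.placeholder(tf.float32, name="x")
-  y = tf.square(x, name="y")
-
-dst_graph = tf.Graph()
-transformer = ge.Transformer()
-# Copy the subgraph around y into dst_graph under the "copy/" scope; by
-# default its external inputs become placeholders in the destination graph.
-copied_sgv, info = transformer(ge.sgv(y.op), dst_graph, dst_scope="copy/")
-print(info.transformed(y.op))  # the copied operation in dst_graph
-```
-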
-- - -
-
-### `class tf.contrib.graph_editor.TransformerInfo` {#TransformerInfo}
-
-"Contains information about the result of a transform operation.
-- - -
-
-#### `tf.contrib.graph_editor.TransformerInfo.__init__(info)` {#TransformerInfo.__init__}
-
-Constructor.
-
-##### Args:
-
-
-* <b>`info`</b>: an instance of Transformer._TmpInfo containing various internal
- information about the transform operation.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.TransformerInfo.__str__()` {#TransformerInfo.__str__}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.TransformerInfo.original(transformed, missing_fn=None)` {#TransformerInfo.original}
-
-Return the original op/tensor corresponding to the transformed one.
-
-Note that the output of this function mimics the hierarchy
-of its input argument `transformed`.
-Given an iterable, it returns a list. Given an operation or a tensor,
-it will return an operation or a tensor.
-
-##### Args:
-
-
-* <b>`transformed`</b>: the transformed tensor/operation.
-* <b>`missing_fn`</b>: function handling the case where the counterpart
- cannot be found. By default, None is returned.
-
-##### Returns:
-
- the original tensor/operation (or None if no match is found).
-
-
-- - -
-
-#### `tf.contrib.graph_editor.TransformerInfo.transformed(original, missing_fn=None)` {#TransformerInfo.transformed}
-
-Return the transformed op/tensor corresponding to the original one.
-
-Note that the output of this function mimics the hierarchy
-of its input argument `original`.
-Given an iterable, it returns a list. Given an operation or a tensor,
-it will return an operation or a tensor.
-
-##### Args:
-
-
-* <b>`original`</b>: the original tensor/operation.
-* <b>`missing_fn`</b>: function handling the case where the counterpart
- cannot be found. By default, None is returned.
-
-##### Returns:
-
- the transformed tensor/operation (or None if no match is found).
-
-
-
-- - -
-
-### `tf.contrib.graph_editor.add_control_inputs(op, cops)` {#add_control_inputs}
-
-Add the control inputs cops to op.
-
-Warning: this function is directly manipulating the internals of the tf.Graph.
-
-##### Args:
-
-
-* <b>`op`</b>: a tf.Operation to which the control inputs are added.
-* <b>`cops`</b>: an object convertible to a list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if op is not a tf.Operation
-* <b>`ValueError`</b>: if any cop in cops is already a control input of op.
-
-
-- - -
-
-### `tf.contrib.graph_editor.assign_renamed_collections_handler(info, elem, elem_)` {#assign_renamed_collections_handler}
-
-Add the transformed elem to the (renamed) collections of elem.
-
-A collection is renamed only if it is not a known key, as described in
-`tf.GraphKeys`.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`elem`</b>: the original element (`tf.Tensor` or `tf.Operation`)
-* <b>`elem_`</b>: the transformed element
-
-
-- - -
-
-### `tf.contrib.graph_editor.bypass(sgv)` {#bypass}
-
-Bypass the given subgraph by connecting its inputs to its outputs.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be bypassed. This argument is converted to a
-  subgraph using the same rules as the function subgraph.make_view.
- Note that sgv is modified in place.
-
-##### Returns:
-
- A tuple `(sgv, detached_inputs)` where:
- `sgv` is a new subgraph view of the bypassed subgraph;
- `detached_inputs` is a list of the created input placeholders.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-  the same rules as the function subgraph.make_view.
-
-
-- - -
-
-### `tf.contrib.graph_editor.can_be_regex(obj)` {#can_be_regex}
-
-Return True if obj can be turned into a regular expression.
-
-
-- - -
-
-### `tf.contrib.graph_editor.check_cios(control_inputs=False, control_outputs=None, control_ios=None)` {#check_cios}
-
-Do various checks on control_inputs and control_outputs.
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of util.ControlOutputs or None. If not None,
- control outputs are enabled.
-* <b>`control_ios`</b>: An instance of util.ControlOutputs or None. If not None, both
-  control inputs and control outputs are enabled. This is equivalent to
-  setting control_inputs to True and control_outputs to the
-  util.ControlOutputs instance.
-
-##### Returns:
-
- A tuple `(control_inputs, control_outputs)` where:
- `control_inputs` is a boolean indicating whether to use control inputs.
- `control_outputs` is an instance of util.ControlOutputs or None
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if control_inputs is an instance of util.ControlOutputs but
- control_outputs is not None
-* <b>`TypeError`</b>: if control_outputs is not None and is not a util.ControlOutputs.
-
-
-- - -
-
-### `tf.contrib.graph_editor.compute_boundary_ts(ops)` {#compute_boundary_ts}
-
-Compute the tensors at the boundary of a set of ops.
-
-This function looks at all the tensors connected to the given ops (in/out)
-and classifies them into three categories:
-1) input tensors: tensors whose generating operation is not in ops.
-2) output tensors: tensors whose consumer operations are not in ops.
-3) inside tensors: tensors which are neither input nor output tensors.
-
-Note that a tensor can be both an inside tensor and an output tensor if it is
-consumed by operations both outside and inside of `ops`.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of tf.Operation.
-
-##### Returns:
-
- A tuple `(outside_input_ts, outside_output_ts, inside_ts)` where:
- `outside_input_ts` is a Python list of input tensors;
- `outside_output_ts` is a python list of output tensors;
- `inside_ts` is a python list of inside tensors.
- Since a tensor can be both an inside tensor and an output tensor,
- `outside_output_ts` and `inside_ts` might intersect.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of tf.Operation.
-
-
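-A small sketch of the classification (the three-op chain is illustrative,
-not from the original docs):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-x = tf.placeholder(tf.float32, name="x")
-h = tf.square(x, name="h")
-y = tf.negative(h, name="y")
-
-inputs, outputs, inside = ge.compute_boundary_ts([h.op, y.op])
-# Expected: inputs == [x], outputs == [y], inside == [h].
-```
-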
-- - -
-
-### `tf.contrib.graph_editor.connect(sgv0, sgv1, disconnect_first=False)` {#connect}
-
-Connect the outputs of sgv0 to the inputs of sgv1.
-
-##### Args:
-
-
-* <b>`sgv0`</b>: the first subgraph to have its outputs swapped. This argument is
- converted to a subgraph using the same rules as the function
- subgraph.make_view.
- Note that sgv0 is modified in place.
-* <b>`sgv1`</b>: the second subgraph to have its outputs swapped. This argument is
- converted to a subgraph using the same rules as the function
- subgraph.make_view.
- Note that sgv1 is modified in place.
-* <b>`disconnect_first`</b>: if True the current outputs of sgv0 are disconnected.
-
-##### Returns:
-
- A tuple `(sgv0, sgv1)` of the now connected subgraphs.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv0 or sgv1 cannot be converted to a SubGraphView using
-  the same rules as the function subgraph.make_view.
-
-
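-A hedged sketch of connecting two single-op subgraphs (the graph and names
-are illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-a = tf.constant(1.0, name="a")
-b = tf.square(a, name="b")                # subgraph 0: outputs [b]
-c = tf.placeholder(tf.float32, name="c")
-d = tf.negative(c, name="d")              # subgraph 1: inputs [c]
-
-sgv0 = ge.sgv(b.op)
-sgv1 = ge.sgv(d.op)
-ge.connect(sgv0, sgv1)  # d now consumes b instead of the placeholder c
-```
-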
-- - -
-
-### `tf.contrib.graph_editor.copy_op_handler(info, op, copy_shape=True)` {#copy_op_handler}
-
-Copy a `tf.Operation`.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`op`</b>: the `tf.Operation` to be copied.
-* <b>`copy_shape`</b>: also copy the shape of the tensor
-
-##### Returns:
-
-  A `(op, op_outputs)` tuple containing the transformed op and its outputs.
-
-
-- - -
-
-### `tf.contrib.graph_editor.copy_with_input_replacements(sgv, replacement_ts, dst_graph=None, dst_scope='', src_scope='', reuse_dst_scope=False)` {#copy_with_input_replacements}
-
-Copy a subgraph, replacing some of its inputs.
-
-Note a replacement only happens if the tensor to be replaced
-is an input of the given subgraph. The inputs of a subgraph can
-be queried using sgv.inputs.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the source subgraph-view. This argument is converted to a subgraph
- using the same rules as the function subgraph.make_view.
-* <b>`replacement_ts`</b>: dictionary mapping from original tensors to the
- replaced one.
-* <b>`dst_graph`</b>: the destination graph.
-* <b>`dst_scope`</b>: the destination scope.
-* <b>`src_scope`</b>: the source scope.
-* <b>`reuse_dst_scope`</b>: if True the dst_scope is re-used if it already exists.
- Otherwise, the scope is given a unique name based on the one given
- by appending an underscore followed by a digit (default).
-
-##### Returns:
-
- A tuple `(sgv, info)` where:
- `sgv` is the transformed subgraph view;
- `info` is an instance of TransformerInfo containing
- information about the transform, including mapping between
- original and transformed tensors and operations.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if dst_graph is not a tf.Graph.
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
- the same rules as the function subgraph.make_view.
-
-
-- - -
-
-### `tf.contrib.graph_editor.detach(sgv, control_inputs=False, control_outputs=None, control_ios=None)` {#detach}
-
-Detach both the inputs and the outputs of a subgraph view.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
- Note that sgv is modified in place.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of util.ControlOutputs or None. If not None,
- control outputs are enabled.
-* <b>`control_ios`</b>: An instance of util.ControlOutputs or None. If not None, both
-  control inputs and control outputs are enabled. This is equivalent to
-  setting control_inputs to True and control_outputs to the
-  util.ControlOutputs instance.
-
-##### Returns:
-
-  A tuple `(sgv, detached_inputs, detached_outputs)` where:
-  `sgv` is a new subgraph view of the detached subgraph;
-  `detached_inputs` is a list of the created input placeholders;
-  `detached_outputs` is a list of the created output placeholders.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-  the same rules as the function subgraph.make_view.
-
-
-- - -
-
-### `tf.contrib.graph_editor.detach_control_inputs(sgv)` {#detach_control_inputs}
-
-Detach all the external control inputs of the subgraph sgv.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
-
-
-- - -
-
-### `tf.contrib.graph_editor.detach_control_outputs(sgv, control_outputs)` {#detach_control_outputs}
-
-Detach all the external control outputs of the subgraph sgv.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
-* <b>`control_outputs`</b>: a util.ControlOutputs instance.
-
-
-- - -
-
-### `tf.contrib.graph_editor.detach_inputs(sgv, control_inputs=False)` {#detach_inputs}
-
-Detach the inputs of a subgraph view.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
- Note that sgv is modified in place.
-* <b>`control_inputs`</b>: if True, control inputs are also detached.
-
-##### Returns:
-
- A tuple `(sgv, input_placeholders)` where
- `sgv` is a new subgraph view of the detached subgraph;
- `input_placeholders` is a list of the created input placeholders.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-  the same rules as the function subgraph.make_view.
-
-
-- - -
-
-### `tf.contrib.graph_editor.detach_outputs(sgv, control_outputs=None)` {#detach_outputs}
-
-Detach the outputs of a subgraph view.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
- Note that sgv is modified in place.
-* <b>`control_outputs`</b>: a util.ControlOutputs instance or None. If not None the
- control outputs are also detached.
-
-##### Returns:
-
- A tuple `(sgv, output_placeholders)` where
- `sgv` is a new subgraph view of the detached subgraph;
- `output_placeholders` is a list of the created output placeholders.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-  the same rules as the function subgraph.make_view.
-
-
-- - -
-
-### `tf.contrib.graph_editor.filter_ops(ops, positive_filter)` {#filter_ops}
-
-Get the ops passing the given filter.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of tf.Operation.
-* <b>`positive_filter`</b>: a function deciding whether to keep an operation or not.
-  If `True`, all the operations are returned.
-
-##### Returns:
-
- A list of selected tf.Operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of tf.Operation.
-
-
-- - -
-
-### `tf.contrib.graph_editor.filter_ops_from_regex(ops, regex)` {#filter_ops_from_regex}
-
-Get all the operations that match the given regex.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of `tf.Operation`.
-* <b>`regex`</b>: a regular expression matching the operation's name.
- For example, `"^foo(/.*)?$"` will match all the operations in the "foo"
- scope.
-
-##### Returns:
-
- A list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of `tf.Operation`.
-
-
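-A small sketch (the scope name "foo" and the ops are illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-with tf.name_scope("foo"):
-  a = tf.constant(1.0, name="a")
-b = tf.constant(2.0, name="b")
-
-# A tf.Graph is convertible to the list of all its operations.
-foo_ops = ge.filter_ops_from_regex(tf.get_default_graph(), r"^foo(/.*)?$")
-# Expected: only a's op ("foo/a") matches.
-```
-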
-- - -
-
-### `tf.contrib.graph_editor.filter_ts(ops, positive_filter)` {#filter_ts}
-
-Get all the tensors which are input or output of an op in ops.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of `tf.Operation`.
-* <b>`positive_filter`</b>: a function deciding whether to keep a tensor or not.
- If `True`, all the tensors are returned.
-
-##### Returns:
-
- A list of `tf.Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of `tf.Operation`.
-
-
-- - -
-
-### `tf.contrib.graph_editor.filter_ts_from_regex(ops, regex)` {#filter_ts_from_regex}
-
-Get all the tensors linked to ops that match the given regex.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of tf.Operation.
-* <b>`regex`</b>: a regular expression matching the tensors' name.
- For example, "^foo(/.*)?:\d+$" will match all the tensors in the "foo"
- scope.
-
-##### Returns:
-
- A list of tf.Tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of tf.Operation.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_backward_walk_ops(seed_ops, inclusive=True, within_ops=None, stop_at_ts=(), control_inputs=False)` {#get_backward_walk_ops}
-
-Do a backward graph walk and return all the visited ops.
-
-##### Args:
-
-
-* <b>`seed_ops`</b>: an iterable of operations from which the backward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the generators of those tensors.
-* <b>`inclusive`</b>: if True the given seed_ops are also part of the resulting set.
-* <b>`within_ops`</b>: an iterable of `tf.Operation` within which the search is
- restricted. If `within_ops` is `None`, the search is performed within
- the whole graph.
-* <b>`stop_at_ts`</b>: an iterable of tensors at which the graph walk stops.
-* <b>`control_inputs`</b>: if True, control inputs will be used while moving backward.
-
-##### Returns:
-
- A Python set of all the `tf.Operation` behind `seed_ops`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `seed_ops` or `within_ops` cannot be converted to a list of
- `tf.Operation`.
-
-
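-A minimal sketch of a backward walk (the graph is illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-x = tf.placeholder(tf.float32, name="x")
-loss = tf.reduce_sum(tf.square(x), name="loss")
-
-# Every op that loss transitively depends on, including loss itself.
-ops = ge.get_backward_walk_ops([loss.op], inclusive=True)
-print(sorted(op.name for op in ops))
-```
-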
-- - -
-
-### `tf.contrib.graph_editor.get_consuming_ops(ts)` {#get_consuming_ops}
-
-Return all the consuming ops of the tensors in ts.
-
-##### Args:
-
-
-* <b>`ts`</b>: a list of `tf.Tensor`
-
-##### Returns:
-
- A list of all the consuming `tf.Operation` of the tensors in `ts`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ts cannot be converted to a list of `tf.Tensor`.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_forward_walk_ops(seed_ops, inclusive=True, within_ops=None, stop_at_ts=(), control_outputs=None)` {#get_forward_walk_ops}
-
-Do a forward graph walk and return all the visited ops.
-
-##### Args:
-
-
-* <b>`seed_ops`</b>: an iterable of operations from which the forward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the consumers of those tensors.
-* <b>`inclusive`</b>: if True the given seed_ops are also part of the resulting set.
-* <b>`within_ops`</b>: an iterable of `tf.Operation` within which the search is
- restricted. If `within_ops` is `None`, the search is performed within
- the whole graph.
-* <b>`stop_at_ts`</b>: an iterable of tensors at which the graph walk stops.
-* <b>`control_outputs`</b>: a `util.ControlOutputs` instance or None.
- If not `None`, it will be used while walking the graph forward.
-
-##### Returns:
-
- A Python set of all the `tf.Operation` ahead of `seed_ops`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `seed_ops` or `within_ops` cannot be converted to a list of
- `tf.Operation`.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_generating_ops(ts)` {#get_generating_ops}
-
-Return all the generating ops of the tensors in `ts`.
-
-##### Args:
-
-
-* <b>`ts`</b>: a list of `tf.Tensor`
-
-##### Returns:
-
- A list of all the generating `tf.Operation` of the tensors in `ts`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `ts` cannot be converted to a list of `tf.Tensor`.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_name_scope_ops(ops, scope)` {#get_name_scope_ops}
-
-Get all the operations under the given scope path.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of tf.Operation.
-* <b>`scope`</b>: a scope path.
-
-##### Returns:
-
- A list of tf.Operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of tf.Operation.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_ops_ios(ops, control_inputs=False, control_outputs=None, control_ios=None)` {#get_ops_ios}
-
-Return all the `tf.Operation` which are connected to an op in ops.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of `tf.Operation`.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of `util.ControlOutputs` or `None`. If not
- `None`, control outputs are enabled.
-* <b>`control_ios`</b>: An instance of `util.ControlOutputs` or `None`. If not `None`,
-  both control inputs and control outputs are enabled. This is equivalent to
-  setting `control_inputs` to `True` and `control_outputs` to the
-  `util.ControlOutputs` instance.
-
-##### Returns:
-
- All the `tf.Operation` surrounding the given ops.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `ops` cannot be converted to a list of `tf.Operation`.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_tensors(graph)` {#get_tensors}
-
-Get all the tensors which are input or output of an op in the graph.
-
-##### Args:
-
-
-* <b>`graph`</b>: a `tf.Graph`.
-
-##### Returns:
-
- A list of `tf.Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if graph is not a `tf.Graph`.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_walks_intersection_ops(forward_seed_ops, backward_seed_ops, forward_inclusive=True, backward_inclusive=True, within_ops=None, control_inputs=False, control_outputs=None, control_ios=None)` {#get_walks_intersection_ops}
-
-Return the intersection of a forward and a backward walk.
-
-##### Args:
-
-
-* <b>`forward_seed_ops`</b>: an iterable of operations from which the forward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the consumers of those tensors.
-* <b>`backward_seed_ops`</b>: an iterable of operations from which the backward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the generators of those tensors.
-* <b>`forward_inclusive`</b>: if True the given forward_seed_ops are also part of the
- resulting set.
-* <b>`backward_inclusive`</b>: if True the given backward_seed_ops are also part of the
- resulting set.
-* <b>`within_ops`</b>: an iterable of tf.Operation within which the search is
- restricted. If within_ops is None, the search is performed within
- the whole graph.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of util.ControlOutputs or None. If not None,
- control outputs are enabled.
-* <b>`control_ios`</b>: An instance of util.ControlOutputs or None. If not None, both
-  control inputs and control outputs are enabled. This is equivalent to
-  setting control_inputs to True and control_outputs to the
-  util.ControlOutputs instance.
-
-##### Returns:
-
- A Python set of all the tf.Operation in the intersection of a forward and a
- backward walk.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `forward_seed_ops` or `backward_seed_ops` or `within_ops`
- cannot be converted to a list of `tf.Operation`.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_walks_union_ops(forward_seed_ops, backward_seed_ops, forward_inclusive=True, backward_inclusive=True, within_ops=None, control_inputs=False, control_outputs=None, control_ios=None)` {#get_walks_union_ops}
-
-Return the union of a forward and a backward walk.
-
-##### Args:
-
-
-* <b>`forward_seed_ops`</b>: an iterable of operations from which the forward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the consumers of those tensors.
-* <b>`backward_seed_ops`</b>: an iterable of operations from which the backward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the generators of those tensors.
-* <b>`forward_inclusive`</b>: if True the given forward_seed_ops are also part of the
- resulting set.
-* <b>`backward_inclusive`</b>: if True the given backward_seed_ops are also part of the
- resulting set.
-* <b>`within_ops`</b>: restrict the search within those operations. If within_ops is
- None, the search is done within the whole graph.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of util.ControlOutputs or None. If not None,
- control outputs are enabled.
-* <b>`control_ios`</b>: An instance of util.ControlOutputs or None. If not None, both
-  control inputs and control outputs are enabled. This is equivalent to
-  setting control_inputs to True and control_outputs to the
-  util.ControlOutputs instance.
-
-##### Returns:
-
- A Python set of all the tf.Operation in the union of a forward and a
- backward walk.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if forward_seed_ops or backward_seed_ops or within_ops cannot be
- converted to a list of tf.Operation.
-
-
-- - -
-
-### `tf.contrib.graph_editor.get_within_boundary_ops(ops, seed_ops, boundary_ops=(), inclusive=True, control_inputs=False, control_outputs=None, control_ios=None)` {#get_within_boundary_ops}
-
-Return all the `tf.Operation` within the given boundary.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of `tf.Operation`. Those ops define the
- set in which to perform the operation (if a `tf.Graph` is given, it
- will be converted to the list of all its operations).
-* <b>`seed_ops`</b>: the operations from which to start expanding.
-* <b>`boundary_ops`</b>: the ops forming the boundary.
-* <b>`inclusive`</b>: if `True`, the result will also include the boundary ops.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of `util.ControlOutputs` or `None`. If not
- `None`, control outputs are enabled.
-* <b>`control_ios`</b>: An instance of `util.ControlOutputs` or `None`. If not
-  `None`, both control inputs and control outputs are enabled. This is
-  equivalent to setting control_inputs to True and control_outputs to
-  the `util.ControlOutputs` instance.
-
-##### Returns:
-
- All the `tf.Operation` surrounding the given ops.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `ops` or `seed_ops` cannot be converted to a list of
- `tf.Operation`.
-* <b>`ValueError`</b>: if the boundary is intersecting with the seeds.
-
-
-- - -
-
-### `tf.contrib.graph_editor.graph_replace(target_ts, replacement_ts, dst_scope='', src_scope='', reuse_dst_scope=False)` {#graph_replace}
-
-Create a new graph which computes the targets from the replaced Tensors.
-
-##### Args:
-
-
-* <b>`target_ts`</b>: a single tf.Tensor or an iterable of tf.Tensor.
-* <b>`replacement_ts`</b>: dictionary mapping from original tensors to replaced tensors
-* <b>`dst_scope`</b>: the destination scope.
-* <b>`src_scope`</b>: the source scope.
-* <b>`reuse_dst_scope`</b>: if True the dst_scope is re-used if it already exists.
- Otherwise, the scope is given a unique name based on the one given
- by appending an underscore followed by a digit (default).
-
-##### Returns:
-
- A single tf.Tensor or a list of target tf.Tensor, depending on
- the type of the input argument `target_ts`.
- The returned tensors are recomputed using the tensors from replacement_ts.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the targets are not connected to replacement_ts.
-
-
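-A hedged sketch of replacing an input tensor (the constants and names are
-illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-a = tf.constant(1.0, name="a")
-b = tf.constant(2.0, name="b")
-out = tf.add(a, b, name="out")
-
-new_a = tf.constant(10.0, name="new_a")
-new_out = ge.graph_replace(out, {a: new_a})  # recompute out with a -> new_a
-
-with tf.Session() as sess:
-  print(sess.run(new_out))  # expected: 12.0
-```
-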
-- - -
-
-### `tf.contrib.graph_editor.keep_t_if_possible_handler(info, t)` {#keep_t_if_possible_handler}
-
-Transform a tensor into itself (identity) if possible.
-
-This handler transforms a tensor into itself if the source and destination
-graphs are the same. Otherwise it will create a placeholder.
-This handler is typically used to transform hidden input tensors.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`t`</b>: the tensor to transform (into itself, or a placeholder if needed).
-
-##### Returns:
-
-  The transformed tensor: t itself, or the tensor generated by a newly
-  created placeholder.
-
-
-- - -
-
-### `tf.contrib.graph_editor.make_list_of_op(ops, check_graph=True, allow_graph=True, ignore_ts=False)` {#make_list_of_op}
-
-Convert ops to a list of `tf.Operation`.
-
-##### Args:
-
-
-* <b>`ops`</b>: can be an iterable of `tf.Operation`, a `tf.Graph` or a single
- operation.
-* <b>`check_graph`</b>: if `True` check if all the operations belong to the same graph.
-* <b>`allow_graph`</b>: if `False` a `tf.Graph` cannot be converted.
-* <b>`ignore_ts`</b>: if True, silently ignore `tf.Tensor`.
-
-##### Returns:
-
- A newly created list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of `tf.Operation` or,
- if `check_graph` is `True`, if all the ops do not belong to the
- same graph.
-
-
-- - -
-
-### `tf.contrib.graph_editor.make_list_of_t(ts, check_graph=True, allow_graph=True, ignore_ops=False)` {#make_list_of_t}
-
-Convert ts to a list of `tf.Tensor`.
-
-##### Args:
-
-
-* <b>`ts`</b>: can be an iterable of `tf.Tensor`, a `tf.Graph` or a single tensor.
-* <b>`check_graph`</b>: if `True` check if all the tensors belong to the same graph.
-* <b>`allow_graph`</b>: if `False` a `tf.Graph` cannot be converted.
-* <b>`ignore_ops`</b>: if `True`, silently ignore `tf.Operation`.
-
-##### Returns:
-
- A newly created list of `tf.Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `ts` cannot be converted to a list of `tf.Tensor` or,
- if `check_graph` is `True`, if all the ops do not belong to the same graph.
-
-
-- - -
-
-### `tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape(dtype, shape=None, scope=None)` {#make_placeholder_from_dtype_and_shape}
-
-Create a tf.placeholder for the Graph Editor.
-
-Note that the correct graph scope must be set by the calling function.
-The placeholder is named using the function placeholder_name (with no
-tensor argument).
-
-##### Args:
-
-
-* <b>`dtype`</b>: the tensor type.
-* <b>`shape`</b>: the tensor shape (optional).
-* <b>`scope`</b>: absolute scope within which to create the placeholder. None
- means that the scope of t is preserved. "" means the root scope.
-
-##### Returns:
-
- A newly created tf.placeholder.
-
-
-- - -
-
-### `tf.contrib.graph_editor.make_placeholder_from_tensor(t, scope=None)` {#make_placeholder_from_tensor}
-
-Create a `tf.placeholder` for the Graph Editor.
-
-Note that the correct graph scope must be set by the calling function.
-
-##### Args:
-
-
-* <b>`t`</b>: a `tf.Tensor` whose name will be used to create the placeholder
- (see function placeholder_name).
-* <b>`scope`</b>: absolute scope within which to create the placeholder. None
- means that the scope of `t` is preserved. `""` means the root scope.
-
-##### Returns:
-
- A newly created `tf.placeholder`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `t` is neither `None` nor a `tf.Tensor`.
-
-
-- - -
-
-### `tf.contrib.graph_editor.make_regex(obj)` {#make_regex}
-
-Return a compiled regular expression.
-
-##### Args:
-
-
-* <b>`obj`</b>: a string or a regular expression.
-
-##### Returns:
-
- A compiled regular expression.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if obj could not be converted to a regular expression.
-
-
-- - -
-
-### `tf.contrib.graph_editor.make_view(*args, **kwargs)` {#make_view}
-
-Create a SubGraphView from selected operations and passthrough tensors.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not) or 2) (array of)
-  `tf.Operation` 3) (array of) `tf.Tensor`. Those objects will be converted
-  into a list of operations and a list of candidates for passthrough tensors.
-* <b>`**kwargs`</b>: keyword graph is used 1) to check that the ops and ts are from
-  the correct graph 2) for regular expression queries.
-
-##### Returns:
-
- A subgraph view.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Tensor`
- or an (array of) `tf.Operation` or a string or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected.
-
-
-- - -
-
-### `tf.contrib.graph_editor.make_view_from_scope(scope, graph)` {#make_view_from_scope}
-
-Make a subgraph from a name scope.
-
-##### Args:
-
-
-* <b>`scope`</b>: the name of the scope.
-* <b>`graph`</b>: the `tf.Graph`.
-
-##### Returns:
-
- A subgraph view representing the given scope.
-
-
-- - -
-
-### `tf.contrib.graph_editor.op_type(op_types, op=None)` {#op_type}
-
-Check if an op is of the given type.
-
-##### Args:
-
-
-* <b>`op_types`</b>: tuple of strings containing the types to check against.
- For instance: ("Add", "Const")
-* <b>`op`</b>: the operation to check (or None).
-
-##### Returns:
-
-  If op is not None, returns True if the op is of the correct type.
-  If op is None, returns a lambda function which does the type checking.
-
-
-- - -
-
-### `tf.contrib.graph_editor.ph(dtype, shape=None, scope=None)` {#ph}
-
-Create a tf.placeholder for the Graph Editor.
-
-Note that the correct graph scope must be set by the calling function.
-The placeholder is named using the function placeholder_name (with no
-tensor argument).
-
-##### Args:
-
-
-* <b>`dtype`</b>: the tensor type.
-* <b>`shape`</b>: the tensor shape (optional).
-* <b>`scope`</b>: absolute scope within which to create the placeholder. None
- means that the scope of t is preserved. "" means the root scope.
-
-##### Returns:
-
- A newly created tf.placeholder.
-
-
-- - -
-
-### `tf.contrib.graph_editor.placeholder_name(t=None, scope=None)` {#placeholder_name}
-
-Create placeholder name for the graph editor.
-
-##### Args:
-
-
-* <b>`t`</b>: optional tensor on which the placeholder operation's name will be
-  based.
-* <b>`scope`</b>: absolute scope with which to prefix the placeholder's name. None
- means that the scope of t is preserved. "" means the root scope.
-
-##### Returns:
-
-  A new placeholder name prefixed by "geph". Note that "geph" stands for
-  Graph Editor PlaceHolder. This convention makes it easy to quickly identify
-  the placeholders generated by the Graph Editor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if t is neither None nor a tf.Tensor.
-
-
-- - -
-
-### `tf.contrib.graph_editor.remove_control_inputs(op, cops)` {#remove_control_inputs}
-
-Remove the control inputs cops from op.
-
-Warning: this function is directly manipulating the internals of the
-`tf.Graph`.
-
-##### Args:
-
-
-* <b>`op`</b>: a `tf.Operation` from which to remove the control inputs.
-* <b>`cops`</b>: an object convertible to a list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if op is not a `tf.Operation`.
-* <b>`ValueError`</b>: if any cop in cops is not a control input of op.
-
-
-- - -
-
-### `tf.contrib.graph_editor.replace_t_with_placeholder_handler(info, t)` {#replace_t_with_placeholder_handler}
-
-Transform a tensor into a placeholder tensor.
-
-This handler is typically used to transform a subgraph input tensor into a
-placeholder.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`t`</b>: tensor whose input must be transformed into a place holder.
-
-##### Returns:
-
- The tensor generated by the newly created place holder.
-
-
-- - -
-
-### `tf.contrib.graph_editor.reroute_inputs(sgv0, sgv1)` {#reroute_inputs}
-
-Re-route all the inputs of sgv0 to sgv1.
-
-
-- - -
-
-### `tf.contrib.graph_editor.reroute_ios(sgv0, sgv1)` {#reroute_ios}
-
-Re-route the inputs and outputs of sgv0 to sgv1 (see _reroute).
-
-
-- - -
-
-### `tf.contrib.graph_editor.reroute_outputs(sgv0, sgv1)` {#reroute_outputs}
-
-Re-route all the outputs of sgv0 to sgv1 (see _reroute_outputs).
-
-
-- - -
-
-### `tf.contrib.graph_editor.reroute_ts(ts0, ts1, can_modify=None, cannot_modify=None)` {#reroute_ts}
-
-For each tensor pair, replace the end of t1 by the end of t0.
-
-```
-B0 B1     B0 B1
-|  |  =>  |/
-A0 A1     A0 A1
-```
-
-The ends of the tensors in ts1 are left dangling.
-
-##### Args:
-
-
-* <b>`ts0`</b>: an object convertible to a list of `tf.Tensor`.
-* <b>`ts1`</b>: an object convertible to a list of `tf.Tensor`.
-* <b>`can_modify`</b>: iterable of operations which can be modified. Any operation
-  outside of can_modify will be left untouched by this function.
-* <b>`cannot_modify`</b>: iterable of operations which cannot be modified. Any
- operation within cannot_modify will be left untouched by this function.
-
-##### Returns:
-
- The number of individual modifications made by the function.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ts0 or ts1 cannot be converted to a list of tf.Tensor.
-* <b>`TypeError`</b>: if can_modify or cannot_modify is not None and cannot be
- converted to a list of tf.Operation.
-
-
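-A hedged sketch: the consumers of t1 end up reading from t0 (the tensors and
-names are illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-t0 = tf.constant(0.0, name="t0")
-t1 = tf.constant(1.0, name="t1")
-out = tf.identity(t1, name="out")  # out consumes t1
-
-ge.reroute_ts([t0], [t1])
-# out now reads from t0; t1 is left dangling.
-```
-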
-- - -
-
-### `tf.contrib.graph_editor.select_ops(*args, **kwargs)` {#select_ops}
-
-Helper to select operations.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not) or 2) (array of)
- `tf.Operation`. `tf.Tensor` instances are silently ignored.
-* <b>`**kwargs`</b>: 'graph': `tf.Graph` in which to perform the regex query. This is
-  required when using regex.
-  'positive_filter': an elem is selected only if `positive_filter(elem)` is
-  `True`. This is optional.
-  'restrict_ops_regex': a regular expression is ignored if it doesn't start
-  with the substring "(?#ops)".
-
-##### Returns:
-
- A list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Operation`
- or an (array of) `tf.Tensor` (silently ignored) or a string
- or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected or if a regular
- expression is used without passing a graph as a keyword argument.
-
-
-- - -
-
-### `tf.contrib.graph_editor.select_ops_and_ts(*args, **kwargs)` {#select_ops_and_ts}
-
-Helper to select operations and tensors.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not) or 2) (array of)
- `tf.Operation` 3) (array of) tf.Tensor. Regular expressions matching
- tensors must start with the comment `"(?#ts)"`, for instance:
- `"(?#ts)^foo/.*"`.
-* <b>`**kwargs`</b>: 'graph': `tf.Graph` in which to perform the regex query. This is
-  required when using regex.
-  'positive_filter': an elem is selected only if `positive_filter(elem)` is
-  `True`. This is optional.
-
-##### Returns:
-
- A tuple `(ops, ts)` where:
- `ops` is a list of `tf.Operation`, and
- `ts` is a list of `tf.Tensor`
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Tensor`
- or an (array of) `tf.Operation` or a string or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected or if a regular
- expression is used without passing a graph as a keyword argument.
-
-
-- - -
-
-### `tf.contrib.graph_editor.select_ts(*args, **kwargs)` {#select_ts}
-
-Helper to select tensors.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not) or 2) (array of)
- `tf.Tensor`. `tf.Operation` instances are silently ignored.
-* <b>`**kwargs`</b>: 'graph': `tf.Graph` in which to perform the regex query. This is
-  required when using regex.
-  'positive_filter': an elem is selected only if `positive_filter(elem)` is
-  `True`. This is optional.
-  'restrict_ts_regex': a regular expression is ignored if it doesn't start
-  with the substring "(?#ts)".
-
-##### Returns:
-
- A list of `tf.Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Tensor`
- or an (array of) `tf.Operation` (silently ignored) or a string
- or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected or if a regular
- expression is used without passing a graph as a keyword argument.
-
-
-- - -
-
-### `tf.contrib.graph_editor.sgv(*args, **kwargs)` {#sgv}
-
-Create a SubGraphView from selected operations and passthrough tensors.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not) or 2) (array of)
-  `tf.Operation` 3) (array of) `tf.Tensor`. Those objects will be converted
-  into a list of operations and a list of candidates for passthrough tensors.
-* <b>`**kwargs`</b>: keyword graph is used 1) to check that the ops and ts are from
-  the correct graph 2) for regular expression queries.
-
-##### Returns:
-
- A subgraph view.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Tensor`
- or an (array of) `tf.Operation` or a string or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected.
-
-
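-A minimal sketch of three equivalent ways to build a view (the scope and op
-names are illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-graph = tf.Graph()
-with graph.as_default():
-  with tf.name_scope("foo"):
-    x = tf.constant(1.0, name="x")
-    y = tf.square(x, name="y")
-
-sgv1 = ge.sgv(y.op)                      # from an operation
-sgv2 = ge.sgv("^foo/.*$", graph=graph)   # from a regex (graph is required)
-sgv3 = ge.sgv_scope("foo", graph=graph)  # from a name scope
-```
-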
-- - -
-
-### `tf.contrib.graph_editor.sgv_scope(scope, graph)` {#sgv_scope}
-
-Make a subgraph from a name scope.
-
-##### Args:
-
-
-* <b>`scope`</b>: the name of the scope.
-* <b>`graph`</b>: the `tf.Graph`.
-
-##### Returns:
-
- A subgraph view representing the given scope.
-
-
-- - -
-
-### `tf.contrib.graph_editor.swap_inputs(sgv0, sgv1)` {#swap_inputs}
-
-Swap all the inputs of sgv0 and sgv1 (see reroute_inputs).
-
-
-- - -
-
-### `tf.contrib.graph_editor.swap_ios(sgv0, sgv1)` {#swap_ios}
-
-Swap the inputs and outputs of sgv0 and sgv1 (see _reroute).
-
-
-- - -
-
-### `tf.contrib.graph_editor.swap_outputs(sgv0, sgv1)` {#swap_outputs}
-
-Swap all the outputs of sgv0 and sgv1 (see _reroute_outputs).
-
-
-- - -
-
-### `tf.contrib.graph_editor.swap_ts(ts0, ts1, can_modify=None, cannot_modify=None)` {#swap_ts}
-
-For each pair of tensors (t0, t1), swap their ends:
-
-    B0  B1         B0  B1
-    |   |    =>      X
-    A0  A1         A0  A1
-
-##### Args:
-
-
-* <b>`ts0`</b>: an object convertible to a list of `tf.Tensor`.
-* <b>`ts1`</b>: an object convertible to a list of `tf.Tensor`.
-* <b>`can_modify`</b>: iterable of operations which can be modified. Any operation
-  outside can_modify will be left untouched by this function.
-* <b>`cannot_modify`</b>: iterable of operations which cannot be modified.
- Any operation within cannot_modify will be left untouched by this
- function.
-
-##### Returns:
-
- The number of individual modifications made by the function.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ts0 or ts1 cannot be converted to a list of tf.Tensor.
-* <b>`TypeError`</b>: if can_modify or cannot_modify is not None and cannot be
- converted to a list of tf.Operation.
-
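-As a hedged sketch (TensorFlow 1.x; the op names are hypothetical), swapping
-two tensors rewires their respective consumers:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-with tf.Graph().as_default():
-  a = tf.constant(1.0, name="a")
-  b = tf.constant(2.0, name="b")
-  c = tf.identity(a, name="c")  # consumes a
-  d = tf.identity(b, name="d")  # consumes b
-  ge.swap_ts([a], [b])          # afterwards c consumes b and d consumes a
-```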
-
-- - -
-
-### `tf.contrib.graph_editor.transform_op_if_inside_handler(info, op, keep_if_possible=True)` {#transform_op_if_inside_handler}
-
-Transform an optional op only if it is inside the subgraph.
-
-This handler is typically used to handle original ops: it is fine to keep them
-if they are inside the subgraph, otherwise they are just ignored.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`op`</b>: the optional op to transform (or ignore).
-* <b>`keep_if_possible`</b>: re-attach to the original op if possible, that is,
- if the source graph and the destination graph are the same.
-
-##### Returns:
-
- The transformed op or None.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.integrate.md b/tensorflow/g3doc/api_docs/python/contrib.integrate.md
deleted file mode 100644
index 5a05662c1f..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.integrate.md
+++ /dev/null
@@ -1,100 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Integrate (contrib)
-[TOC]
-
-Integration and ODE solvers. See the @{$python/contrib.integrate} guide.
-
-- - -
-
-### `tf.contrib.integrate.odeint(func, y0, t, rtol=1e-06, atol=1e-12, method=None, options=None, full_output=False, name=None)` {#odeint}
-
-Integrate a system of ordinary differential equations.
-
-Solves the initial value problem for a non-stiff system of first-order ODEs:
-
- ```
- dy/dt = func(y, t), y(t[0]) = y0
- ```
-
-where y is a Tensor of any shape.
-
-For example:
-
- ```
- # solve `dy/dt = -y`, corresponding to exponential decay
- tf.contrib.integrate.odeint(lambda y, _: -y, 1.0, [0, 1, 2])
- => [1, exp(-1), exp(-2)]
- ```
-
-Output dtypes and numerical precision are based on the dtypes of the inputs
-`y0` and `t`.
-
-Currently implements 5th order Runge-Kutta with adaptive step size control
-and dense output, using the Dormand-Prince method. Similar to the 'dopri5'
-method of `scipy.integrate.ode` and MATLAB's `ode45`.
-
-Based on: Shampine, Lawrence F. (1986), "Some Practical Runge-Kutta Formulas",
-Mathematics of Computation, American Mathematical Society, 46 (173): 135-150,
-doi:10.2307/2008219
-
-##### Args:
-
-
-* <b>`func`</b>: Function that maps a Tensor holding the state `y` and a scalar Tensor
- `t` into a Tensor of state derivatives with respect to time.
-* <b>`y0`</b>: N-D Tensor giving starting value of `y` at time point `t[0]`. May
- have any floating point or complex dtype.
-* <b>`t`</b>: 1-D Tensor holding a sequence of time points for which to solve for
- `y`. The initial time point should be the first element of this sequence,
- and each time must be larger than the previous time. May have any floating
- point dtype. If not provided as a Tensor, converted to a Tensor with
- float64 dtype.
-* <b>`rtol`</b>: optional float64 Tensor specifying an upper bound on relative error,
- per element of `y`.
-* <b>`atol`</b>: optional float64 Tensor specifying an upper bound on absolute error,
- per element of `y`.
-* <b>`method`</b>: optional string indicating the integration method to use. Currently,
- the only valid option is `'dopri5'`.
-* <b>`options`</b>: optional dict of configuring options for the indicated integration
- method. Can only be provided if a `method` is explicitly set. For
- `'dopri5'`, valid options include:
-    * first_step: an initial guess for the size of the first integration step
- (current default: 1.0, but may later be changed to use heuristics based
- on the gradient).
- * safety: safety factor for adaptive step control, generally a constant
- in the range 0.8-1 (default: 0.9).
- * ifactor: maximum factor by which the adaptive step may be increased
- (default: 10.0).
-    * dfactor: maximum factor by which the adaptive step may be decreased
- (default: 0.2).
- * max_num_steps: integer maximum number of integrate steps between time
- points in `t` (default: 1000).
-* <b>`full_output`</b>: optional boolean. If True, `odeint` returns a tuple
- `(y, info_dict)` describing the integration process.
-* <b>`name`</b>: Optional name for this operation.
-
-##### Returns:
-
-
-* <b>`y`</b>: (N+1)-D tensor, where the first dimension corresponds to different
- time points. Contains the solved value of y for each desired time point in
- `t`, with the initial value `y0` being the first element along the first
- dimension.
-* <b>`info_dict`</b>: only if `full_output == True`. A dict with the following values:
- * num_func_evals: integer Tensor counting the number of function
- evaluations.
- * integrate_points: 1D float64 Tensor with the upper bound of each
- integration time step.
- * error_ratio: 1D float Tensor with the estimated ratio of the integration
-      error to the error tolerance at each integration step. A ratio greater
- than 1 corresponds to rejected steps.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an invalid `method` is provided.
-* <b>`TypeError`</b>: if `options` is supplied without `method`, or if `t` or `y0` has
- an invalid dtype.
-
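-As a usage sketch (TensorFlow 1.x; the tolerances and options below are
-arbitrary illustrations, not recommendations):
-
-```python
-import tensorflow as tf
-
-# Solve dy/dt = -y, requesting solver diagnostics via full_output.
-y, info = tf.contrib.integrate.odeint(
-    lambda y, t: -y, 1.0, [0.0, 1.0, 2.0],
-    rtol=1e-6, atol=1e-12,
-    method='dopri5', options={'max_num_steps': 1000},
-    full_output=True)
-
-with tf.Session() as sess:
-  print(sess.run(y))  # approximately [1., exp(-1), exp(-2)]
-```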
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.layers.md b/tensorflow/g3doc/api_docs/python/contrib.layers.md
deleted file mode 100644
index 910cab1cc7..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.layers.md
+++ /dev/null
@@ -1,2340 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Layers (contrib)
-[TOC]
-
-Ops for building neural network layers, regularizers, summaries, etc.
-
-See the @{$python/contrib.layers} guide.
-
-- - -
-
-### `tf.contrib.layers.avg_pool2d(*args, **kwargs)` {#avg_pool2d}
-
-Adds a 2D average pooling op.
-
-It is assumed that the pooling is done per image, not across the batch or
-channel dimensions.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D tensor of shape `[batch_size, height, width, channels]` if
- `data_format` is `NHWC`, and `[batch_size, channels, height, width]` if
- `data_format` is `NCHW`.
-* <b>`kernel_size`</b>: A list of length 2: [kernel_height, kernel_width] of the
- pooling kernel over which the op is computed. Can be an int if both
- values are the same.
-* <b>`stride`</b>: A list of length 2: [stride_height, stride_width].
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: The padding method, either 'VALID' or 'SAME'.
-* <b>`data_format`</b>: A string. `NHWC` (default) and `NCHW` are supported.
-* <b>`outputs_collections`</b>: The collections to which the outputs are added.
-* <b>`scope`</b>: Optional scope for name_scope.
-
-##### Returns:
-
- A `Tensor` representing the results of the pooling operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `data_format` is neither `NHWC` nor `NCHW`.
-
-
-- - -
-
-### `tf.contrib.layers.batch_norm(*args, **kwargs)` {#batch_norm}
-
-Adds a Batch Normalization layer from http://arxiv.org/abs/1502.03167.
-
- "Batch Normalization: Accelerating Deep Network Training by Reducing
- Internal Covariate Shift"
-
- Sergey Ioffe, Christian Szegedy
-
-Can be used as a normalizer function for conv2d and fully_connected.
-
-Note: when `is_training` is True, the `moving_mean` and `moving_variance` need
-to be updated. By default the update ops are placed in
-`tf.GraphKeys.UPDATE_OPS`, so they need to be added as a dependency to the
-`train_op`. For example:
-
-  from tensorflow.python.ops import control_flow_ops
-
-  update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
-  if update_ops:
-    updates = tf.group(*update_ops)
-    total_loss = control_flow_ops.with_dependencies([updates], total_loss)
-
-One can set `updates_collections=None` to force the updates in place, but that
-can have a speed penalty, especially in distributed settings.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor with 2 or more dimensions, where the first dimension has
- `batch_size`. The normalization is over all but the last dimension if
- `data_format` is `NHWC` and the second dimension if `data_format` is
- `NCHW`.
-* <b>`decay`</b>: Decay for the moving average. Reasonable values for `decay` are close
- to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc.
-  Lower the `decay` value (try `decay=0.9`) if the model experiences
- reasonably good training performance but poor validation and/or test
- performance. Try zero_debias_moving_mean=True for improved stability.
-* <b>`center`</b>: If True, add offset of `beta` to normalized tensor. If False, `beta`
- is ignored.
-* <b>`scale`</b>: If True, multiply by `gamma`. If False, `gamma` is
- not used. When the next layer is linear (also e.g. `nn.relu`), this can be
- disabled since the scaling can be done by the next layer.
-* <b>`epsilon`</b>: Small float added to variance to avoid dividing by zero.
-* <b>`activation_fn`</b>: Activation function, default set to None to skip it and
- maintain a linear activation.
-* <b>`param_initializers`</b>: Optional initializers for beta, gamma, moving mean and
- moving variance.
-* <b>`updates_collections`</b>: Collections to collect the update ops for computation.
-  The update ops need to be executed with the `train_op`.
- If None, a control dependency would be added to make sure the updates are
- computed in place.
-* <b>`is_training`</b>: Whether or not the layer is in training mode. In training mode
- it would accumulate the statistics of the moments into `moving_mean` and
- `moving_variance` using an exponential moving average with the given
- `decay`. When it is not in training mode then it would use the values of
- the `moving_mean` and the `moving_variance`.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional collections for the variables.
-* <b>`outputs_collections`</b>: Collections to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`batch_weights`</b>: An optional tensor of shape `[batch_size]`,
- containing a frequency weight for each batch item. If present,
- then the batch normalization uses weighted mean and
- variance. (This can be used to correct for bias in training
- example selection.)
-* <b>`fused`</b>: Use nn.fused_batch_norm if True, nn.batch_normalization otherwise.
-* <b>`data_format`</b>: A string. `NHWC` (default) and `NCHW` are supported.
-* <b>`zero_debias_moving_mean`</b>: Use zero_debias for moving_mean. It creates a new
- pair of variables 'moving_mean/biased' and 'moving_mean/local_step'.
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `batch_weights` is not None and `fused` is True.
-* <b>`ValueError`</b>: If `data_format` is neither `NHWC` nor `NCHW`.
-* <b>`ValueError`</b>: If the rank of `inputs` is undefined.
-* <b>`ValueError`</b>: If rank or channels dimension of `inputs` is undefined.
-
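-A minimal sketch of the training-time pattern described above (TensorFlow
-1.x; the layer sizes and loss are placeholders):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [None, 28, 28, 3])
-net = tf.contrib.layers.conv2d(
-    x, 16, [3, 3],
-    normalizer_fn=tf.contrib.layers.batch_norm,
-    normalizer_params={'is_training': True})
-loss = tf.reduce_mean(net)  # stand-in loss
-
-update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
-with tf.control_dependencies(update_ops):
-  train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
-```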
-
-- - -
-
-### `tf.contrib.layers.convolution2d(*args, **kwargs)` {#convolution2d}
-
-Adds an N-D convolution followed by an optional batch_norm layer.
-
-It is required that 1 <= N <= 3.
-
-`convolution` creates a variable called `weights`, representing the
-convolutional kernel, that is convolved (actually cross-correlated) with the
-`inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is
-provided (such as `batch_norm`), it is then applied. Otherwise, if
-`normalizer_fn` is None and a `biases_initializer` is provided then a `biases`
-variable would be created and added to the activations. Finally, if
-`activation_fn` is not `None`, it is applied to the activations as well.
-
-Performs atrous convolution with input stride/dilation rate equal to `rate`
-if a value > 1 for any dimension of `rate` is specified. In this case
-`stride` values != 1 are not supported.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A Tensor of rank N+2 of shape
- `[batch_size] + input_spatial_shape + [in_channels]` if data_format does
- not start with "NC" (default), or
- `[batch_size, in_channels] + input_spatial_shape` if data_format starts
- with "NC".
-* <b>`num_outputs`</b>: Integer, the number of output filters.
-* <b>`kernel_size`</b>: A sequence of N positive integers specifying the spatial
-  dimensions of the filters. Can be a single integer to specify the same
- value for all spatial dimensions.
-* <b>`stride`</b>: A sequence of N positive integers specifying the stride at which to
- compute output. Can be a single integer to specify the same value for all
- spatial dimensions. Specifying any `stride` value != 1 is incompatible
- with specifying any `rate` value != 1.
-* <b>`padding`</b>: One of `"VALID"` or `"SAME"`.
-* <b>`data_format`</b>: A string or None. Specifies whether the channel dimension of
- the `input` and output is the last dimension (default, or if `data_format`
- does not start with "NC"), or the second dimension (if `data_format`
- starts with "NC"). For N=1, the valid values are "NWC" (default) and
- "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For
- N=3, currently the only valid value is "NDHWC".
-* <b>`rate`</b>: A sequence of N positive integers specifying the dilation rate to use
-  for atrous convolution. Can be a single integer to specify the same
- value for all spatial dimensions. Specifying any `rate` value != 1 is
- incompatible with specifying any `stride` value != 1.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None for no normalizer function.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collection per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A tensor representing the output of the operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `data_format` is invalid.
-* <b>`ValueError`</b>: Both 'rate' and `stride` are not uniformly 1.
-
-
-- - -
-
-### `tf.contrib.layers.conv2d_in_plane(*args, **kwargs)` {#conv2d_in_plane}
-
-Performs the same in-plane convolution to each channel independently.
-
-This is useful for performing various simple channel-independent convolution
-operations such as image gradients:
-
- image = tf.constant(..., shape=(16, 240, 320, 3))
- vert_gradients = layers.conv2d_in_plane(image,
- kernel=[1, -1],
- kernel_size=[2, 1])
- horz_gradients = layers.conv2d_in_plane(image,
- kernel=[1, -1],
- kernel_size=[1, 2])
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D tensor with dimensions [batch_size, height, width, channels].
-* <b>`kernel_size`</b>: A list of length 2 holding the [kernel_height, kernel_width] of
-  the pooling. Can be an int if both values are the same.
-* <b>`stride`</b>: A list of length 2 `[stride_height, stride_width]`.
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: The padding type to use, either 'SAME' or 'VALID'.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None for no normalizer function.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collection per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
-
-- - -
-
-### `tf.contrib.layers.convolution2d_in_plane(*args, **kwargs)` {#convolution2d_in_plane}
-
-Performs the same in-plane convolution to each channel independently.
-
-This is useful for performing various simple channel-independent convolution
-operations such as image gradients:
-
- image = tf.constant(..., shape=(16, 240, 320, 3))
- vert_gradients = layers.conv2d_in_plane(image,
- kernel=[1, -1],
- kernel_size=[2, 1])
- horz_gradients = layers.conv2d_in_plane(image,
- kernel=[1, -1],
- kernel_size=[1, 2])
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D tensor with dimensions [batch_size, height, width, channels].
-* <b>`kernel_size`</b>: A list of length 2 holding the [kernel_height, kernel_width] of
-  the pooling. Can be an int if both values are the same.
-* <b>`stride`</b>: A list of length 2 `[stride_height, stride_width]`.
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: The padding type to use, either 'SAME' or 'VALID'.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None for no normalizer function.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collection per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
-
-- - -
-
-### `tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', data_format='NHWC', name=None)` {#conv2d_transpose}
-
-The transpose of `conv2d`.
-
-This operation is sometimes called "deconvolution" after [Deconvolutional
-Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is
-actually the transpose (gradient) of `conv2d` rather than an actual
-deconvolution.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of type `float` and shape
- `[batch, height, width, in_channels]` for `NHWC` data format or
- `[batch, in_channels, height, width]` for `NCHW` data format.
-* <b>`filter`</b>: A 4-D `Tensor` with the same type as `value` and shape
- `[height, width, output_channels, in_channels]`. `filter`'s
- `in_channels` dimension must match that of `value`.
-* <b>`output_shape`</b>: A 1-D `Tensor` representing the output shape of the
- deconvolution op.
-* <b>`strides`</b>: A list of ints. The stride of the sliding window for each
- dimension of the input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filter`'s shape, or if
- padding is other than `'VALID'` or `'SAME'`.
-
-
-- - -
-
-### `tf.contrib.layers.convolution2d_transpose(*args, **kwargs)` {#convolution2d_transpose}
-
-Adds a convolution2d_transpose with an optional batch normalization layer.
-
-The function creates a variable called `weights`, representing the
-kernel, that is convolved with the input. If `batch_norm_params` is `None`, a
-second variable called 'biases' is added to the result of the operation.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D `Tensor` of type `float` and shape
- `[batch, height, width, in_channels]` for `NHWC` data format or
- `[batch, in_channels, height, width]` for `NCHW` data format.
-* <b>`num_outputs`</b>: Integer, the number of output filters.
-* <b>`kernel_size`</b>: A list of length 2 holding the [kernel_height, kernel_width] of
-  the filters. Can be an int if both values are the same.
-* <b>`stride`</b>: A list of length 2: [stride_height, stride_width].
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: One of 'VALID' or 'SAME'.
-* <b>`data_format`</b>: A string. `NHWC` (default) and `NCHW` are supported.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None for no normalizer function.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collection per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: Whether or not the variables should be trainable.
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A tensor representing the output of the operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If 'kernel_size' is not a list of length 2.
-* <b>`ValueError`</b>: If `data_format` is neither `NHWC` nor `NCHW`.
-* <b>`ValueError`</b>: If `C` dimension of `inputs` is None.
-
-
-- - -
-
-### `tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)` {#dropout}
-
-Computes dropout.
-
-With probability `keep_prob`, outputs the input element scaled up by
-`1 / keep_prob`, otherwise outputs `0`. The scaling is so that the expected
-sum is unchanged.
-
-By default, each element is kept or dropped independently. If `noise_shape`
-is specified, it must be
-[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]`
-will make independent decisions. For example, if `shape(x) = [k, l, m, n]`
-and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be
-kept independently and each row and column will be kept or not kept together.
-
-##### Args:
-
-
-* <b>`x`</b>: A tensor.
-* <b>`keep_prob`</b>: A scalar `Tensor` with the same type as x. The probability
- that each element is kept.
-* <b>`noise_shape`</b>: A 1-D `Tensor` of type `int32`, representing the
- shape for randomly generated keep/drop flags.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A Tensor of the same shape of `x`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `keep_prob` is not in `(0, 1]`.
-
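-A short sketch of the `noise_shape` behavior described above (TensorFlow 1.x;
-the shapes are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.ones([4, 10, 10, 8])
-# One keep/drop decision per (batch item, channel); rows and columns of
-# each feature map are kept or dropped together.
-y = tf.nn.dropout(x, keep_prob=0.5, noise_shape=[4, 1, 1, 8])
-```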
-
-- - -
-
-### `tf.contrib.layers.flatten(*args, **kwargs)` {#flatten}
-
-Flattens the input while maintaining the batch_size.
-
- Assumes that the first dimension represents the batch.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor of size [batch_size, ...].
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`scope`</b>: Optional scope for name_scope.
-
-##### Returns:
-
- A flattened tensor with shape [batch_size, k].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If inputs rank is unknown or less than 2.
-
-
-- - -
-
-### `tf.contrib.layers.fully_connected(*args, **kwargs)` {#fully_connected}
-
-Adds a fully connected layer.
-
-`fully_connected` creates a variable called `weights`, representing a fully
-connected weight matrix, which is multiplied by the `inputs` to produce a
-`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
-`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
-None and a `biases_initializer` is provided then a `biases` variable would be
-created and added to the hidden units. Finally, if `activation_fn` is not `None`,
-it is applied to the hidden units as well.
-
-Note that if `inputs` has a rank greater than 2, then `inputs` is flattened
-prior to the initial matrix multiply by `weights`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor of at least rank 2 and static value for the last dimension;
- i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
-* <b>`num_outputs`</b>: Integer or long, the number of output units in the layer.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None for no normalizer function.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collections per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- The tensor variable representing the result of the series of operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `inputs` has rank less than 2 or if its last dimension
-  is not set.
-
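-An illustrative sketch (TensorFlow 1.x; the sizes are arbitrary):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [None, 784])
-h = tf.contrib.layers.fully_connected(x, 256)  # ReLU by default
-logits = tf.contrib.layers.fully_connected(h, 10, activation_fn=None)
-```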
-
-- - -
-
-### `tf.contrib.layers.layer_norm(*args, **kwargs)` {#layer_norm}
-
-Adds a Layer Normalization layer from https://arxiv.org/abs/1607.06450.
-
- "Layer Normalization"
-
- Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton
-
-Can be used as a normalizer function for conv2d and fully_connected.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor with 2 or more dimensions. The normalization
- occurs over all but the first dimension.
-* <b>`center`</b>: If True, add offset of `beta` to normalized tensor. If False, `beta`
- is ignored.
-* <b>`scale`</b>: If True, multiply by `gamma`. If False, `gamma` is
- not used. When the next layer is linear (also e.g. `nn.relu`), this can be
- disabled since the scaling can be done by the next layer.
-* <b>`activation_fn`</b>: Activation function, default set to None to skip it and
- maintain a linear activation.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional collections for the variables.
-* <b>`outputs_collections`</b>: Collections to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If rank or last dimension of `inputs` is undefined.
-
-
-- - -
-
-### `tf.contrib.layers.linear()` {#linear}
-
-A `functools.partial` of `fully_connected` with `activation_fn=None`, i.e. a
-fully connected layer with a linear (identity) activation.
-
-
-- - -
-
-### `tf.contrib.layers.max_pool2d(*args, **kwargs)` {#max_pool2d}
-
-Adds a 2D Max Pooling op.
-
-It is assumed that the pooling is done per image, not across the batch or
-channel dimensions.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D tensor of shape `[batch_size, height, width, channels]` if
- `data_format` is `NHWC`, and `[batch_size, channels, height, width]` if
- `data_format` is `NCHW`.
-* <b>`kernel_size`</b>: A list of length 2: [kernel_height, kernel_width] of the
- pooling kernel over which the op is computed. Can be an int if both
- values are the same.
-* <b>`stride`</b>: A list of length 2: [stride_height, stride_width].
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: The padding method, either 'VALID' or 'SAME'.
-* <b>`data_format`</b>: A string. `NHWC` (default) and `NCHW` are supported.
-* <b>`outputs_collections`</b>: The collections to which the outputs are added.
-* <b>`scope`</b>: Optional scope for name_scope.
-
-##### Returns:
-
- A `Tensor` representing the results of the pooling operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `data_format` is neither `NHWC` nor `NCHW`.
-* <b>`ValueError`</b>: If `kernel_size` is not a list of length 2.
-
-
-- - -
-
-### `tf.contrib.layers.one_hot_encoding(*args, **kwargs)` {#one_hot_encoding}
-
-Transform numeric labels into onehot_labels using `tf.one_hot`.
-
-##### Args:
-
-
-* <b>`labels`</b>: [batch_size] target labels.
-* <b>`num_classes`</b>: Total number of classes.
-* <b>`on_value`</b>: A scalar defining the on-value.
-* <b>`off_value`</b>: A scalar defining the off-value.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`scope`</b>: Optional scope for name_scope.
-
-##### Returns:
-
- One-hot encoding of the labels.
-
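-For example (a small sketch, TensorFlow 1.x):
-
-```python
-import tensorflow as tf
-
-labels = tf.constant([0, 2])
-onehot = tf.contrib.layers.one_hot_encoding(labels, num_classes=3)
-# => [[1., 0., 0.], [0., 0., 1.]]
-```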
-
-- - -
-
-### `tf.nn.relu(features, name=None)` {#relu}
-
-Computes rectified linear: `max(features, 0)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
-
-- - -
-
-### `tf.nn.relu6(features, name=None)` {#relu6}
-
-Computes Rectified Linear 6: `min(max(features, 0), 6)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,
- `int16`, or `int8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `features`.
-
-
-- - -
-
-### `tf.contrib.layers.repeat(inputs, repetitions, layer, *args, **kwargs)` {#repeat}
-
-Applies the same layer with the same arguments repeatedly.
-
-```python
- y = repeat(x, 3, conv2d, 64, [3, 3], scope='conv1')
- # It is equivalent to:
-
- x = conv2d(x, 64, [3, 3], scope='conv1/conv1_1')
- x = conv2d(x, 64, [3, 3], scope='conv1/conv1_2')
- y = conv2d(x, 64, [3, 3], scope='conv1/conv1_3')
-```
-
-If the `scope` argument is not given in `kwargs`, it is set to
-`layer.__name__`, or `layer.func.__name__` (for `functools.partial`
-objects). If neither `__name__` nor `func.__name__` is available, the
-layers are called with `scope='stack'`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` suitable for layer.
-* <b>`repetitions`</b>: Int, number of repetitions.
-* <b>`layer`</b>: A layer with arguments `(inputs, *args, **kwargs)`
-* <b>`*args`</b>: Extra args for the layer.
-* <b>`**kwargs`</b>: Extra kwargs for the layer.
-
-##### Returns:
-
- A tensor result of applying the layer, repetitions times.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the op is unknown or wrong.
-
-
-- - -
-
-### `tf.contrib.layers.safe_embedding_lookup_sparse(embedding_weights, sparse_ids, sparse_weights=None, combiner=None, default_id=None, name=None, partition_strategy='div', max_norm=None)` {#safe_embedding_lookup_sparse}
-
-Lookup embedding results, accounting for invalid IDs and empty features.
-
-The partitioned embedding tensors in `embedding_weights` must all have the
-same shape except for the first dimension. The first dimension may vary as the
-vocabulary size is not necessarily a multiple of `P`. `embedding_weights`
-may be a `PartitionedVariable` as returned by using `tf.get_variable()` with a
-partitioner.
-
-Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs
-with non-positive weight. For an entry with no features, the embedding vector
-for `default_id` is returned, or the 0-vector if `default_id` is not supplied.
-
-The ids and weights may be multi-dimensional. Embeddings are always aggregated
-along the last dimension.
-
-##### Args:
-
-
-* <b>`embedding_weights`</b>: A list of `P` float tensors or values representing
- partitioned embedding tensors. Alternatively, a `PartitionedVariable`,
- created by partitioning along dimension 0. The total unpartitioned
- shape should be `[e_0, e_1, ..., e_m]`, where `e_0` represents the
- vocab size and `e_1, ..., e_m` are the embedding dimensions.
-* <b>`sparse_ids`</b>: `SparseTensor` of shape `[d_0, d_1, ..., d_n]` containing the
- ids. `d_0` is typically batch size.
-* <b>`sparse_weights`</b>: `SparseTensor` of same shape as `sparse_ids`, containing
- float weights corresponding to `sparse_ids`, or `None` if all weights
-  are assumed to be 1.0.
-* <b>`combiner`</b>: A string specifying how to combine embedding results for each
- entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean"
- the default.
-* <b>`default_id`</b>: The id to use for an entry with no features.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy.
- Currently `"div"` and `"mod"` are supported. Default is `"div"`.
-* <b>`max_norm`</b>: If not None, all embeddings are l2-normalized to max_norm before
- combining.
-
-
-##### Returns:
-
- Dense tensor of shape `[d_0, d_1, ..., d_{n-1}, e_1, ..., e_m]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `embedding_weights` is empty.
-
-
-- - -
-
-### `tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, rate=None, name=None)` {#separable_conv2d}
-
-2-D convolution with separable filters.
-
-Performs a depthwise convolution that acts separately on channels followed by
-a pointwise convolution that mixes channels. Note that this is separability
-between dimensions `[1, 2]` and `3`, not spatial separability between
-dimensions `1` and `2`.
-
-In detail,
-
-    output[b, i, j, k] = sum_{di, dj, q, r}
- input[b, strides[1] * i + di, strides[2] * j + dj, q] *
- depthwise_filter[di, dj, q, r] *
- pointwise_filter[0, 0, q * channel_multiplier + r, k]
-
-`strides` controls the strides for the depthwise convolution only, since
-the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have
-`strides[0] = strides[3] = 1`. For the most common case of the same
-horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-If any value in `rate` is greater than 1, we perform atrous depthwise
-convolution, in which case all values in the `strides` tensor must be equal
-to 1.
-
-##### Args:
-
-
-* <b>`input`</b>: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`depthwise_filter`</b>: 4-D `Tensor` with shape
- `[filter_height, filter_width, in_channels, channel_multiplier]`.
- Contains `in_channels` convolutional filters of depth 1.
-* <b>`pointwise_filter`</b>: 4-D `Tensor` with shape
- `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise
- filter to mix channels after `depthwise_filter` has convolved spatially.
-* <b>`strides`</b>: 1-D of size 4. The strides for the depthwise convolution for
- each dimension of `input`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment
- here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`rate`</b>: 1-D of size 2. The dilation rate in which we sample input values
- across the `height` and `width` dimensions in atrous convolution. If it is
- greater than 1, then all values of strides must be 1.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If channel_multiplier * in_channels > out_channels,
- which means that the separable convolution is overparameterized.
-
-
-- - -
-
-### `tf.contrib.layers.separable_convolution2d(*args, **kwargs)` {#separable_convolution2d}
-
-Adds a depth-separable 2D convolution with optional batch_norm layer.
-
-This op first performs a depthwise convolution that acts separately on
-channels, creating a variable called `depthwise_weights`. If `num_outputs`
-is not None, it adds a pointwise convolution that mixes channels, creating a
-variable called `pointwise_weights`. Then, if `batch_norm_params` is None,
-it adds bias to the result, creating a variable called 'biases', otherwise
-it adds a batch normalization layer. It finally applies an activation function
-to produce the end result.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor of size [batch_size, height, width, channels].
-* <b>`num_outputs`</b>: The number of pointwise convolution output filters. If it
-  is None, then we skip the pointwise convolution stage.
-* <b>`kernel_size`</b>: A list of length 2: [kernel_height, kernel_width] of
-  the filters. Can be an int if both values are the same.
-* <b>`depth_multiplier`</b>: The number of depthwise convolution output channels for
- each input channel. The total number of depthwise convolution output
- channels will be equal to `num_filters_in * depth_multiplier`.
-* <b>`stride`</b>: A list of length 2: [stride_height, stride_width], specifying the
- depthwise convolution stride. Can be an int if both strides are the same.
-* <b>`padding`</b>: One of 'VALID' or 'SAME'.
-* <b>`rate`</b>: A list of length 2: [rate_height, rate_width], specifying the dilation
-  rates for atrous convolution. Can be an int if both rates are the same.
- If any value is larger than one, then both stride values need to be one.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None for no normalizer function.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
-  a dictionary containing a different list of collections per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: Whether or not the variables should be trainable.
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
-
-- - -
-
-### `tf.nn.softmax(logits, dim=-1, name=None)` {#softmax}
-
-Computes softmax activations.
-
-For each batch `i` and class `j` we have
-
- softmax = exp(logits) / reduce_sum(exp(logits), dim)
-
-##### Args:
-
-
-* <b>`logits`</b>: A non-empty `Tensor`. Must be one of the following types: `half`,
- `float32`, `float64`.
-* <b>`dim`</b>: The dimension softmax would be performed on. The default is -1 which
- indicates the last dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: if `logits` is empty or `dim` is beyond the last
- dimension of `logits`.
-
-
-- - -
-
-### `tf.stack(values, axis=0, name='stack')` {#stack}
-
-Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor.
-
-Packs the list of tensors in `values` into a tensor with rank one higher than
-each tensor in `values`, by packing them along the `axis` dimension.
-Given a list of length `N` of tensors of shape `(A, B, C)`;
-
-if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`.
-if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`.
-Etc.
-
-For example:
-
-```prettyprint
-# 'x' is [1, 4]
-# 'y' is [2, 5]
-# 'z' is [3, 6]
-stack([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim.
-stack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
-```
-
-This is the opposite of unstack. The numpy equivalent is
-
- tf.stack([x, y, z]) = np.asarray([x, y, z])
-
-##### Args:
-
-
-* <b>`values`</b>: A list of `Tensor` objects with the same shape and type.
-* <b>`axis`</b>: An `int`. The axis to stack along. Defaults to the first dimension.
- Supports negative indexes.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`output`</b>: A stacked `Tensor` with the same type as `values`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `axis` is out of the range [-(R+1), R+1).
-
-
-- - -
-
-### `tf.contrib.layers.unit_norm(*args, **kwargs)` {#unit_norm}
-
-Normalizes the given input across the specified dimension to unit length.
-
-Note that the rank of `input` must be known.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of arbitrary size.
-* <b>`dim`</b>: The dimension along which the input is normalized.
-* <b>`epsilon`</b>: A small value to add to the inputs to avoid dividing by zero.
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- The normalized `Tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `dim` is not smaller than the number of dimensions in
-  `inputs`.
-
-
-- - -
-
-### `tf.contrib.layers.embed_sequence(ids, vocab_size=None, embed_dim=None, unique=False, initializer=None, regularizer=None, trainable=True, scope=None, reuse=None)` {#embed_sequence}
-
-Maps a sequence of symbols to a sequence of embeddings.
-
-Typical use case would be reusing embeddings between an encoder and decoder.
-
-##### Args:
-
-
-* <b>`ids`</b>: `[batch_size, doc_length]` `Tensor` of type `int32` or `int64`
- with symbol ids.
-* <b>`vocab_size`</b>: Integer number of symbols in vocabulary.
-* <b>`embed_dim`</b>: Integer number of dimensions for embedding matrix.
-* <b>`unique`</b>: If `True`, will first compute the unique set of indices, and then
- lookup each embedding once, repeating them in the output as needed.
-* <b>`initializer`</b>: An initializer for the embeddings, if `None` default for
- current scope is used.
-* <b>`regularizer`</b>: Optional regularizer for the embeddings.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`scope`</b>: Optional string specifying the variable scope for the op, required
- if `reuse=True`.
-* <b>`reuse`</b>: If `True`, variables inside the op will be reused.
-
-##### Returns:
-
- `Tensor` of `[batch_size, doc_length, embed_dim]` with embedded sequences.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `embed_dim` or `vocab_size` are not specified when
-  `reuse` is `None` or `False`.
-
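-A usage sketch (TensorFlow 1.x; the vocabulary and sizes are hypothetical):
-
-```python
-import tensorflow as tf
-
-ids = tf.constant([[1, 3, 2], [0, 1, 1]])  # [batch_size=2, doc_length=3]
-embedded = tf.contrib.layers.embed_sequence(
-    ids, vocab_size=10, embed_dim=4, scope='embed')
-# embedded has shape [2, 3, 4]
-```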
-
-
-- - -
-
-### `tf.contrib.layers.apply_regularization(regularizer, weights_list=None)` {#apply_regularization}
-
-Returns the summed penalty by applying `regularizer` to the `weights_list`.
-
-Adding a regularization penalty over the layer weights and embedding weights
-can help prevent overfitting the training data. Regularization over layer
-biases is less common/useful, but assuming proper data preprocessing/mean
-subtraction, it usually shouldn't hurt much either.
-
-##### Args:
-
-
-* <b>`regularizer`</b>: A function that takes a single `Tensor` argument and returns
- a scalar `Tensor` output.
-* <b>`weights_list`</b>: List of weights `Tensors` or `Variables` to apply
- `regularizer` over. Defaults to the `GraphKeys.WEIGHTS` collection if
- `None`.
-
-##### Returns:
-
- A scalar representing the overall regularization penalty.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `regularizer` does not return a scalar output, or if we find
- no weights.
-
-
-- - -
-
-### `tf.contrib.layers.l1_regularizer(scale, scope=None)` {#l1_regularizer}
-
-Returns a function that can be used to apply L1 regularization to weights.
-
-L1 regularization encourages sparsity.
-
-##### Args:
-
-
-* <b>`scale`</b>: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
-* <b>`scope`</b>: An optional scope name.
-
-##### Returns:
-
-  A function with signature `l1(weights)` that applies L1 regularization.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If scale is negative or if scale is not a float.
-
-
-- - -
-
-### `tf.contrib.layers.l2_regularizer(scale, scope=None)` {#l2_regularizer}
-
-Returns a function that can be used to apply L2 regularization to weights.
-
-Small values of L2 can help prevent overfitting the training data.
-
-##### Args:
-
-
-* <b>`scale`</b>: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
-* <b>`scope`</b>: An optional scope name.
-
-##### Returns:
-
- A function with signature `l2(weights)` that applies L2 regularization.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If scale is negative or if scale is not a float.
-
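-Combining this with `apply_regularization` (documented above) in a hedged
-sketch (TensorFlow 1.x; the variable and scale are arbitrary):
-
-```python
-import tensorflow as tf
-
-w = tf.get_variable(
-    'w', [10, 10],
-    collections=[tf.GraphKeys.GLOBAL_VARIABLES, tf.GraphKeys.WEIGHTS])
-l2 = tf.contrib.layers.l2_regularizer(scale=1e-4)
-penalty = tf.contrib.layers.apply_regularization(l2)  # uses GraphKeys.WEIGHTS
-```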
-
-- - -
-
-### `tf.contrib.layers.sum_regularizer(regularizer_list, scope=None)` {#sum_regularizer}
-
-Returns a function that applies the sum of multiple regularizers.
-
-##### Args:
-
-
-* <b>`regularizer_list`</b>: A list of regularizers to apply.
-* <b>`scope`</b>: An optional scope name.
-
-##### Returns:
-
- A function with signature `sum_reg(weights)` that applies the
- sum of all the input regularizers.
-
-
-
-- - -
-
-### `tf.contrib.layers.xavier_initializer(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer}
-
-Returns an initializer performing "Xavier" initialization for weights.
-
-This function implements the weight initialization from:
-
-Xavier Glorot and Yoshua Bengio (2010):
- Understanding the difficulty of training deep feedforward neural
- networks. International conference on artificial intelligence and
- statistics.
-
-This initializer is designed to keep the scale of the gradients roughly the
-same in all layers. For a uniform distribution this ends up being the range
-`x = sqrt(6. / (in + out)); [-x, x]`, and for a normal distribution a standard
-deviation of `sqrt(3. / (in + out))` is used.
-
-##### Args:
-
-
-* <b>`uniform`</b>: Whether to use uniform or normal distributed random initialization.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`dtype`</b>: The data type. Only floating point types are supported.
-
-##### Returns:
-
- An initializer for a weight matrix.
-
-
-- - -
-
-### `tf.contrib.layers.xavier_initializer_conv2d(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer_conv2d}
-
-Returns an initializer performing "Xavier" initialization for weights.
-
-This function implements the weight initialization from:
-
-Xavier Glorot and Yoshua Bengio (2010):
- Understanding the difficulty of training deep feedforward neural
- networks. International conference on artificial intelligence and
- statistics.
-
-This initializer is designed to keep the scale of the gradients roughly the
-same in all layers. For a uniform distribution this ends up being the range
-`x = sqrt(6. / (in + out)); [-x, x]`, and for a normal distribution a standard
-deviation of `sqrt(3. / (in + out))` is used.
-
-##### Args:
-
-
-* <b>`uniform`</b>: Whether to use uniform or normal distributed random initialization.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`dtype`</b>: The data type. Only floating point types are supported.
-
-##### Returns:
-
- An initializer for a weight matrix.
-
-
-- - -
-
-### `tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_IN', uniform=False, seed=None, dtype=tf.float32)` {#variance_scaling_initializer}
-
-Returns an initializer that generates tensors without scaling variance.
-
-When initializing a deep network, it is in principle advantageous to keep
-the scale of the input variance constant, so it does not explode or diminish
-by the time it reaches the final layer. This initializer uses the following
-formula:
-
-```python
-  if mode == 'FAN_IN':    # Count only number of input connections.
-    n = fan_in
-  elif mode == 'FAN_OUT': # Count only number of output connections.
-    n = fan_out
-  elif mode == 'FAN_AVG': # Average number of input and output connections.
-    n = (fan_in + fan_out) / 2.0
-
- truncated_normal(shape, 0.0, stddev=sqrt(factor / n))
-```
-
-* To get [Delving Deep into Rectifiers](
- http://arxiv.org/pdf/1502.01852v1.pdf), use (Default):<br/>
- `factor=2.0 mode='FAN_IN' uniform=False`
-* To get [Convolutional Architecture for Fast Feature Embedding](
- http://arxiv.org/abs/1408.5093), use:<br/>
- `factor=1.0 mode='FAN_IN' uniform=True`
-* To get [Understanding the difficulty of training deep feedforward neural
- networks](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf),
- use:<br/>
- `factor=1.0 mode='FAN_AVG' uniform=True.`
-* To get `xavier_initializer` use either:<br/>
- `factor=1.0 mode='FAN_AVG' uniform=True`, or<br/>
- `factor=1.0 mode='FAN_AVG' uniform=False`.
-
-##### Args:
-
-
-* <b>`factor`</b>: Float. A multiplicative factor.
-* <b>`mode`</b>: String. 'FAN_IN', 'FAN_OUT', 'FAN_AVG'.
-* <b>`uniform`</b>: Whether to use uniform or normal distributed random initialization.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`dtype`</b>: The data type. Only floating point types are supported.
-
-##### Returns:
-
- An initializer that generates tensors with unit variance.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `dtype` is not a floating point type.
-* <b>`TypeError`</b>: if `mode` is not in ['FAN_IN', 'FAN_OUT', 'FAN_AVG'].
-
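-A brief sketch applying the default (He-style) setting to a layer
-(TensorFlow 1.x; the sizes are arbitrary):
-
-```python
-import tensorflow as tf
-
-init = tf.contrib.layers.variance_scaling_initializer(
-    factor=2.0, mode='FAN_IN', uniform=False)
-x = tf.placeholder(tf.float32, [None, 128])
-h = tf.contrib.layers.fully_connected(x, 64, weights_initializer=init)
-```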
-
-
-- - -
-
-### `tf.contrib.layers.optimize_loss(loss, global_step, learning_rate, optimizer, gradient_noise_scale=None, gradient_multipliers=None, clip_gradients=None, learning_rate_decay_fn=None, update_ops=None, variables=None, name=None, summaries=None, colocate_gradients_with_ops=False)` {#optimize_loss}
-
-Given loss and parameters for optimizer, returns a training op.
-
-Various ways of passing optimizers include:
-
-- string, name of the optimizer like 'SGD', 'Adam', see OPTIMIZER_CLS_NAMES
- for full list. E.g. `optimize_loss(..., optimizer='Adam')`.
-- function, takes learning rate `Tensor` as argument and must return
- `Optimizer` instance. E.g. `optimize_loss(...,
- optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5))`.
- Alternatively, if `learning_rate` is `None`, the function takes no
- arguments. E.g. `optimize_loss(..., learning_rate=None,
- optimizer=lambda: tf.train.MomentumOptimizer(0.5, momentum=0.5))`.
-- class, subclass of `Optimizer` that takes only one required argument -
- learning rate, such as AdamOptimizer, AdagradOptimizer.
- E.g. `optimize_loss(..., optimizer=tf.train.AdagradOptimizer)`.
-- object, instance of subclass of `Optimizer`.
-  E.g., `optimize_loss(..., optimizer=tf.train.AdagradOptimizer(0.5))`.
-
-##### Args:
-
-
-* <b>`loss`</b>: Scalar `Tensor`.
-* <b>`global_step`</b>: Scalar int `Tensor`, step counter for each update. If not
- supplied, it will be fetched from the default graph (see
-  `tf.contrib.framework.get_global_step` for details). If it has
-  not been created, no step will be incremented with each weight
- update. `learning_rate_decay_fn` requires `global_step`.
-* <b>`learning_rate`</b>: float or `Tensor`, magnitude of update per each training
- step. Can be `None`.
-* <b>`optimizer`</b>: string, class or optimizer instance, used as trainer.
- string should be name of optimizer, like 'SGD',
- 'Adam', 'Adagrad'. Full list in OPTIMIZER_CLS_NAMES constant.
- class should be sub-class of `tf.Optimizer` that implements
- `compute_gradients` and `apply_gradients` functions.
- optimizer instance should be instantiation of `tf.Optimizer`
- sub-class and have `compute_gradients` and `apply_gradients`
- functions.
-* <b>`gradient_noise_scale`</b>: float or None, adds 0-mean normal noise scaled by this
- value.
-* <b>`gradient_multipliers`</b>: dict of variables or variable names to floats.
- If present, gradients for specified
- variables will be multiplied by given constant.
-* <b>`clip_gradients`</b>: float, callable or `None`. If a float is provided, global
-  clipping is applied to prevent the norm of the gradients from exceeding this
-  value. Alternatively, a callable can be provided, e.g. adaptive_clipping.
- This callable takes a `list` of `(gradients, variables)` `tuple`s and
- returns the same thing with the gradients modified.
-* <b>`learning_rate_decay_fn`</b>: function, takes `learning_rate` and `global_step`
- `Tensor`s, returns `Tensor`.
- Can be used to implement any learning rate decay
- functions.
- For example: `tf.train.exponential_decay`.
- Ignored if `learning_rate` is not supplied.
-* <b>`update_ops`</b>: list of update `Operation`s to execute at each step. If `None`,
- uses elements of UPDATE_OPS collection. The order of execution
- between `update_ops` and `loss` is non-deterministic.
-* <b>`variables`</b>: list of variables to optimize or
- `None` to use all trainable variables.
-* <b>`name`</b>: The name for this operation is used to scope operations and summaries.
-* <b>`summaries`</b>: List of internal quantities to visualize on TensorBoard.
-  If not set, only the loss and the learning rate will be reported. The
- complete list is in OPTIMIZER_SUMMARIES.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with the
- corresponding op.
-
-##### Returns:
-
- Training op.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if:
- * `loss` is an invalid type or shape.
- * `global_step` is an invalid type or shape.
- * `learning_rate` is an invalid type or value.
- * `optimizer` is wrong type.
- * `clip_gradients` is not float or callable.
- * `learning_rate` and `learning_rate_decay_fn` are supplied, but no
- `global_step` is available.
-
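-A minimal sketch of typical usage, passing the optimizer by name and
-clipping the global gradient norm (the model here is purely illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, shape=[None, 10])
-y = tf.placeholder(tf.float32, shape=[None, 1])
-predictions = tf.contrib.layers.fully_connected(x, 1, activation_fn=None)
-loss = tf.losses.mean_squared_error(labels=y, predictions=predictions)
-
-global_step = tf.contrib.framework.get_or_create_global_step()
-train_op = tf.contrib.layers.optimize_loss(
-    loss=loss,
-    global_step=global_step,
-    learning_rate=0.1,
-    optimizer='SGD',     # see OPTIMIZER_CLS_NAMES for valid names
-    clip_gradients=5.0)  # cap the global gradient norm at 5.0
-```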
-
-
-- - -
-
-### `tf.contrib.layers.summarize_activation(op)` {#summarize_activation}
-
-Summarize an activation.
-
-This applies the given activation and adds useful summaries specific to the
-activation.
-
-##### Args:
-
-
-* <b>`op`</b>: The tensor to summarize (assumed to be a layer activation).
-
-##### Returns:
-
- The summary op created to summarize `op`.
-
-
-- - -
-
-### `tf.contrib.layers.summarize_tensor(tensor, tag=None)` {#summarize_tensor}
-
-Summarize a tensor using a suitable summary type.
-
-This function adds a summary op for `tensor`. The type of summary depends on
-the shape of `tensor`. For scalars, a `scalar_summary` is created, for all
-other tensors, `histogram_summary` is used.
-
-##### Args:
-
-
-* <b>`tensor`</b>: The tensor to summarize.
-* <b>`tag`</b>: The tag to use. If None, the tensor's op name is used.
-
-##### Returns:
-
- The summary op created or None for string tensors.
-
-
-- - -
-
-### `tf.contrib.layers.summarize_tensors(tensors, summarizer=summarize_tensor)` {#summarize_tensors}
-
-Summarize a set of tensors.
-
-
-- - -
-
-### `tf.contrib.layers.summarize_collection(collection, name_filter=None, summarizer=summarize_tensor)` {#summarize_collection}
-
-Summarize a graph collection of tensors, possibly filtered by name.
-
-
-
-- - -
-
-### `tf.contrib.layers.summarize_activations(name_filter=None, summarizer=summarize_activation)` {#summarize_activations}
-
-Summarize activations, using `summarize_activation` to summarize.
-
-
-
-- - -
-
-### `tf.contrib.layers.bucketized_column(source_column, boundaries)` {#bucketized_column}
-
-Creates a _BucketizedColumn for discretizing dense input.
-
-##### Args:
-
-
-* <b>`source_column`</b>: A _RealValuedColumn defining a dense column.
-* <b>`boundaries`</b>: A list of floats specifying the boundaries. It has to be sorted.
-
-##### Returns:
-
- A _BucketizedColumn.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if 'boundaries' is empty or not sorted.
-
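-For example, a dense `age` feature can be discretized into age ranges
-(a sketch; the column name and boundary values are illustrative):
-
-```python
-import tensorflow as tf
-
-age = tf.contrib.layers.real_valued_column("age")
-age_buckets = tf.contrib.layers.bucketized_column(
-    source_column=age,
-    boundaries=[18, 25, 35, 50, 65])  # must be sorted
-```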
-
-- - -
-
-### `tf.contrib.layers.check_feature_columns(feature_columns)` {#check_feature_columns}
-
-Checks the validity of the set of FeatureColumns.
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable of instances or subclasses of FeatureColumn.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `feature_columns` is a dict.
-* <b>`ValueError`</b>: If there are duplicate feature column keys.
-
-
-- - -
-
-### `tf.contrib.layers.create_feature_spec_for_parsing(feature_columns)` {#create_feature_spec_for_parsing}
-
-Helper that prepares features config from input feature_columns.
-
-The returned feature config can be used as the `features` argument to
-`tf.parse_example`.
-
-Typical usage example:
-
-```python
-# Define features and transformations
-feature_a = sparse_column_with_vocabulary_file(...)
-feature_b = real_valued_column(...)
-feature_c_bucketized = bucketized_column(real_valued_column("feature_c"), ...)
-feature_a_x_feature_c = crossed_column(
- columns=[feature_a, feature_c_bucketized], ...)
-
-feature_columns = set(
- [feature_b, feature_c_bucketized, feature_a_x_feature_c])
-batch_examples = tf.parse_example(
- serialized=serialized_examples,
- features=create_feature_spec_for_parsing(feature_columns))
-```
-
-For the above example, create_feature_spec_for_parsing would return the dict:
-
-    {
-      "feature_a": parsing_ops.VarLenFeature(tf.string),
-      "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
-      "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32)
-    }
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable containing all the feature columns. All items
- should be instances of classes derived from _FeatureColumn, unless
- feature_columns is a dict -- in which case, this should be true of all
- values in the dict.
-
-##### Returns:
-
- A dict mapping feature keys to FixedLenFeature or VarLenFeature values.
-
-
-- - -
-
-### `tf.contrib.layers.crossed_column(columns, hash_bucket_size, combiner='sum', ckpt_to_load_from=None, tensor_name_in_ckpt=None, hash_key=None)` {#crossed_column}
-
-Creates a _CrossedColumn for performing feature crosses.
-
-##### Args:
-
-
-* <b>`columns`</b>: An iterable of _FeatureColumn. Items can be an instance of
- _SparseColumn, _CrossedColumn, or _BucketizedColumn.
-* <b>`hash_bucket_size`</b>: An int that is > 1. The number of buckets.
-* <b>`combiner`</b>: A string specifying how to reduce if there are multiple entries
- in a single row. Currently "mean", "sqrtn" and "sum" are supported, with
- "sum" the default. "sqrtn" often achieves good accuracy, in particular
-  with bag-of-words columns. Each of these can be thought of as an
-  example-level normalization on the column:
- * "sum": do not normalize
- * "mean": do l1 normalization
- * "sqrtn": do l2 normalization
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`ckpt_to_load_from`</b>: (Optional). String representing checkpoint name/pattern
- to restore the column weights. Required if `tensor_name_in_ckpt` is not
- None.
-* <b>`tensor_name_in_ckpt`</b>: (Optional). Name of the `Tensor` in the provided
- checkpoint from which to restore the column weights. Required if
- `ckpt_to_load_from` is not None.
-* <b>`hash_key`</b>: Specify the hash_key that will be used by the
-  `FingerprintCat64` function to combine the fingerprints of the crossed
-  columns in SparseFeatureCrossOp (optional).
-
-##### Returns:
-
- A _CrossedColumn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any item in columns is not an instance of _SparseColumn,
- _CrossedColumn, or _BucketizedColumn, or
- hash_bucket_size is not an int.
-* <b>`ValueError`</b>: if hash_bucket_size is not > 1 or
- len(columns) is not > 1.
-
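-A sketch of crossing two sparse columns (column names and bucket sizes are
-illustrative):
-
-```python
-import tensorflow as tf
-
-country = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "country", hash_bucket_size=100)
-language = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "language", hash_bucket_size=100)
-country_x_language = tf.contrib.layers.crossed_column(
-    columns=[country, language], hash_bucket_size=10000)
-```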
-
-- - -
-
-### `tf.contrib.layers.embedding_column(sparse_id_column, dimension, combiner='mean', initializer=None, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None)` {#embedding_column}
-
-Creates an `_EmbeddingColumn` for feeding sparse data into a DNN.
-
-##### Args:
-
-
-* <b>`sparse_id_column`</b>: A `_SparseColumn` which is created by for example
- `sparse_column_with_*` or crossed_column functions. Note that `combiner`
- defined in `sparse_id_column` is ignored.
-* <b>`dimension`</b>: An integer specifying dimension of the embedding.
-* <b>`combiner`</b>: A string specifying how to reduce if there are multiple entries
- in a single row. Currently "mean", "sqrtn" and "sum" are supported, with
- "mean" the default. "sqrtn" often achieves good accuracy, in particular
-  with bag-of-words columns. Each of these can be thought of as an
-  example-level normalization on the column:
- * "sum": do not normalize
- * "mean": do l1 normalization
- * "sqrtn": do l2 normalization
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`initializer`</b>: A variable initializer function to be used in embedding
- variable initialization. If not specified, defaults to
- `tf.truncated_normal_initializer` with mean 0.0 and standard deviation
- 1/sqrt(sparse_id_column.length).
-* <b>`ckpt_to_load_from`</b>: (Optional). String representing checkpoint name/pattern
- to restore the column weights. Required if `tensor_name_in_ckpt` is not
- None.
-* <b>`tensor_name_in_ckpt`</b>: (Optional). Name of the `Tensor` in the provided
- checkpoint from which to restore the column weights. Required if
- `ckpt_to_load_from` is not None.
-* <b>`max_norm`</b>: (Optional). If not None, embedding values are l2-normalized to
- the value of max_norm.
-
-##### Returns:
-
- An `_EmbeddingColumn`.
-
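-A sketch of embedding a hashed sparse column for a DNN (the column name and
-sizes are illustrative):
-
-```python
-import tensorflow as tf
-
-words = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "words", hash_bucket_size=10000)
-words_embedded = tf.contrib.layers.embedding_column(
-    sparse_id_column=words, dimension=16, combiner="mean")
-```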
-
-- - -
-
-### `tf.contrib.layers.scattered_embedding_column(column_name, size, dimension, hash_key, combiner='mean', initializer=None)` {#scattered_embedding_column}
-
-Creates an embedding column of a sparse feature using parameter hashing.
-
-The i-th embedding component of a value v is found by retrieving an
-embedding weight whose index is a fingerprint of the pair (v,i).
-
-An embedding column with `sparse_column_with_hash_bucket` such as
-
-    embedding_column(
-        sparse_column_with_hash_bucket(column_name, bucket_size),
-        dimension)
-
-could be replaced by
-
-    scattered_embedding_column(
-        column_name, size=bucket_size * dimension, dimension=dimension,
-        hash_key=tf.contrib.layers.SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY)
-
-for the same number of embedding parameters. This will hopefully reduce the
-impact of collisions, at the cost of slowing down training.
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining sparse column name.
-* <b>`size`</b>: An integer specifying the number of parameters in the embedding layer.
-* <b>`dimension`</b>: An integer specifying dimension of the embedding.
-* <b>`hash_key`</b>: Specify the hash_key that will be used by the
-  `FingerprintCat64` function to combine the fingerprints of the crossed
-  columns in SparseFeatureCrossOp.
-* <b>`combiner`</b>: A string specifying how to reduce if there are multiple entries
- in a single row. Currently "mean", "sqrtn" and "sum" are supported, with
- "mean" the default. "sqrtn" often achieves good accuracy, in particular
-  with bag-of-words columns. Each of these can be thought of as an
-  example-level normalization on the column:
- * "sum": do not normalize features in the column
- * "mean": do l1 normalization on features in the column
- * "sqrtn": do l2 normalization on features in the column
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`initializer`</b>: A variable initializer function to be used in embedding
- variable initialization. If not specified, defaults to
- `tf.truncated_normal_initializer` with mean 0 and standard deviation 0.1.
-
-##### Returns:
-
- A _ScatteredEmbeddingColumn.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if dimension or size is not a positive integer; or if combiner
- is not supported.
-
-
-- - -
-
-### `tf.contrib.layers.input_from_feature_columns(columns_to_tensors, feature_columns, weight_collections=None, trainable=True, scope=None)` {#input_from_feature_columns}
-
-A tf.contrib.layer style input layer builder based on FeatureColumns.
-
-Generally a single example in training data is described with feature columns.
-At the first layer of the model, this column oriented data should be converted
-to a single tensor. Each feature column needs a different kind of operation
-during this conversion. For example sparse features need a totally different
-handling than continuous features.
-
-Example:
-
-```python
- # Building model for training
- columns_to_tensor = tf.parse_example(...)
- first_layer = input_from_feature_columns(
- columns_to_tensors=columns_to_tensor,
- feature_columns=feature_columns)
- second_layer = fully_connected(inputs=first_layer, ...)
- ...
-```
-
-where feature_columns can be defined as follows:
-
-```python
- sparse_feature = sparse_column_with_hash_bucket(
- column_name="sparse_col", ...)
- sparse_feature_emb = embedding_column(sparse_id_column=sparse_feature, ...)
- real_valued_feature = real_valued_column(...)
- real_valued_buckets = bucketized_column(
- source_column=real_valued_feature, ...)
-
- feature_columns=[sparse_feature_emb, real_valued_buckets]
-```
-
-##### Args:
-
-
-* <b>`columns_to_tensors`</b>: A mapping from feature column to tensors. A
-  string key means a base (untransformed) feature. The key can also be a
-  FeatureColumn, meaning that the FeatureColumn was already transformed by
-  the input pipeline. For example, `inflow` may have handled transformations.
-* <b>`feature_columns`</b>: A set containing all the feature columns. All items
-  in the set should be instances of classes derived from FeatureColumn.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A Tensor which can be consumed by hidden layers in the neural network.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if FeatureColumn cannot be consumed by a neural network.
-
-
-- - -
-
-### `tf.contrib.layers.joint_weighted_sum_from_feature_columns(columns_to_tensors, feature_columns, num_outputs, weight_collections=None, trainable=True, scope=None)` {#joint_weighted_sum_from_feature_columns}
-
-A restricted linear prediction builder based on FeatureColumns.
-
-As long as all feature columns are unweighted sparse columns this computes the
-prediction of a linear model which stores all weights in a single variable.
-
-##### Args:
-
-
-* <b>`columns_to_tensors`</b>: A mapping from feature column to tensors. A
-  string key means a base (untransformed) feature. The key can also be a
-  FeatureColumn, meaning that the FeatureColumn was already transformed by
-  the input pipeline. For example, `inflow` may have handled transformations.
-* <b>`feature_columns`</b>: A set containing all the feature columns. All items in the
- set should be instances of classes derived from FeatureColumn.
-* <b>`num_outputs`</b>: An integer specifying number of outputs. Default value is 1.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A tuple containing:
-
- * A Tensor which represents predictions of a linear model.
- * A list of Variables storing the weights.
- * A Variable which is used for bias.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if FeatureColumn cannot be used for linear predictions.
-
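-A sketch of building a joint linear model over two unweighted sparse columns
-(the placeholder-based input pipeline and names are illustrative):
-
-```python
-import tensorflow as tf
-
-country = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "country", hash_bucket_size=100)
-language = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "language", hash_bucket_size=100)
-feature_columns = [country, language]
-
-serialized = tf.placeholder(tf.string, shape=[None])
-columns_to_tensors = tf.parse_example(
-    serialized,
-    tf.contrib.layers.create_feature_spec_for_parsing(feature_columns))
-
-logits, weights, bias = (
-    tf.contrib.layers.joint_weighted_sum_from_feature_columns(
-        columns_to_tensors=columns_to_tensors,
-        feature_columns=feature_columns,
-        num_outputs=1))
-```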
-
-- - -
-
-### `tf.contrib.layers.make_place_holder_tensors_for_base_features(feature_columns)` {#make_place_holder_tensors_for_base_features}
-
-Returns placeholder tensors for inference.
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable containing all the feature columns. All items
- should be instances of classes derived from _FeatureColumn.
-
-##### Returns:
-
- A dict mapping feature keys to SparseTensors (sparse columns) or
- placeholder Tensors (dense columns).
-
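-A sketch with one dense and one sparse column (names are illustrative):
-
-```python
-import tensorflow as tf
-
-age = tf.contrib.layers.real_valued_column("age")
-country = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "country", hash_bucket_size=100)
-placeholders = tf.contrib.layers.make_place_holder_tensors_for_base_features(
-    [age, country])
-# placeholders["age"] is a dense placeholder Tensor;
-# placeholders["country"] is a sparse placeholder.
-```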
-
-- - -
-
-### `tf.contrib.layers.multi_class_target(*args, **kwargs)` {#multi_class_target}
-
-Creates a _TargetColumn for multi class single label classification. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-12.
-Instructions for updating:
-This file will be removed after the deprecation date. Please switch to third_party/tensorflow/contrib/learn/python/learn/estimators/head.py
-
-The target column uses softmax cross entropy loss.
-
-##### Args:
-
-
-* <b>`n_classes`</b>: Integer, number of classes, must be >= 2
-* <b>`label_name`</b>: String, name of the key in the label dict. Can be `None`
-  if the label is a tensor (single-headed models).
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-
-##### Returns:
-
- An instance of _MultiClassTargetColumn.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if n_classes is < 2
-
-
-- - -
-
-### `tf.contrib.layers.one_hot_column(sparse_id_column)` {#one_hot_column}
-
-Creates an `_OneHotColumn` for a one-hot or multi-hot repr in a DNN.
-
-##### Args:
-
-
-* <b>`sparse_id_column`</b>: A _SparseColumn which is created by
- `sparse_column_with_*`
- or crossed_column functions. Note that `combiner` defined in
- `sparse_id_column` is ignored.
-
-##### Returns:
-
- An _OneHotColumn.
-
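-A sketch of one-hot encoding a small vocabulary column (the keys are
-illustrative):
-
-```python
-import tensorflow as tf
-
-color = tf.contrib.layers.sparse_column_with_keys(
-    column_name="color", keys=["red", "green", "blue"])
-color_one_hot = tf.contrib.layers.one_hot_column(color)
-```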
-
-- - -
-
-### `tf.contrib.layers.parse_feature_columns_from_examples(serialized, feature_columns, name=None, example_names=None)` {#parse_feature_columns_from_examples}
-
-Parses tf.Examples to extract tensors for given feature_columns.
-
-This is a wrapper of 'tf.parse_example'.
-
-Example:
-
-```python
-columns_to_tensor = parse_feature_columns_from_examples(
- serialized=my_data,
- feature_columns=my_features)
-
-# Where my_features are:
-# Define features and transformations
-sparse_feature_a = sparse_column_with_keys(
- column_name="sparse_feature_a", keys=["AB", "CD", ...])
-
-embedding_feature_a = embedding_column(
- sparse_id_column=sparse_feature_a, dimension=3, combiner="sum")
-
-sparse_feature_b = sparse_column_with_hash_bucket(
- column_name="sparse_feature_b", hash_bucket_size=1000)
-
-embedding_feature_b = embedding_column(
- sparse_id_column=sparse_feature_b, dimension=16, combiner="sum")
-
-crossed_feature_a_x_b = crossed_column(
- columns=[sparse_feature_a, sparse_feature_b], hash_bucket_size=10000)
-
-real_feature = real_valued_column("real_feature")
-real_feature_buckets = bucketized_column(
- source_column=real_feature, boundaries=[...])
-
-my_features = [embedding_feature_b, real_feature_buckets, embedding_feature_a]
-```
-
-##### Args:
-
-
-* <b>`serialized`</b>: A vector (1-D Tensor) of strings, a batch of binary
- serialized `Example` protos.
-* <b>`feature_columns`</b>: An iterable containing all the feature columns. All items
- should be instances of classes derived from _FeatureColumn.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`example_names`</b>: A vector (1-D Tensor) of strings (optional), the names of
- the serialized protos in the batch.
-
-##### Returns:
-
- A `dict` mapping FeatureColumn to `Tensor` and `SparseTensor` values.
-
-
-- - -
-
-### `tf.contrib.layers.parse_feature_columns_from_sequence_examples(serialized, context_feature_columns, sequence_feature_columns, name=None, example_name=None)` {#parse_feature_columns_from_sequence_examples}
-
-Parses tf.SequenceExamples to extract tensors for given `FeatureColumn`s.
-
-##### Args:
-
-
-* <b>`serialized`</b>: A scalar (0-D Tensor) of type string, a single serialized
- `SequenceExample` proto.
-* <b>`context_feature_columns`</b>: An iterable containing the feature columns for
- context features. All items should be instances of classes derived from
- `_FeatureColumn`. Can be `None`.
-* <b>`sequence_feature_columns`</b>: An iterable containing the feature columns for
- sequence features. All items should be instances of classes derived from
- `_FeatureColumn`. Can be `None`.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`example_name`</b>: A scalar (0-D Tensor) of type string (optional), the
-  name of the serialized proto.
-
-##### Returns:
-
- A tuple consisting of:
-
-* <b>`context_features`</b>: a dict mapping `FeatureColumns` from
- `context_feature_columns` to their parsed `Tensors`/`SparseTensor`s.
-* <b>`sequence_features`</b>: a dict mapping `FeatureColumns` from
- `sequence_feature_columns` to their parsed `Tensors`/`SparseTensor`s.
-
-
-- - -
-
-### `tf.contrib.layers.real_valued_column(column_name, dimension=1, default_value=None, dtype=tf.float32, normalizer=None)` {#real_valued_column}
-
-Creates a `_RealValuedColumn` for dense numeric data.
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining real valued column name.
-* <b>`dimension`</b>: An integer specifying dimension of the real valued column.
- The default is 1. When dimension is not None, the Tensor representing
- the _RealValuedColumn will have the shape of [batch_size, dimension].
-  A None dimension means the feature column should be treated as variable
-  length and will be parsed as a `SparseTensor`.
-* <b>`default_value`</b>: A single value compatible with dtype or a list of values
- compatible with dtype which the column takes on during tf.Example parsing
- if data is missing. When dimension is not None, a default value of None
- will cause tf.parse_example to fail if an example does not contain this
- column. If a single value is provided, the same value will be applied as
- the default value for every dimension. If a list of values is provided,
- the length of the list should be equal to the value of `dimension`.
- Only scalar default value is supported in case dimension is not specified.
-* <b>`dtype`</b>: defines the type of values. Default value is tf.float32. Must be a
- non-quantized, real integer or floating point type.
-* <b>`normalizer`</b>: If not None, a function that can be used to normalize the value
- of the real valued column after default_value is applied for parsing.
- Normalizer function takes the input tensor as its argument, and returns
- the output tensor. (e.g. lambda x: (x - 3.0) / 4.2). Note that for
- variable length columns, the normalizer should expect an input_tensor of
- type `SparseTensor`.
-
-##### Returns:
-
- A _RealValuedColumn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if dimension is not an int
-* <b>`ValueError`</b>: if dimension is not a positive integer
-* <b>`TypeError`</b>: if default_value is a list but its length is not equal to the
- value of `dimension`.
-* <b>`TypeError`</b>: if default_value is not compatible with dtype.
-* <b>`ValueError`</b>: if dtype is not convertible to tf.float32.
-
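-A sketch covering the common options (the column name, default value and
-normalizer are illustrative):
-
-```python
-import tensorflow as tf
-
-price = tf.contrib.layers.real_valued_column(
-    "price",
-    dimension=1,
-    default_value=0.0,                     # used when the value is missing
-    dtype=tf.float32,
-    normalizer=lambda x: (x - 3.0) / 4.2)  # applied after default_value
-```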
-
-- - -
-
-### `tf.contrib.layers.shared_embedding_columns(sparse_id_columns, dimension, combiner='mean', shared_embedding_name=None, initializer=None, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None)` {#shared_embedding_columns}
-
-Creates a list of `_EmbeddingColumn` sharing the same embedding.
-
-##### Args:
-
-
-* <b>`sparse_id_columns`</b>: An iterable of `_SparseColumn`, such as those created by
- `sparse_column_with_*` or crossed_column functions. Note that `combiner`
- defined in each sparse_id_column is ignored.
-* <b>`dimension`</b>: An integer specifying dimension of the embedding.
-* <b>`combiner`</b>: A string specifying how to reduce if there are multiple entries
- in a single row. Currently "mean", "sqrtn" and "sum" are supported, with
- "mean" the default. "sqrtn" often achieves good accuracy, in particular
-  with bag-of-words columns. Each of these can be thought of as an
-  example-level normalization on the column:
- * "sum": do not normalize
- * "mean": do l1 normalization
- * "sqrtn": do l2 normalization
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`shared_embedding_name`</b>: (Optional). A string specifying the name of shared
- embedding weights. This will be needed if you want to reference the shared
- embedding separately from the generated `_EmbeddingColumn`.
-* <b>`initializer`</b>: A variable initializer function to be used in embedding
- variable initialization. If not specified, defaults to
- `tf.truncated_normal_initializer` with mean 0.0 and standard deviation
- 1/sqrt(sparse_id_columns[0].length).
-* <b>`ckpt_to_load_from`</b>: (Optional). String representing checkpoint name/pattern
- to restore the column weights. Required if `tensor_name_in_ckpt` is not
- None.
-* <b>`tensor_name_in_ckpt`</b>: (Optional). Name of the `Tensor` in the provided
- checkpoint from which to restore the column weights. Required if
- `ckpt_to_load_from` is not None.
-* <b>`max_norm`</b>: (Optional). If not None, embedding values are l2-normalized to
- the value of max_norm.
-
-##### Returns:
-
- A tuple of `_EmbeddingColumn` with shared embedding space.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if sparse_id_columns is empty, or its elements are not
- compatible with each other.
-* <b>`TypeError`</b>: if `sparse_id_columns` is not a sequence or is a string. If at
- least one element of `sparse_id_columns` is not a `SparseTensor`.
-
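-A sketch of sharing one embedding space between two compatible sparse columns
-(names and sizes are illustrative):
-
-```python
-import tensorflow as tf
-
-query_words = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "query_words", hash_bucket_size=10000)
-doc_words = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "doc_words", hash_bucket_size=10000)
-query_emb, doc_emb = tf.contrib.layers.shared_embedding_columns(
-    sparse_id_columns=[query_words, doc_words], dimension=16)
-```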
-
-- - -
-
-### `tf.contrib.layers.sparse_column_with_hash_bucket(column_name, hash_bucket_size, combiner='sum', dtype=tf.string)` {#sparse_column_with_hash_bucket}
-
-Creates a _SparseColumn with hashed bucket configuration.
-
-Use this when your sparse features are in string or integer format, but you
-don't have a vocab file that maps each value to an integer ID:
-
-    output_id = Hash(input_feature_string) % bucket_size
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining sparse column name.
-* <b>`hash_bucket_size`</b>: An int that is > 1. The number of buckets.
-* <b>`combiner`</b>: A string specifying how to reduce if the sparse column is
- multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum"
- the default. "sqrtn" often achieves good accuracy, in particular with
- bag-of-words columns.
- * "sum": do not normalize features in the column
- * "mean": do l1 normalization on features in the column
- * "sqrtn": do l2 normalization on features in the column
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`dtype`</b>: The type of features. Only string and integer types are supported.
-
-##### Returns:
-
- A _SparseColumn with hashed bucket configuration
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: hash_bucket_size is not greater than 1.
-* <b>`ValueError`</b>: dtype is neither string nor integer.
-
-
-- - -
-
-### `tf.contrib.layers.sparse_column_with_integerized_feature(column_name, bucket_size, combiner='sum', dtype=tf.int64)` {#sparse_column_with_integerized_feature}
-
-Creates an integerized _SparseColumn.
-
-Use this when your features are already pre-integerized into int64 IDs, that
-is, when the feature values themselves are the IDs that you want in the
-output. Integerized means we can use the feature value itself as the id.
-
-Typically this is used for reading contiguous ranges of integer indexes, but
-it doesn't have to be. The output value is simply copied from the
-input_feature, whatever it is. Just be aware, however, that large gaps of
-unused integers can be wasteful downstream (for instance, a one-hot tensor
-built from these IDs will contain entries for the unused integers that are
-always zero).
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining sparse column name.
-* <b>`bucket_size`</b>: An int that is > 1. The number of buckets. It should be
-  larger than the maximum feature value. In other words, features in this
-  column should be int64 values in the range [0, bucket_size).
-* <b>`combiner`</b>: A string specifying how to reduce if the sparse column is
- multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum"
- the default. "sqrtn" often achieves good accuracy, in particular with
- bag-of-words columns.
- * "sum": do not normalize features in the column
- * "mean": do l1 normalization on features in the column
- * "sqrtn": do l2 normalization on features in the column
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`dtype`</b>: Type of features. It should be an integer type. Default value is
- dtypes.int64.
-
-##### Returns:
-
- An integerized _SparseColumn definition.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: bucket_size is not greater than 1.
-* <b>`ValueError`</b>: dtype is not integer.
-
-
-- - -
-
-### `tf.contrib.layers.sparse_column_with_keys(column_name, keys, default_value=-1, combiner='sum')` {#sparse_column_with_keys}
-
-Creates a _SparseColumn with keys.
-
-Lookup logic is as follows:
-
-    lookup_id = index_of_feature_in_keys if feature in keys else default_value
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining sparse column name.
-* <b>`keys`</b>: a string list defining vocabulary.
-* <b>`default_value`</b>: The value to use for out-of-vocabulary feature values.
- Default is -1.
-* <b>`combiner`</b>: A string specifying how to reduce if the sparse column is
- multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum"
- the default. "sqrtn" often achieves good accuracy, in particular with
- bag-of-words columns.
- * "sum": do not normalize features in the column
- * "mean": do l1 normalization on features in the column
- * "sqrtn": do l2 normalization on features in the column
- For more information: `tf.embedding_lookup_sparse`.
-
-##### Returns:
-
- A _SparseColumnKeys with keys configuration.
-
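-A sketch with a small in-code vocabulary (the keys are illustrative):
-
-```python
-import tensorflow as tf
-
-weekday = tf.contrib.layers.sparse_column_with_keys(
-    column_name="weekday",
-    keys=["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
-    default_value=-1)  # out-of-vocabulary values map to -1
-```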
-
-- - -
-
-### `tf.contrib.layers.weighted_sparse_column(sparse_id_column, weight_column_name, dtype=tf.float32)` {#weighted_sparse_column}
-
-Creates a _SparseColumn by combining sparse_id_column with a weight column.
-
-Example:
-
- ```python
- sparse_feature = sparse_column_with_hash_bucket(column_name="sparse_col",
- hash_bucket_size=1000)
- weighted_feature = weighted_sparse_column(sparse_id_column=sparse_feature,
- weight_column_name="weights_col")
- ```
-
-  This configuration assumes that the input dictionary of the model contains
-  the following two items:
-    * (key="sparse_col", value=sparse_tensor) where sparse_tensor is
-      a SparseTensor.
-    * (key="weights_col", value=weights_tensor) where weights_tensor
-      is a SparseTensor.
-
-  The following are assumed to hold:
-    * sparse_tensor.indices = weights_tensor.indices
-    * sparse_tensor.dense_shape = weights_tensor.dense_shape
-
-##### Args:
-
-
-* <b>`sparse_id_column`</b>: A `_SparseColumn` which is created by
- `sparse_column_with_*` functions.
-* <b>`weight_column_name`</b>: A string defining a sparse column name which represents
- weight or value of the corresponding sparse id feature.
-* <b>`dtype`</b>: Type of weights, such as `tf.float32`. Only floating and integer
- weights are supported.
-
-##### Returns:
-
- A _WeightedSparseColumn composed of two sparse features: one represents id,
- the other represents weight (value) of the id feature in that example.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if dtype is not convertible to float.
-
-
-- - -
-
-### `tf.contrib.layers.weighted_sum_from_feature_columns(columns_to_tensors, feature_columns, num_outputs, weight_collections=None, trainable=True, scope=None)` {#weighted_sum_from_feature_columns}
-
-A tf.contrib.layer style linear prediction builder based on FeatureColumns.
-
-Generally a single example in training data is described with feature columns.
-This function generates weighted sum for each num_outputs. Weighted sum refers
-to logits in classification problems. It refers to prediction itself for
-linear regression problems.
-
-Example:
-
-  ```python
- # Building model for training
- feature_columns = (
- real_valued_column("my_feature1"),
- ...
- )
- columns_to_tensor = tf.parse_example(...)
- logits = weighted_sum_from_feature_columns(
- columns_to_tensors=columns_to_tensor,
- feature_columns=feature_columns,
- num_outputs=1)
- loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels,
- logits=logits)
- ```
-
-##### Args:
-
-
-* <b>`columns_to_tensors`</b>: A mapping from feature column to tensors. A
-  string key means a base (untransformed) feature. The key can also be a
-  FeatureColumn, meaning that the FeatureColumn was already transformed by
-  the input pipeline. For example, `inflow` may have handled transformations.
-* <b>`feature_columns`</b>: A set containing all the feature columns. All items in the
- set should be instances of classes derived from FeatureColumn.
-* <b>`num_outputs`</b>: An integer specifying number of outputs. Default value is 1.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A tuple containing:
-
- * A Tensor which represents predictions of a linear model.
- * A dictionary which maps feature_column to corresponding Variable.
- * A Variable which is used for bias.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if FeatureColumn cannot be used for linear predictions.
-
-
-- - -
-
-### `tf.contrib.layers.infer_real_valued_columns(features)` {#infer_real_valued_columns}
-
-
-
-
-- - -
-
-### `tf.contrib.layers.sequence_input_from_feature_columns(*args, **kwargs)` {#sequence_input_from_feature_columns}
-
-Builds inputs for sequence models from `FeatureColumn`s. (experimental)
-
-THIS FUNCTION IS EXPERIMENTAL. It may change or be removed at any time, and without warning.
-
-
-See documentation for `input_from_feature_columns`. The following types of
-`FeatureColumn` are permitted in `feature_columns`: `_OneHotColumn`,
-`_EmbeddingColumn`, `_ScatteredEmbeddingColumn`, `_RealValuedColumn`,
-`_DataFrameColumn`. In addition, columns in `feature_columns` may not be
-constructed using any of the following: `ScatteredEmbeddingColumn`,
-`BucketizedColumn`, `CrossedColumn`.
-
-##### Args:
-
-
-* <b>`columns_to_tensors`</b>: A mapping from feature column to tensors. A
-  string key means a base (untransformed) feature. The key can also be a
-  FeatureColumn, meaning that the FeatureColumn was already transformed by
-  the input pipeline. For example, `inflow` may have handled transformations.
-* <b>`feature_columns`</b>: A set containing all the feature columns. All items
-  in the set should be instances of classes derived from FeatureColumn.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A Tensor which can be consumed by hidden layers in the neural network.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if FeatureColumn cannot be consumed by a neural network.
-
-
-
-## Other Functions and Classes
-- - -
-
-### `tf.contrib.layers.legacy_fully_connected(x, num_output_units, activation_fn=None, weight_init=_initializer, bias_init=Zeros(), name=None, weight_collections=('weights',), bias_collections=('biases',), output_collections=('activations',), trainable=True, weight_regularizer=None, bias_regularizer=None)` {#legacy_fully_connected}
-
-Adds the parameters for a fully connected layer and returns the output.
-
-A fully connected layer is generally defined as a matrix multiply:
-`y = f(w * x + b)` where `f` is given by `activation_fn`. If
-`activation_fn` is `None`, the result of `y = w * x + b` is
-returned.
-
-If `x` has shape [\\(\text{dim}_0, \text{dim}_1, ..., \text{dim}_n\\)]
-with more than 2 dimensions (\\(n > 1\\)), then we repeat the matrix
-multiply along the first dimensions. The result r is a tensor of shape
-[\\(\text{dim}_0, ..., \text{dim}_{n-1},\\) `num_output_units`],
-where \\(r_{i_0, ..., i_{n-1}, k} =
-\sum_{0 \leq j < \text{dim}_n} x_{i_0, ..., i_{n-1}, j} \cdot w_{j, k}\\).
-This is accomplished by reshaping `x` to 2-D
-[\\(\text{dim}_0 \cdot ... \cdot \text{dim}_{n-1}, \text{dim}_n\\)]
-before the matrix multiply and afterwards reshaping it to
-[\\(\text{dim}_0, ..., \text{dim}_{n-1},\\) `num_output_units`].
-
-This op creates `w` and optionally `b`. Bias (`b`) can be disabled by setting
-`bias_init` to `None`.
-
-The variable creation is compatible with `tf.variable_scope` and so can be
-reused with `tf.variable_scope` or `tf.make_template`.
-
-Most of the details of variable creation can be controlled by specifying the
-initializers (`weight_init` and `bias_init`) and in which collections to place
-the created variables (`weight_collections` and `bias_collections`; note that
-the variables are always added to the `VARIABLES` collection). The output of
-the layer can be placed in custom collections using `output_collections`.
-The collections arguments default to `WEIGHTS`, `BIASES` and `ACTIVATIONS`,
-respectively.
-
-A per layer regularization can be specified by setting `weight_regularizer`
-and `bias_regularizer`, which are applied to the weights and biases
-respectively, and whose output is added to the `REGULARIZATION_LOSSES`
-collection.
-
-##### Args:
-
-
-* <b>`x`</b>: The input `Tensor`.
-* <b>`num_output_units`</b>: The size of the output.
-* <b>`activation_fn`</b>: Activation function, default set to None to skip it and
- maintain a linear activation.
-* <b>`weight_init`</b>: An optional weight initialization, defaults to
- `xavier_initializer`.
-* <b>`bias_init`</b>: An initializer for the bias, defaults to 0. Set to `None` in
- order to disable bias.
-* <b>`name`</b>: The name for this operation is used to name operations and to find
- variables. If specified it must be unique for this scope, otherwise a
- unique name starting with "fully_connected" will be created. See
- `tf.variable_scope` for details.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`bias_collections`</b>: List of graph collections to which biases are added.
-* <b>`output_collections`</b>: List of graph collections to which outputs are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`weight_regularizer`</b>: A regularizer like the result of
- `l1_regularizer` or `l2_regularizer`. Used for weights.
-* <b>`bias_regularizer`</b>: A regularizer like the result of
- `l1_regularizer` or `l2_regularizer`. Used for biases.
-
-##### Returns:
-
- The output of the fully connected layer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If x has rank less than 2 or if its last dimension is not set.
-
-
-- - -
-
-### `tf.contrib.layers.legacy_linear(x, num_output_units, weight_init=_initializer, bias_init=Zeros(), name=None, weight_collections=('weights',), bias_collections=('biases',), output_collections=('activations',), trainable=True, weight_regularizer=None, bias_regularizer=None)` {#legacy_linear}
-
-A partial application of `legacy_fully_connected` with the activation
-function fixed to `None`, i.e. a linear fully connected layer.
-
-
-- - -
-
-### `tf.contrib.layers.legacy_relu(x, num_output_units, weight_init=_initializer, bias_init=Zeros(), name=None, weight_collections=('weights',), bias_collections=('biases',), output_collections=('activations',), trainable=True, weight_regularizer=None, bias_regularizer=None)` {#legacy_relu}
-
-A partial application of `legacy_fully_connected` with the activation
-function fixed to `tf.nn.relu`.
-
-
-- - -
-
-### `tf.contrib.layers.regression_target(*args, **kwargs)` {#regression_target}
-
-Creates a _TargetColumn for linear regression. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-12.
-Instructions for updating:
-This file will be removed after the deprecation date. Please switch to third_party/tensorflow/contrib/learn/python/learn/estimators/head.py
-
-##### Args:
-
-
-* <b>`label_name`</b>: String, name of the key in the label dict. Can be `None`
-  if the label is a tensor (single-headed models).
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`label_dimension`</b>: dimension of the target for multilabels.
-
-##### Returns:
-
- An instance of _TargetColumn
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.learn.md b/tensorflow/g3doc/api_docs/python/contrib.learn.md
deleted file mode 100644
index 12e5bd6da0..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.learn.md
+++ /dev/null
@@ -1,5510 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Learn (contrib)
-[TOC]
-
-High level API for learning. See the @{$python/contrib.learn} guide.
-
-- - -
-
-### `class tf.contrib.learn.BaseEstimator` {#BaseEstimator}
-
-Abstract BaseEstimator class to train and evaluate TensorFlow models.
-
-Users should not instantiate or subclass this class. Instead, use `Estimator`.
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.__init__(model_dir=None, config=None)` {#BaseEstimator.__init__}
-
-Initializes a BaseEstimator instance.
-
-##### Args:
-
-
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator to
-  continue training a previously saved model.
-* <b>`config`</b>: A RunConfig instance.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.__repr__()` {#BaseEstimator.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.config` {#BaseEstimator.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.evaluate(*args, **kwargs)` {#BaseEstimator.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least
-  one of `input_fn` or `feed_fn` is provided; or if `metrics` is neither
-  `None` nor a `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.export(*args, **kwargs)` {#BaseEstimator.export}
-
-Exports inference graph into given dir. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
-Instructions for updating:
-The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will become required args, and use_deprecated_input_fn will default to False and be removed altogether.
-
-##### Args:
-
-
-* <b>`export_dir`</b>: A string containing a directory to write the exported graph
- and checkpoints.
-* <b>`input_fn`</b>: If `use_deprecated_input_fn` is true, then a function that given
- `Tensor` of `Example` strings, parses it into features that are then
- passed to the model. Otherwise, a function that takes no argument and
- returns a tuple of (features, labels), where features is a dict of
- string key to `Tensor` and labels is a `Tensor` that's currently not
- used (and so can be `None`).
-* <b>`input_feature_key`</b>: Only used if `use_deprecated_input_fn` is false. String
-  key into the features dict returned by `input_fn` that corresponds to
-  the raw `Example` strings `Tensor` that the exported model will take as
- input. Can only be `None` if you're using a custom `signature_fn` that
- does not use the first arg (examples).
-* <b>`use_deprecated_input_fn`</b>: Determines the signature format of `input_fn`.
-* <b>`signature_fn`</b>: Function that returns a default signature and a named
- signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
- for features and `Tensor` or `dict` of `Tensor`s for predictions.
-* <b>`prediction_key`</b>: The key for a tensor in the `predictions` dict (output
- from the `model_fn`) to use as the `predictions` input to the
- `signature_fn`. Optional. If `None`, predictions will pass to
- `signature_fn` without filtering.
-* <b>`default_batch_size`</b>: Default batch size of the `Example` placeholder.
-* <b>`exports_to_keep`</b>: Number of exports to keep.
-* <b>`checkpoint_path`</b>: the checkpoint path of the model to be exported. If it is
- `None` (which is default), will use the latest checkpoint in
- export_dir.
-
-##### Returns:
-
- The string path to the exported directory. NB: this functionality was
- added ca. 2016/09/25; clients that depend on the return value may need
- to handle the case where this function returns None because subclasses
- are not returning a value.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.fit(*args, **kwargs)` {#BaseEstimator.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.get_params(deep=True)` {#BaseEstimator.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.get_variable_names()` {#BaseEstimator.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.get_variable_value(name)` {#BaseEstimator.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.model_dir` {#BaseEstimator.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.partial_fit(*args, **kwargs)` {#BaseEstimator.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model is taking a long
-time to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
- iterator that returns array of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.predict(*args, **kwargs)` {#BaseEstimator.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x` and `batch_size` must be `None`.
-* <b>`batch_size`</b>: Override default batch size. If set, `input_fn` must be
-  `None`.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns all.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- A numpy array of predicted classes or regression values if the
- constructor's `model_fn` returns a `Tensor` for `predictions` or a `dict`
- of numpy arrays if `model_fn` returns a `dict`. Returns an iterable of
- predictions if as_iterable is True.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If x and input_fn are both provided or both `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.set_params(**params)` {#BaseEstimator.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-
-- - -
-
-### `class tf.contrib.learn.Estimator` {#Estimator}
-
-Estimator class is the basic TensorFlow model trainer/evaluator.
-
-- - -
-
-#### `tf.contrib.learn.Estimator.__init__(model_fn=None, model_dir=None, config=None, params=None, feature_engineering_fn=None)` {#Estimator.__init__}
-
-Constructs an `Estimator` instance.
-
-##### Args:
-
-
-* <b>`model_fn`</b>: Model function. Follows the signature:
- * Args:
- * `features`: single `Tensor` or `dict` of `Tensor`s
- (depending on data passed to `fit`),
- * `labels`: `Tensor` or `dict` of `Tensor`s (for multi-head
- models). If mode is `ModeKeys.INFER`, `labels=None` will be
- passed. If the `model_fn`'s signature does not accept
- `mode`, the `model_fn` must still be able to handle
- `labels=None`.
-    * `mode`: Optional. Specifies if this is training, evaluation or
-      prediction. See `ModeKeys`.
-    * `params`: Optional `dict` of hyperparameters. Will receive what
-      is passed to Estimator in `params` parameter. This allows
-      configuring Estimators from hyperparameter tuning.
- * `config`: Optional configuration object. Will receive what is passed
- to Estimator in `config` parameter, or the default `config`.
- Allows updating things in your model_fn based on configuration
- such as `num_ps_replicas`.
- * `model_dir`: Optional directory where model parameters, graph etc
- are saved. Will receive what is passed to Estimator in
- `model_dir` parameter, or the default `model_dir`. Allows
- updating things in your model_fn that expect model_dir, such as
- training hooks.
-
- * Returns:
- `ModelFnOps`
-
- Also supports a legacy signature which returns tuple of:
-
- * predictions: `Tensor`, `SparseTensor` or dictionary of same.
- Can also be any type that is convertible to a `Tensor` or
- `SparseTensor`, or dictionary of same.
- * loss: Scalar loss `Tensor`.
- * train_op: Training update `Tensor` or `Operation`.
-
-  The following signatures are supported for the function:
-
- * `(features, labels) -> (predictions, loss, train_op)`
- * `(features, labels, mode) -> (predictions, loss, train_op)`
- * `(features, labels, mode, params) -> (predictions, loss, train_op)`
- * `(features, labels, mode, params, config) ->
- (predictions, loss, train_op)`
- * `(features, labels, mode, params, config, model_dir) ->
- (predictions, loss, train_op)`
-
-
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator to
-  continue training a previously saved model.
-* <b>`config`</b>: Configuration object.
-* <b>`params`</b>: `dict` of hyper parameters that will be passed into `model_fn`.
- Keys are names of parameters, values are basic python types.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into `model_fn`. Please check `model_fn` for
- a definition of features and labels.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: parameters of `model_fn` don't match `params`.
-
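-A minimal sketch of a `model_fn` using the legacy
-`(predictions, loss, train_op)` tuple described above (the linear model and
-feature key `"x"` are illustrative):
-
-```python
-import tensorflow as tf
-
-def my_model_fn(features, labels, mode):
-  # A single linear layer; features["x"] is assumed to be a float matrix.
-  predictions = tf.contrib.layers.fully_connected(
-      features["x"], 1, activation_fn=None)
-  loss, train_op = None, None
-  if mode != tf.contrib.learn.ModeKeys.INFER:
-    loss = tf.losses.mean_squared_error(labels, predictions)
-    train_op = tf.contrib.layers.optimize_loss(
-        loss=loss,
-        global_step=tf.contrib.framework.get_global_step(),
-        learning_rate=0.1,
-        optimizer='SGD')
-  return predictions, loss, train_op
-
-estimator = tf.contrib.learn.Estimator(model_fn=my_model_fn)
-```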
-
-- - -
-
-#### `tf.contrib.learn.Estimator.__repr__()` {#Estimator.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.config` {#Estimator.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.evaluate(*args, **kwargs)` {#Estimator.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least
-  one of `input_fn` or `feed_fn` is provided; or if `metrics` is neither
-  `None` nor a `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.export(*args, **kwargs)` {#Estimator.export}
-
-Exports inference graph into given dir. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
-Instructions for updating:
-The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will become required args, and use_deprecated_input_fn will default to False and be removed altogether.
-
-##### Args:
-
-
-* <b>`export_dir`</b>: A string containing a directory to write the exported graph
- and checkpoints.
-* <b>`input_fn`</b>: If `use_deprecated_input_fn` is true, then a function that given
- `Tensor` of `Example` strings, parses it into features that are then
- passed to the model. Otherwise, a function that takes no argument and
- returns a tuple of (features, labels), where features is a dict of
- string key to `Tensor` and labels is a `Tensor` that's currently not
- used (and so can be `None`).
-* <b>`input_feature_key`</b>: Only used if `use_deprecated_input_fn` is false. String
-  key into the features dict returned by `input_fn` that corresponds to
-  the raw `Example` strings `Tensor` that the exported model will take as
- input. Can only be `None` if you're using a custom `signature_fn` that
- does not use the first arg (examples).
-* <b>`use_deprecated_input_fn`</b>: Determines the signature format of `input_fn`.
-* <b>`signature_fn`</b>: Function that returns a default signature and a named
- signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
- for features and `Tensor` or `dict` of `Tensor`s for predictions.
-* <b>`prediction_key`</b>: The key for a tensor in the `predictions` dict (output
- from the `model_fn`) to use as the `predictions` input to the
- `signature_fn`. Optional. If `None`, predictions will pass to
- `signature_fn` without filtering.
-* <b>`default_batch_size`</b>: Default batch size of the `Example` placeholder.
-* <b>`exports_to_keep`</b>: Number of exports to keep.
-* <b>`checkpoint_path`</b>: the checkpoint path of the model to be exported. If it is
- `None` (which is default), will use the latest checkpoint in
- export_dir.
-
-##### Returns:
-
- The string path to the exported directory. NB: this functionality was
- added ca. 2016/09/25; clients that depend on the return value may need
- to handle the case where this function returns None because subclasses
- are not returning a value.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#Estimator.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
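-A hedged sketch of a `serving_input_fn` built by hand; the feature spec,
-export path, and `estimator` variable are hypothetical placeholders:
-
-```python
-import tensorflow as tf
-
-def serving_input_fn():
-  # Serialized tf.Example strings come in; a single float feature comes out.
-  serialized = tf.placeholder(dtype=tf.string, shape=[None])
-  feature_spec = {'x': tf.FixedLenFeature([1], tf.float32)}
-  features = tf.parse_example(serialized, feature_spec)
-  return tf.contrib.learn.InputFnOps(
-      features=features,
-      labels=None,
-      default_inputs={'examples': serialized})
-
-export_path = estimator.export_savedmodel(
-    export_dir_base='/tmp/my_export',  # hypothetical path
-    serving_input_fn=serving_input_fn)
-```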
-
-- - -
-
-#### `tf.contrib.learn.Estimator.fit(*args, **kwargs)` {#Estimator.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.get_params(deep=True)` {#Estimator.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.get_variable_names()` {#Estimator.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.get_variable_value(name)` {#Estimator.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
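-A quick sketch of inspecting learned variables with these two methods
-(`est` is any fitted `Estimator`):
-
-```python
-# Print every variable name and the shape of its value.
-for name in est.get_variable_names():
-  print(name, est.get_variable_value(name).shape)
-```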
-
-- - -
-
-#### `tf.contrib.learn.Estimator.model_dir` {#Estimator.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.partial_fit(*args, **kwargs)` {#Estimator.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model is taking a long time
-to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
-    returns arrays of features. The training input samples for fitting the
-    model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-    iterator that returns arrays of labels. The training label values
-    (class labels in classification, real numbers in regression). If set,
-    `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
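-A sketch of incremental, chunked training; `make_chunk_input_fn` and
-`num_chunks` are hypothetical names for a factory over data shards:
-
-```python
-# Each call continues training from the previous global step.
-for i in range(num_chunks):  # num_chunks is hypothetical
-  est.partial_fit(input_fn=make_chunk_input_fn(i), steps=10)
-```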
-
-- - -
-
-#### `tf.contrib.learn.Estimator.predict(*args, **kwargs)` {#Estimator.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
-    returns arrays of features. The input samples for which to compute
-    predictions. If set, `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x` and `batch_size` must be `None`.
-* <b>`batch_size`</b>: Override default batch size. If set, `input_fn` must be
-    `None`.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns all.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- A numpy array of predicted classes or regression values if the
- constructor's `model_fn` returns a `Tensor` for `predictions` or a `dict`
- of numpy arrays if `model_fn` returns a `dict`. Returns an iterable of
- predictions if as_iterable is True.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If x and input_fn are both provided or both `None`.
-
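-A sketch of streaming predictions; `predict_input_fn` is hypothetical and
-must terminate (e.g. built with `num_epochs=1`) for the iterable to end:
-
-```python
-for prediction in est.predict(input_fn=predict_input_fn, as_iterable=True):
-  print(prediction)
-```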
-
-- - -
-
-#### `tf.contrib.learn.Estimator.set_params(**params)` {#Estimator.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-
-- - -
-
-### `class tf.contrib.learn.Trainable` {#Trainable}
-
-Interface for objects that are trainable by, e.g., `Experiment`.
-- - -
-
-#### `tf.contrib.learn.Trainable.fit(x=None, y=None, input_fn=None, steps=None, batch_size=None, monitors=None, max_steps=None)` {#Trainable.fit}
-
-Trains a model given training data: `x` (features) and `y` (labels).
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...] or a dictionary of matrices.
-    Can be an iterator that returns arrays of features or a dictionary of
-    arrays of features. The training input samples for fitting the model.
-    If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs], or a dictionary
-    of the same. Can be an iterator that returns arrays of labels or a
-    dictionary of arrays of labels. The training label values (class labels
-    in classification, real numbers in regression). If set, `input_fn` must
-    be `None`. Note: For classification, label values must be integers
-    representing the class index (i.e. values from 0 to n_classes-1).
-* <b>`input_fn`</b>: Input function returning a tuple of:
- features - `Tensor` or dictionary of string feature name to `Tensor`.
- labels - `Tensor` or dictionary of `Tensor` with labels.
- If input_fn is set, `x`, `y`, and `batch_size` must be `None`.
-* <b>`steps`</b>: Number of steps for which to train the model. If `None`, train
-    forever. `steps` works incrementally: two calls to `fit(steps=10)` train
-    for 20 steps in total. If you don't want incremental behavior, set
-    `max_steps` instead. If set, `max_steps` must be `None`.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-* <b>`max_steps`</b>: Number of total steps for which to train model. If `None`,
- train forever. If set, `steps` must be `None`.
-
- Two calls to `fit(steps=100)` means 200 training
- iterations. On the other hand, two calls to `fit(max_steps=100)` means
- that the second call will not do any iteration since first call did
- all 100 steps.
-
-##### Returns:
-
- `self`, for chaining.
-
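-A sketch of the `steps` vs. `max_steps` semantics described above (`est`
-and `train_input_fn` are hypothetical):
-
-```python
-est.fit(input_fn=train_input_fn, steps=100)      # trains 100 steps
-est.fit(input_fn=train_input_fn, steps=100)      # trains 100 more (200 total)
-
-est.fit(input_fn=train_input_fn, max_steps=100)  # trains up to global step 100
-est.fit(input_fn=train_input_fn, max_steps=100)  # no-op: already at step 100
-```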
-
-
-- - -
-
-### `class tf.contrib.learn.Evaluable` {#Evaluable}
-
-Interface for objects that are evaluable by, e.g., `Experiment`.
-- - -
-
-#### `tf.contrib.learn.Evaluable.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=None, steps=None, metrics=None, name=None, checkpoint_path=None, hooks=None)` {#Evaluable.evaluate}
-
-Evaluates given model with provided evaluation data.
-
-Stop conditions - we evaluate on the given input data until one of the
-following:
-- If `steps` is provided, and `steps` batches of size `batch_size` are
-processed.
-- If `input_fn` is provided, and it raises an end-of-input
-exception (`OutOfRangeError` or `StopIteration`).
-- If `x` is provided, and all items in `x` have been processed.
-
-The return value is a dict containing the metrics specified in `metrics`, as
-well as an entry `global_step` which contains the value of the global step
-for which this evaluation was performed.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...] or a dictionary of matrices
-    containing the input samples for fitting the model. Can be an iterator
-    that returns arrays of features or a dictionary of arrays of features.
-    If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs] containing the
-    label values (class labels in classification, real numbers in
-    regression), or a dictionary of multiple vectors/matrices. Can be an
-    iterator that returns arrays of targets or a dictionary of arrays of
-    targets. If set, `input_fn` must be `None`. Note: For classification,
-    label values must be integers representing the class index (i.e. values
-    from 0 to n_classes-1).
-* <b>`input_fn`</b>: Input function returning a tuple of:
- features - Dictionary of string feature name to `Tensor` or `Tensor`.
- labels - `Tensor` or dictionary of `Tensor` with labels.
- If input_fn is set, `x`, `y`, and `batch_size` must be `None`. If
- `steps` is not provided, this should raise `OutOfRangeError` or
- `StopIteration` after the desired amount of data (e.g., one epoch) has
- been provided. See "Stop conditions" above for specifics.
-* <b>`feed_fn`</b>: Function creating a feed dict every time it is called. Called
- once per iteration. Must be `None` if `input_fn` is provided.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`, if specified. Must be `None` if `input_fn` is
- provided.
-* <b>`steps`</b>: Number of steps for which to evaluate model. If `None`, evaluate
- until `x` is consumed or `input_fn` raises an end-of-input exception.
- See "Stop conditions" above for specifics.
-* <b>`metrics`</b>: Dict of metrics to run. If None, the default metric functions
- are used; if {}, no metrics are used. Otherwise, `metrics` should map
- friendly names for the metric to a `MetricSpec` object defining which
- model outputs to evaluate against which labels with which metric
- function.
-
- Metric ops should support streaming, e.g., returning `update_op` and
- `value` tensors. For example, see the options defined in
- `../../../metrics/python/ops/metrics_ops.py`.
-
-* <b>`name`</b>: Name of the evaluation if user needs to run multiple evaluations on
- different data sets, such as on training data vs test data.
-* <b>`checkpoint_path`</b>: Path of a specific checkpoint to evaluate. If `None`, the
- latest checkpoint in `model_dir` is used.
-* <b>`hooks`</b>: List of `SessionRunHook` subclass instances. Used for callbacks
- inside the evaluation call.
-
-##### Returns:
-
- Returns `dict` with evaluation results.
-
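-A sketch of an evaluation that stops at end-of-input; `eval_input_fn` is a
-hypothetical input function built with `num_epochs=1` so it raises
-`OutOfRangeError` after one pass:
-
-```python
-results = est.evaluate(input_fn=eval_input_fn)  # no `steps`: runs to exhaustion
-print(results['global_step'], results.get('loss'))
-```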
-
-- - -
-
-#### `tf.contrib.learn.Evaluable.model_dir` {#Evaluable.model_dir}
-
-Returns a path in which the eval process will look for checkpoints.
-
-
-
-- - -
-
-### `class tf.contrib.learn.KMeansClustering` {#KMeansClustering}
-
-An Estimator for K-Means clustering.
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.__init__(num_clusters, model_dir=None, initial_clusters='random', distance_metric='squared_euclidean', random_seed=0, use_mini_batch=True, mini_batch_steps_per_iteration=1, kmeans_plus_plus_num_retries=2, relative_tolerance=None, config=None)` {#KMeansClustering.__init__}
-
-Creates a model for running KMeans training and inference.
-
-##### Args:
-
-
-* <b>`num_clusters`</b>: number of clusters to train.
-* <b>`model_dir`</b>: the directory to save the model results and log files.
-* <b>`initial_clusters`</b>: specifies how to initialize the clusters for training.
- See clustering_ops.kmeans for the possible values.
-* <b>`distance_metric`</b>: the distance metric used for clustering.
- See clustering_ops.kmeans for the possible values.
-* <b>`random_seed`</b>: Python integer. Seed for PRNG used to initialize centers.
-* <b>`use_mini_batch`</b>: If true, use the mini-batch k-means algorithm. Else assume
- full batch.
-* <b>`mini_batch_steps_per_iteration`</b>: number of steps after which the updated
- cluster centers are synced back to a master copy. See clustering_ops.py
- for more details.
-* <b>`kmeans_plus_plus_num_retries`</b>: For each point that is sampled during
- kmeans++ initialization, this parameter specifies the number of
- additional points to draw from the current distribution before selecting
- the best. If a negative value is specified, a heuristic is used to
- sample O(log(num_to_sample)) additional points.
-* <b>`relative_tolerance`</b>: A relative tolerance of change in the loss between
- iterations. Stops learning if the loss changes less than this amount.
- Note that this may not work correctly if use_mini_batch=True.
-* <b>`config`</b>: See `Estimator`.
-
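-A minimal training sketch on synthetic data (shapes and values are
-arbitrary; the `(features, labels)` tuple with `None` labels follows the
-`Trainable` interface):
-
-```python
-import numpy as np
-import tensorflow as tf
-
-points = np.random.rand(1000, 2).astype(np.float32)  # synthetic 2-D points
-
-kmeans = tf.contrib.learn.KMeansClustering(num_clusters=3)
-kmeans.fit(input_fn=lambda: (tf.constant(points), None), steps=100)
-centers = kmeans.clusters()  # [3, 2] array of cluster centers
-```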
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.__repr__()` {#KMeansClustering.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.clusters()` {#KMeansClustering.clusters}
-
-Returns cluster centers.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.config` {#KMeansClustering.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.evaluate(*args, **kwargs)` {#KMeansClustering.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
-    `input_fn` or `feed_fn` is provided.
-    Or if `metrics` is neither `None` nor a `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.export(*args, **kwargs)` {#KMeansClustering.export}
-
-Exports inference graph into given dir. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
-Instructions for updating:
-The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will become required args, and use_deprecated_input_fn will default to False and be removed altogether.
-
-##### Args:
-
-
-* <b>`export_dir`</b>: A string containing a directory to write the exported graph
- and checkpoints.
-* <b>`input_fn`</b>: If `use_deprecated_input_fn` is true, then a function that given
- `Tensor` of `Example` strings, parses it into features that are then
- passed to the model. Otherwise, a function that takes no argument and
- returns a tuple of (features, labels), where features is a dict of
- string key to `Tensor` and labels is a `Tensor` that's currently not
- used (and so can be `None`).
-* <b>`input_feature_key`</b>: Only used if `use_deprecated_input_fn` is false. String
-    key into the features dict returned by `input_fn` that corresponds to the
-    raw `Example` strings `Tensor` that the exported model will take as
-    input. Can only be `None` if you're using a custom `signature_fn` that
-    does not use the first arg (examples).
-* <b>`use_deprecated_input_fn`</b>: Determines the signature format of `input_fn`.
-* <b>`signature_fn`</b>: Function that returns a default signature and a named
- signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
- for features and `Tensor` or `dict` of `Tensor`s for predictions.
-* <b>`prediction_key`</b>: The key for a tensor in the `predictions` dict (output
- from the `model_fn`) to use as the `predictions` input to the
- `signature_fn`. Optional. If `None`, predictions will pass to
- `signature_fn` without filtering.
-* <b>`default_batch_size`</b>: Default batch size of the `Example` placeholder.
-* <b>`exports_to_keep`</b>: Number of exports to keep.
-* <b>`checkpoint_path`</b>: the checkpoint path of the model to be exported. If it is
- `None` (which is default), will use the latest checkpoint in
- export_dir.
-
-##### Returns:
-
- The string path to the exported directory. NB: this functionality was
- added ca. 2016/09/25; clients that depend on the return value may need
- to handle the case where this function returns None because subclasses
- are not returning a value.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#KMeansClustering.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.fit(*args, **kwargs)` {#KMeansClustering.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.get_params(deep=True)` {#KMeansClustering.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.get_variable_names()` {#KMeansClustering.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.get_variable_value(name)` {#KMeansClustering.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.model_dir` {#KMeansClustering.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.partial_fit(*args, **kwargs)` {#KMeansClustering.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model is taking a long time
-to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
-    returns arrays of features. The training input samples for fitting the
-    model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-    iterator that returns arrays of labels. The training label values
-    (class labels in classification, real numbers in regression). If set,
-    `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.predict(*args, **kwargs)` {#KMeansClustering.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
-    returns arrays of features. The input samples for which to compute
-    predictions. If set, `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x` and `batch_size` must be `None`.
-* <b>`batch_size`</b>: Override default batch size. If set, `input_fn` must be
-    `None`.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns all.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- A numpy array of predicted classes or regression values if the
- constructor's `model_fn` returns a `Tensor` for `predictions` or a `dict`
- of numpy arrays if `model_fn` returns a `dict`. Returns an iterable of
- predictions if as_iterable is True.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If x and input_fn are both provided or both `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.predict_cluster_idx(input_fn=None)` {#KMeansClustering.predict_cluster_idx}
-
-Yields predicted cluster indices.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.score(input_fn=None, steps=None)` {#KMeansClustering.score}
-
-Returns the total sum of distances to the nearest clusters.
-
-Note that this function is different from the corresponding one in sklearn
-which returns the negative of the sum of distances.
-
-##### Args:
-
-
-* <b>`input_fn`</b>: see predict.
-* <b>`steps`</b>: see predict.
-
-##### Returns:
-
- Total sum of distances to nearest clusters.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.set_params(**params)` {#KMeansClustering.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.transform(input_fn=None, as_iterable=False)` {#KMeansClustering.transform}
-
-Transforms each element to distances to cluster centers.
-
-Note that this function is different from the corresponding one in sklearn.
-For SQUARED_EUCLIDEAN distance metric, sklearn transform returns the
-EUCLIDEAN distance, while this function returns the SQUARED_EUCLIDEAN
-distance.
-
-##### Args:
-
-
-* <b>`input_fn`</b>: see predict.
-* <b>`as_iterable`</b>: see predict.
-
-##### Returns:
-
- Array with same number of rows as x, and num_clusters columns, containing
- distances to the cluster centers.
-
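-A sketch tying the three inspection methods together; `predict_input_fn`
-is a hypothetical input function over new points:
-
-```python
-idx = list(kmeans.predict_cluster_idx(input_fn=predict_input_fn))
-sse = kmeans.score(input_fn=predict_input_fn, steps=1)  # sum of sq. distances
-dists = kmeans.transform(input_fn=predict_input_fn)     # [n_points, num_clusters]
-```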
-
-
-- - -
-
-### `class tf.contrib.learn.ModeKeys` {#ModeKeys}
-
-Standard names for model modes.
-
-The following standard keys are defined:
-
-* `TRAIN`: training mode.
-* `EVAL`: evaluation mode.
-* `INFER`: inference mode.
-
-- - -
-
-### `class tf.contrib.learn.ModelFnOps` {#ModelFnOps}
-
-Ops returned from a model_fn.
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.__getnewargs__()` {#ModelFnOps.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.__getstate__()` {#ModelFnOps.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.__new__(cls, mode, predictions=None, loss=None, train_op=None, eval_metric_ops=None, output_alternatives=None, training_chief_hooks=None, training_hooks=None, scaffold=None)` {#ModelFnOps.__new__}
-
-Creates a validated `ModelFnOps` instance.
-
-For a multi-headed model, the predictions dict here will contain the outputs
-of all of the heads. However: at serving time, requests will be made
-specifically for one or more heads, and the RPCs used for these requests may
-differ by problem type (i.e., regression, classification, other). The
-purpose of the output_alternatives dict is to aid in exporting a SavedModel
-from which such head-specific queries can be served. These
-output_alternatives will be combined with input_alternatives (see
-`saved_model_export_utils`) to produce a set of `SignatureDef`s specifying
-the valid requests that can be served from this model.
-
-For a single-headed model, it is still advisable to provide
-output_alternatives with a single entry, because this is how the problem
-type is communicated for export and serving. If output_alternatives is not
-given, the resulting SavedModel will support only one head of unspecified
-type.
-
-##### Args:
-
-
-* <b>`mode`</b>: One of `ModeKeys`. Specifies if this is training, evaluation, or
-    prediction.
-* <b>`predictions`</b>: Predictions `Tensor` or dict of `Tensor`.
-* <b>`loss`</b>: Training loss `Tensor`.
-* <b>`train_op`</b>: Op for the training step.
-* <b>`eval_metric_ops`</b>: Dict of metric results keyed by name. The values of the
- dict are the results of calling a metric function, such as `Tensor`.
-* <b>`output_alternatives`</b>: a dict of
- `{submodel_name: (problem_type, {tensor_name: Tensor})}`, where
- `submodel_name` is a submodel identifier that should be consistent
- across the pipeline (here likely taken from the name of each `Head`,
- for models that use them), `problem_type` is a `ProblemType`,
- `tensor_name` is a symbolic name for an output Tensor possibly but not
- necessarily taken from `PredictionKey`, and `Tensor` is the
- corresponding output Tensor itself.
-* <b>`training_chief_hooks`</b>: A list of `SessionRunHook` objects that will be
- run on the chief worker during training.
-* <b>`training_hooks`</b>: A list of `SessionRunHook` objects that will be run on
- all workers during training.
-* <b>`scaffold`</b>: A `tf.train.Scaffold` object that can be used to set
- initialization, saver, and more to be used in training.
-
-##### Returns:
-
- A validated `ModelFnOps` object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If validation fails.
-
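-A hedged sketch of a `model_fn` that returns a validated `ModelFnOps` for a
-toy linear regression (every name here is illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import learn
-
-def my_model_fn(features, labels, mode):
-  w = tf.get_variable('w', shape=[1])
-  b = tf.get_variable('b', shape=[1])
-  predictions = features['x'] * w + b
-  loss, train_op = None, None
-  if mode != learn.ModeKeys.INFER:
-    loss = tf.reduce_mean(tf.square(predictions - labels))
-  if mode == learn.ModeKeys.TRAIN:
-    train_op = tf.contrib.layers.optimize_loss(
-        loss, tf.contrib.framework.get_global_step(),
-        learning_rate=0.1, optimizer='SGD')
-  return learn.ModelFnOps(mode=mode, predictions=predictions,
-                          loss=loss, train_op=train_op)
-```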
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.__repr__()` {#ModelFnOps.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.eval_metric_ops` {#ModelFnOps.eval_metric_ops}
-
-Alias for field number 3
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.loss` {#ModelFnOps.loss}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.output_alternatives` {#ModelFnOps.output_alternatives}
-
-Alias for field number 4
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.predictions` {#ModelFnOps.predictions}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.scaffold` {#ModelFnOps.scaffold}
-
-Alias for field number 7
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.train_op` {#ModelFnOps.train_op}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.training_chief_hooks` {#ModelFnOps.training_chief_hooks}
-
-Alias for field number 5
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.training_hooks` {#ModelFnOps.training_hooks}
-
-Alias for field number 6
-
-
-
-- - -
-
-### `class tf.contrib.learn.MetricSpec` {#MetricSpec}
-
-MetricSpec connects a model to metric functions.
-
-The MetricSpec class contains all information necessary to connect the
-output of a `model_fn` to the metrics (usually, streaming metrics) that are
-used in evaluation.
-
-It is passed in the `metrics` argument of `Estimator.evaluate`. The
-`Estimator` then knows which predictions, labels, and weight to use to call a
-given metric function.
-
-When building the ops to run in evaluation, `Estimator` will call
-`create_metric_ops`, which will connect the given `metric_fn` to the model
-as detailed in the docstring for `create_metric_ops`, and return the metric.
-
-Example:
-
-Assume a model has an input function which returns inputs containing
-(among other things) a tensor with key "input_key", and a labels dictionary
-containing "label_key", and that the `model_fn` for this model returns a
-prediction with key "prediction_key".
-
-In order to compute the accuracy of the "prediction_key" prediction, we
-would add
-
-```
-"prediction accuracy": MetricSpec(metric_fn=prediction_accuracy_fn,
- prediction_key="prediction_key",
- label_key="label_key")
-```
-
-to the metrics argument to `evaluate`. `prediction_accuracy_fn` can be either
-a predefined function in metric_ops (e.g., `streaming_accuracy`) or a custom
-function you define.
-
-If we would like the accuracy to be weighted by "input_key", we can add that
-as the `weight_key` argument.
-
-```
-"prediction accuracy": MetricSpec(metric_fn=prediction_accuracy_fn,
- prediction_key="prediction_key",
- label_key="label_key",
- weight_key="input_key")
-```
-
-An end-to-end example is as follows:
-
-```
-estimator = tf.contrib.learn.Estimator(...)
-estimator.fit(...)
-_ = estimator.evaluate(
- input_fn=input_fn,
- steps=1,
- metrics={
- 'prediction accuracy':
- metric_spec.MetricSpec(
- metric_fn=prediction_accuracy_fn,
- prediction_key="prediction_key",
- label_key="label_key")
- })
-```
-- - -
-
-#### `tf.contrib.learn.MetricSpec.__init__(metric_fn, prediction_key=None, label_key=None, weight_key=None)` {#MetricSpec.__init__}
-
-Constructor.
-
-Creates a MetricSpec.
-
-##### Args:
-
-
-* <b>`metric_fn`</b>: A function to use as a metric. See `_adapt_metric_fn` for
- rules on how `predictions`, `labels`, and `weights` are passed to this
- function. This must return either a single `Tensor`, which is
- interpreted as a value of this metric, or a pair
- `(value_op, update_op)`, where `value_op` is the op to call to
- obtain the value of the metric, and `update_op` should be run for
- each batch to update internal state.
-* <b>`prediction_key`</b>: The key for a tensor in the `predictions` dict (output
- from the `model_fn`) to use as the `predictions` input to the
- `metric_fn`. Optional. If `None`, the `model_fn` must return a single
- tensor or a dict with only a single entry as `predictions`.
-* <b>`label_key`</b>: The key for a tensor in the `labels` dict (output from the
- `input_fn`) to use as the `labels` input to the `metric_fn`.
- Optional. If `None`, the `input_fn` must return a single tensor or a
- dict with only a single entry as `labels`.
-* <b>`weight_key`</b>: The key for a tensor in the `inputs` dict (output from the
- `input_fn`) to use as the `weights` input to the `metric_fn`.
- Optional. If `None`, no weights will be passed to the `metric_fn`.
-
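-A sketch of a custom streaming `metric_fn` wrapped in a `MetricSpec` (the
-keys are the hypothetical ones used in the class example above):
-
-```python
-import tensorflow as tf
-
-def prediction_accuracy_fn(predictions, labels, weights=None):
-  # Returns a (value_op, update_op) pair, as required for streaming metrics.
-  return tf.contrib.metrics.streaming_accuracy(predictions, labels,
-                                               weights=weights)
-
-spec = tf.contrib.learn.MetricSpec(metric_fn=prediction_accuracy_fn,
-                                   prediction_key="prediction_key",
-                                   label_key="label_key")
-```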
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.__str__()` {#MetricSpec.__str__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.create_metric_ops(inputs, labels, predictions)` {#MetricSpec.create_metric_ops}
-
-Connect our `metric_fn` to the specified members of the given dicts.
-
-This function will call the `metric_fn` given in our constructor as follows:
-
-```
-  metric_fn(predictions[self.prediction_key],
-            labels[self.label_key],
-            weights=inputs[self.weight_key])
-```
-
-And returns the result. The `weights` argument is only passed if
-`self.weight_key` is not `None`.
-
-`predictions` and `labels` may be single tensors as well as dicts. If
-`predictions` is a single tensor, `self.prediction_key` must be `None`. If
-`predictions` is a single element dict, `self.prediction_key` is allowed to
-be `None`. Conversely, if `labels` is a single tensor, `self.label_key` must
-be `None`. If `labels` is a single element dict, `self.label_key` is allowed
-to be `None`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A dict of inputs produced by the `input_fn`
-* <b>`labels`</b>: A dict of labels or a single label tensor produced by the
- `input_fn`.
-* <b>`predictions`</b>: A dict of predictions or a single tensor produced by the
- `model_fn`.
-
-##### Returns:
-
- The result of calling `metric_fn`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` or `labels` is a single `Tensor` and
- `self.prediction_key` or `self.label_key` is not `None`; or if
- `self.label_key` is `None` but `labels` is a dict with more than one
- element, or if `self.prediction_key` is `None` but `predictions` is a
- dict with more than one element.
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.label_key` {#MetricSpec.label_key}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.metric_fn` {#MetricSpec.metric_fn}
-
-Metric function.
-
-This function accepts named args: `predictions`, `labels`, `weights`. It
-returns a single `Tensor` or `(value_op, update_op)` pair. See `metric_fn`
-constructor argument for more details.
-
-##### Returns:
-
- Function, see `metric_fn` constructor argument for more details.
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.prediction_key` {#MetricSpec.prediction_key}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.weight_key` {#MetricSpec.weight_key}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.learn.PredictionKey` {#PredictionKey}
-
-
-
-- - -
-
-### `class tf.contrib.learn.DNNClassifier` {#DNNClassifier}
-
-A classifier for TensorFlow DNN models.
-
-Example:
-
-```python
-sparse_feature_a = sparse_column_with_hash_bucket(...)
-sparse_feature_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
- ...)
-sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
- ...)
-
-estimator = DNNClassifier(
- feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- hidden_units=[1024, 512, 256])
-
-# Or estimator using the ProximalAdagradOptimizer optimizer with
-# regularization.
-estimator = DNNClassifier(
- feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- hidden_units=[1024, 512, 256],
- optimizer=tf.train.ProximalAdagradOptimizer(
- learning_rate=0.1,
- l1_regularization_strength=0.001
- ))
-
-# Input builders
-def input_fn_train():  # returns x, y (where y represents label's class index).
-  pass
-estimator.fit(input_fn=input_fn_train)
-
-def input_fn_eval():  # returns x, y (where y represents label's class index).
-  pass
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x)  # returns predicted labels (i.e. label's class index).
-```
-
-Input of `fit` and `evaluate` should have following features,
- otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`, a feature with
- `key=weight_column_name` whose value is a `Tensor`.
-* for each `column` in `feature_columns`:
- - if `column` is a `SparseColumn`, a feature with `key=column.name`
- whose `value` is a `SparseTensor`.
- - if `column` is a `WeightedSparseColumn`, two features: the first with
- `key` the id column name, the second with `key` the weight column name.
- Both features' `value` must be a `SparseTensor`.
- - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
- whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.__init__(hidden_units, feature_columns, model_dir=None, n_classes=2, weight_column_name=None, optimizer=None, activation_fn=relu, dropout=None, gradient_clip_norm=None, enable_centered_bias=False, config=None, feature_engineering_fn=None, embedding_lr_multipliers=None, input_layer_min_slice_size=None)` {#DNNClassifier.__init__}
-
-Initializes a DNNClassifier instance.
-
-##### Args:
-
-
-* <b>`hidden_units`</b>: List of hidden units per layer. All layers are fully
- connected. Ex. `[64, 32]` means first layer has 64 nodes and second one
- has 32.
-* <b>`feature_columns`</b>: An iterable containing all the feature columns used by
- the model. All items in the set should be instances of classes derived
- from `FeatureColumn`.
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-    also be used to load checkpoints from the directory into an estimator to
-    continue training a previously saved model.
-* <b>`n_classes`</b>: number of label classes. Default is binary classification.
- It must be greater than 1. Note: Class labels are integers representing
- the class index (i.e. values from 0 to n_classes-1). For arbitrary
- label values (e.g. string labels), convert to class indices first.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`optimizer`</b>: An instance of `tf.Optimizer` used to train the model. If
- `None`, will use an Adagrad optimizer.
-* <b>`activation_fn`</b>: Activation function applied to each layer. If `None`, will
- use `tf.nn.relu`.
-* <b>`dropout`</b>: When not `None`, the probability we will drop out a given
- coordinate.
-* <b>`gradient_clip_norm`</b>: A float > 0. If provided, gradients are
- clipped to their global norm with this clipping ratio. See
- `tf.clip_by_global_norm` for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`config`</b>: `RunConfig` object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-* <b>`embedding_lr_multipliers`</b>: Optional. A dictionary from `EmbeddingColumn` to
- a `float` multiplier. Multiplier will be used to multiply with
- learning rate for the embedding variables.
-* <b>`input_layer_min_slice_size`</b>: Optional. The min slice size of input layer
- partitions. If not provided, will use the default of 64M.
-
-##### Returns:
-
- A `DNNClassifier` estimator.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `n_classes` < 2.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.__repr__()` {#DNNClassifier.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.bias_` {#DNNClassifier.bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.config` {#DNNClassifier.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.evaluate(*args, **kwargs)` {#DNNClassifier.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
-    `input_fn` or `feed_fn` is provided.
-    Or if `metrics` is neither `None` nor a `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#DNNClassifier.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#DNNClassifier.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.fit(*args, **kwargs)` {#DNNClassifier.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.get_params(deep=True)` {#DNNClassifier.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.get_variable_names()` {#DNNClassifier.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.get_variable_value(name)` {#DNNClassifier.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.model_dir` {#DNNClassifier.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.partial_fit(*args, **kwargs)` {#DNNClassifier.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model is taking a long time
-to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
-    returns arrays of features. The training input samples for fitting the
-    model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-    iterator that returns arrays of labels. The training label values
-    (class labels in classification, real numbers in regression). If set,
-    `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.predict(*args, **kwargs)` {#DNNClassifier.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_classes, or set `outputs` argument.
-
-By default, returns predicted classes. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_classes` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns classes.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
- If `outputs` is set, returns a dict of predictions.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.predict_classes(*args, **kwargs)` {#DNNClassifier.predict_classes}
-
-Returns predicted classes for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.predict_proba(*args, **kwargs)` {#DNNClassifier.predict_proba}
-
-Returns predicted probabilities for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x and y must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted probabilities with shape [batch_size, n_classes]
- (or an iterable of predicted probabilities if as_iterable is True).
-
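-A sketch contrasting class and probability predictions on the estimator
-from the class example; `predict_input_fn` is hypothetical and should
-terminate (e.g. `num_epochs=1`) so the iterables end:
-
-```python
-classes = list(estimator.predict_classes(input_fn=predict_input_fn,
-                                         as_iterable=True))  # class indices
-probas = list(estimator.predict_proba(input_fn=predict_input_fn,
-                                      as_iterable=True))     # [n_classes] arrays
-```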
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.set_params(**params)` {#DNNClassifier.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.weights_` {#DNNClassifier.weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-
-- - -
-
-### `class tf.contrib.learn.DNNRegressor` {#DNNRegressor}
-
-A regressor for TensorFlow DNN models.
-
-Example:
-
-```python
-sparse_feature_a = sparse_column_with_hash_bucket(...)
-sparse_feature_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
- ...)
-sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
- ...)
-
-estimator = DNNRegressor(
-    feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
-    hidden_units=[1024, 512, 256])
-
-# Or estimator using the ProximalAdagradOptimizer optimizer with
-# regularization.
-estimator = DNNRegressor(
-    feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
-    hidden_units=[1024, 512, 256],
-    optimizer=tf.train.ProximalAdagradOptimizer(
-      learning_rate=0.1,
-      l1_regularization_strength=0.001
-    ))
-
-# Input builders
-def input_fn_train():  # returns x, y
-  pass
-estimator.fit(input_fn=input_fn_train)
-
-def input_fn_eval():  # returns x, y
-  pass
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x)
-```
-
-Input of `fit` and `evaluate` should have following features,
- otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`, a feature with
- `key=weight_column_name` whose value is a `Tensor`.
-* for each `column` in `feature_columns`:
- - if `column` is a `SparseColumn`, a feature with `key=column.name`
- whose `value` is a `SparseTensor`.
- - if `column` is a `WeightedSparseColumn`, two features: the first with
- `key` the id column name, the second with `key` the weight column name.
- Both features' `value` must be a `SparseTensor`.
- - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
- whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.__init__(hidden_units, feature_columns, model_dir=None, weight_column_name=None, optimizer=None, activation_fn=relu, dropout=None, gradient_clip_norm=None, enable_centered_bias=False, config=None, feature_engineering_fn=None, label_dimension=1, embedding_lr_multipliers=None, input_layer_min_slice_size=None)` {#DNNRegressor.__init__}
-
-Initializes a `DNNRegressor` instance.
-
-##### Args:
-
-
-* <b>`hidden_units`</b>: List of hidden units per layer. All layers are fully
- connected. Ex. `[64, 32]` means first layer has 64 nodes and second one
- has 32.
-* <b>`feature_columns`</b>: An iterable containing all the feature columns used by
- the model. All items in the set should be instances of classes derived
- from `FeatureColumn`.
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-    also be used to load checkpoints from the directory into an estimator to
-    continue training a previously saved model.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`optimizer`</b>: An instance of `tf.Optimizer` used to train the model. If
- `None`, will use an Adagrad optimizer.
-* <b>`activation_fn`</b>: Activation function applied to each layer. If `None`, will
- use `tf.nn.relu`.
-* <b>`dropout`</b>: When not `None`, the probability we will drop out a given
- coordinate.
-* <b>`gradient_clip_norm`</b>: A `float` > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- `tf.clip_by_global_norm` for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`config`</b>: `RunConfig` object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-* <b>`label_dimension`</b>: Number of regression targets per example. This is the
- size of the last dimension of the labels and logits `Tensor` objects
- (typically, these have shape `[batch_size, label_dimension]`).
-* <b>`embedding_lr_multipliers`</b>: Optional. A dictionary from `EmbeddingColumn`
-  to a `float` multiplier. The multiplier will be applied to the learning rate
-  of the embedding variables.
-* <b>`input_layer_min_slice_size`</b>: Optional. The min slice size of input layer
- partitions. If not provided, will use the default of 64M.
-
-##### Returns:
-
- A `DNNRegressor` estimator.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.__repr__()` {#DNNRegressor.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.config` {#DNNRegressor.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=None, steps=None, metrics=None, name=None, checkpoint_path=None, hooks=None)` {#DNNRegressor.evaluate}
-
-See evaluable.Evaluable.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#DNNRegressor.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#DNNRegressor.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
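-A usage sketch, assuming this release's contrib parsing utilities
-(`create_feature_spec_for_parsing`, `build_parsing_serving_input_fn`); the
-export path is illustrative:
-
-```python
-from tensorflow.contrib.layers import create_feature_spec_for_parsing
-from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
-
-# feature_columns is assumed to be the same list passed to the
-# DNNRegressor constructor.
-feature_spec = create_feature_spec_for_parsing(feature_columns)
-serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)
-
-# Writes a timestamped SavedModel directory under export_dir_base and
-# returns its path.
-export_dir = estimator.export_savedmodel(
-    export_dir_base='/tmp/dnn_regressor_export',  # illustrative path
-    serving_input_fn=serving_input_fn)
-```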
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.fit(*args, **kwargs)` {#DNNRegressor.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
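-A minimal sketch of the conversion above, assuming in-memory numpy data and
-real-valued columns inferred from it:
-
-```python
-import numpy as np
-import tensorflow as tf
-
-x_train = np.random.rand(100, 4).astype(np.float32)  # illustrative data
-y_train = np.random.rand(100, 1).astype(np.float32)
-
-feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(x_train)
-# SKCompat keeps the scikit-learn style x/y/batch_size interface.
-est = tf.contrib.learn.SKCompat(
-    tf.contrib.learn.DNNRegressor(feature_columns=feature_columns,
-                                  hidden_units=[16, 8]))
-est.fit(x=x_train, y=y_train, batch_size=32, steps=100)
-```
-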
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` is not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.get_params(deep=True)` {#DNNRegressor.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.get_variable_names()` {#DNNRegressor.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.get_variable_value(name)` {#DNNRegressor.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
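-These two methods are the replacement recommended by the deprecation notices
-on the `weights_`-style properties; a sketch, assuming a trained estimator
-(the variable name queried is illustrative and varies by model):
-
-```python
-# List every variable the model created, then fetch one by name.
-for name in estimator.get_variable_names():
-  print(name)
-
-value = estimator.get_variable_value('dnn/hiddenlayer_0/weights')  # illustrative name
-print(value.shape)
-```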
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.model_dir` {#DNNRegressor.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.partial_fit(*args, **kwargs)` {#DNNRegressor.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to fit in
-memory at once, or when the model takes a long time to converge and you
-want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator
-  that returns arrays of features. The training input samples for fitting the
-  model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-  iterator that returns arrays of labels. The training label values
-  (class labels in classification, real numbers in regression). If set,
-  `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
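-A sketch of the out-of-core pattern described above; `chunk_iterator` is a
-hypothetical user-supplied generator of in-memory numpy chunks (note that the
-`x`/`y` form is itself deprecated in favor of `SKCompat`):
-
-```python
-for x_chunk, y_chunk in chunk_iterator():  # hypothetical generator
-  # Each call resumes from the current model state, so training
-  # accumulates across chunks (out-of-core / online training).
-  estimator.partial_fit(x=x_chunk, y=y_chunk, steps=10)
-```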
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.predict(*args, **kwargs)` {#DNNRegressor.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_scores, or set `outputs` argument.
-
-By default, returns predicted scores. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_scores` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns scores.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
- If `outputs` is set, returns a dict of predictions.
-
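-A sketch of the streaming form, following the deprecation advice above by
-calling `predict_scores`; `input_fn_predict` is assumed to yield a finite
-input (e.g. built with `num_epochs=1`):
-
-```python
-# as_iterable=True returns a generator; iterate lazily instead of
-# materializing all scores in memory at once.
-for score in estimator.predict_scores(input_fn=input_fn_predict,
-                                      as_iterable=True):
-  print(score)
-```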
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.predict_scores(*args, **kwargs)` {#DNNRegressor.predict_scores}
-
-Returns predicted scores for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.set_params(**params)` {#DNNRegressor.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
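-A sketch of both forms; whether a given name is settable depends on the
-estimator, so the parameter names here are illustrative:
-
-```python
-# Simple estimator: keyword names must match existing parameters,
-# otherwise a ValueError is raised. `dropout` is illustrative.
-estimator.set_params(dropout=0.5)
-
-# A nested object would use the <component>__<parameter> form, e.g.
-# pipeline.set_params(dnn__dropout=0.5)  # hypothetical component name
-```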
-
-
-- - -
-
-### `class tf.contrib.learn.DNNLinearCombinedRegressor` {#DNNLinearCombinedRegressor}
-
-A regressor for TensorFlow Linear and DNN joined training models.
-
-Example:
-
-```python
-sparse_feature_a = sparse_column_with_hash_bucket(...)
-sparse_feature_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_x_sparse_feature_b = crossed_column(...)
-
-sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
- ...)
-sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
- ...)
-
-estimator = DNNLinearCombinedRegressor(
- # common settings
- weight_column_name=weight_column_name,
- # wide settings
- linear_feature_columns=[sparse_feature_a_x_sparse_feature_b],
- linear_optimizer=tf.train.FtrlOptimizer(...),
- # deep settings
- dnn_feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- dnn_hidden_units=[1000, 500, 100],
- dnn_optimizer=tf.train.ProximalAdagradOptimizer(...))
-
-# To apply L1 and L2 regularization, you can set the optimizers as follows:
-tf.train.ProximalAdagradOptimizer(
-    learning_rate=0.1,
-    l1_regularization_strength=0.001,
-    l2_regularization_strength=0.001)
-# The same applies to FtrlOptimizer.
-
-# Input builders
-def input_fn_train():  # returns x, y
-  ...
-def input_fn_eval():  # returns x, y
-  ...
-estimator.fit(input_fn=input_fn_train)
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x)
-```
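-
-The snippet above constructs the regularized optimizers but does not attach
-them; a sketch of wiring them in, reusing the feature columns from the
-example (all values illustrative):
-
-```python
-estimator = DNNLinearCombinedRegressor(
-    linear_feature_columns=[sparse_feature_a_x_sparse_feature_b],
-    linear_optimizer=tf.train.FtrlOptimizer(
-        learning_rate=0.1,
-        l1_regularization_strength=0.001,
-        l2_regularization_strength=0.001),
-    dnn_feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
-    dnn_hidden_units=[1000, 500, 100],
-    dnn_optimizer=tf.train.ProximalAdagradOptimizer(
-        learning_rate=0.1,
-        l1_regularization_strength=0.001,
-        l2_regularization_strength=0.001))
-```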
-
-Input of `fit` and `evaluate` should have the following features,
-  otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`, a feature with
-  `key=weight_column_name` whose value is a `Tensor`.
-* for each `column` in `dnn_feature_columns` + `linear_feature_columns`:
-  - if `column` is a `SparseColumn`, a feature with `key=column.name`
-    whose `value` is a `SparseTensor`.
-  - if `column` is a `WeightedSparseColumn`, two features: the first with
-    `key` the id column name, the second with `key` the weight column name.
-    Both features' `value` must be a `SparseTensor`.
-  - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
-    whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.__init__(model_dir=None, weight_column_name=None, linear_feature_columns=None, linear_optimizer=None, _joint_linear_weights=False, dnn_feature_columns=None, dnn_optimizer=None, dnn_hidden_units=None, dnn_activation_fn=relu, dnn_dropout=None, gradient_clip_norm=None, enable_centered_bias=False, label_dimension=1, config=None, feature_engineering_fn=None, embedding_lr_multipliers=None, input_layer_min_slice_size=None)` {#DNNLinearCombinedRegressor.__init__}
-
-Initializes a DNNLinearCombinedRegressor instance.
-
-##### Args:
-
-
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator
-  to continue training a previously saved model.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`linear_feature_columns`</b>: An iterable containing all the feature columns
- used by linear part of the model. All items in the set must be
- instances of classes derived from `FeatureColumn`.
-* <b>`linear_optimizer`</b>: An instance of `tf.Optimizer` used to apply gradients to
- the linear part of the model. If `None`, will use a FTRL optimizer.
-* <b>`_joint_linear_weights`</b>: If `True`, a single (possibly partitioned)
-  variable will be used to store the linear model weights. It's faster, but
-  requires that all columns are sparse and have the 'sum' combiner.
-* <b>`dnn_feature_columns`</b>: An iterable containing all the feature columns used
- by deep part of the model. All items in the set must be instances of
- classes derived from `FeatureColumn`.
-* <b>`dnn_optimizer`</b>: An instance of `tf.Optimizer` used to apply gradients to
- the deep part of the model. If `None`, will use an Adagrad optimizer.
-* <b>`dnn_hidden_units`</b>: List of hidden units per layer. All layers are fully
- connected.
-* <b>`dnn_activation_fn`</b>: Activation function applied to each layer. If None,
- will use `tf.nn.relu`.
-* <b>`dnn_dropout`</b>: When not None, the probability we will drop out
- a given coordinate.
-* <b>`gradient_clip_norm`</b>: A float > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- tf.clip_by_global_norm for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`label_dimension`</b>: Number of regression targets per example. This is the
- size of the last dimension of the labels and logits `Tensor` objects
- (typically, these have shape `[batch_size, label_dimension]`).
-* <b>`config`</b>: RunConfig object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-* <b>`embedding_lr_multipliers`</b>: Optional. A dictionary from `EmbeddingColumn`
-  to a `float` multiplier. The multiplier will be applied to the learning rate
-  of the embedding variables.
-* <b>`input_layer_min_slice_size`</b>: Optional. The min slice size of input layer
- partitions. If not provided, will use the default of 64M.
-
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `linear_feature_columns` and `dnn_feature_columns`
-  are empty at the same time.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.__repr__()` {#DNNLinearCombinedRegressor.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.config` {#DNNLinearCombinedRegressor.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=None, steps=None, metrics=None, name=None, checkpoint_path=None, hooks=None)` {#DNNLinearCombinedRegressor.evaluate}
-
-See evaluable.Evaluable.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#DNNLinearCombinedRegressor.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#DNNLinearCombinedRegressor.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.fit(*args, **kwargs)` {#DNNLinearCombinedRegressor.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` is not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.get_params(deep=True)` {#DNNLinearCombinedRegressor.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.get_variable_names()` {#DNNLinearCombinedRegressor.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.get_variable_value(name)` {#DNNLinearCombinedRegressor.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.model_dir` {#DNNLinearCombinedRegressor.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.partial_fit(*args, **kwargs)` {#DNNLinearCombinedRegressor.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to fit in
-memory at once, or when the model takes a long time to converge and you
-want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator
-  that returns arrays of features. The training input samples for fitting the
-  model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-  iterator that returns arrays of labels. The training label values
-  (class labels in classification, real numbers in regression). If set,
-  `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.predict(*args, **kwargs)` {#DNNLinearCombinedRegressor.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_scores, or set `outputs` argument.
-
-By default, returns predicted scores. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_scores` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns scores.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
- If `outputs` is set, returns a dict of predictions.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.predict_scores(*args, **kwargs)` {#DNNLinearCombinedRegressor.predict_scores}
-
-Returns predicted scores for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.set_params(**params)` {#DNNLinearCombinedRegressor.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-
-- - -
-
-### `class tf.contrib.learn.DNNLinearCombinedClassifier` {#DNNLinearCombinedClassifier}
-
-A classifier for TensorFlow Linear and DNN joined training models.
-
-Example:
-
-```python
-sparse_feature_a = sparse_column_with_hash_bucket(...)
-sparse_feature_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_x_sparse_feature_b = crossed_column(...)
-
-sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
- ...)
-sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
- ...)
-
-estimator = DNNLinearCombinedClassifier(
- # common settings
- n_classes=n_classes,
- weight_column_name=weight_column_name,
- # wide settings
- linear_feature_columns=[sparse_feature_a_x_sparse_feature_b],
- linear_optimizer=tf.train.FtrlOptimizer(...),
- # deep settings
- dnn_feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- dnn_hidden_units=[1000, 500, 100],
- dnn_optimizer=tf.train.AdagradOptimizer(...))
-
-# Input builders
-def input_fn_train():  # returns x, y (where y represents label's class index).
-  ...
-def input_fn_eval():  # returns x, y (where y represents label's class index).
-  ...
-estimator.fit(input_fn=input_fn_train)
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x) # returns predicted labels (i.e. label's class index).
-```
-
-Input of `fit` and `evaluate` should have the following features,
-  otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`, a feature with
-  `key=weight_column_name` whose value is a `Tensor`.
-* for each `column` in `dnn_feature_columns` + `linear_feature_columns`:
-  - if `column` is a `SparseColumn`, a feature with `key=column.name`
-    whose `value` is a `SparseTensor`.
-  - if `column` is a `WeightedSparseColumn`, two features: the first with
-    `key` the id column name, the second with `key` the weight column name.
-    Both features' `value` must be a `SparseTensor`.
-  - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
-    whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.__init__(model_dir=None, n_classes=2, weight_column_name=None, linear_feature_columns=None, linear_optimizer=None, _joint_linear_weights=False, dnn_feature_columns=None, dnn_optimizer=None, dnn_hidden_units=None, dnn_activation_fn=relu, dnn_dropout=None, gradient_clip_norm=None, enable_centered_bias=False, config=None, feature_engineering_fn=None, embedding_lr_multipliers=None, input_layer_min_slice_size=None)` {#DNNLinearCombinedClassifier.__init__}
-
-Constructs a DNNLinearCombinedClassifier instance.
-
-##### Args:
-
-
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator
-  to continue training a previously saved model.
-* <b>`n_classes`</b>: number of label classes. Default is binary classification.
-  Note that class labels are integers representing the class index (i.e.
-  values from 0 to n_classes-1). For arbitrary label values (e.g. string
-  labels), convert to class indices first (see the conversion sketch below).
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training.
- It will be multiplied by the loss of the example.
-* <b>`linear_feature_columns`</b>: An iterable containing all the feature columns
- used by linear part of the model. All items in the set must be
- instances of classes derived from `FeatureColumn`.
-* <b>`linear_optimizer`</b>: An instance of `tf.Optimizer` used to apply gradients to
- the linear part of the model. If `None`, will use a FTRL optimizer.
-* <b>`_joint_linear_weights`</b>: If `True`, a single (possibly partitioned)
-  variable will be used to store the linear model weights. It's faster, but
-  requires that all columns are sparse and have the 'sum' combiner.
-* <b>`dnn_feature_columns`</b>: An iterable containing all the feature columns used
- by deep part of the model. All items in the set must be instances of
- classes derived from `FeatureColumn`.
-* <b>`dnn_optimizer`</b>: An instance of `tf.Optimizer` used to apply gradients to
- the deep part of the model. If `None`, will use an Adagrad optimizer.
-* <b>`dnn_hidden_units`</b>: List of hidden units per layer. All layers are fully
- connected.
-* <b>`dnn_activation_fn`</b>: Activation function applied to each layer. If `None`,
- will use `tf.nn.relu`.
-* <b>`dnn_dropout`</b>: When not None, the probability we will drop out
- a given coordinate.
-* <b>`gradient_clip_norm`</b>: A float > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- tf.clip_by_global_norm for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`config`</b>: RunConfig object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-* <b>`embedding_lr_multipliers`</b>: Optional. A dictionary from `EmbeddingColumn`
-  to a `float` multiplier. The multiplier will be applied to the learning rate
-  of the embedding variables.
-* <b>`input_layer_min_slice_size`</b>: Optional. The min slice size of input layer
- partitions. If not provided, will use the default of 64M.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `n_classes` < 2.
-* <b>`ValueError`</b>: If both `linear_feature_columns` and `dnn_features_columns`
- are empty at the same time.
-
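-As the `n_classes` argument notes, arbitrary label values (e.g. strings) must
-be converted to class indices before training. A minimal sketch, assuming a
-fixed label vocabulary:
-
-```python
-import numpy as np
-
-label_vocab = ['cat', 'dog', 'bird']   # illustrative vocabulary
-label_to_index = {label: i for i, label in enumerate(label_vocab)}
-
-string_labels = ['dog', 'cat', 'dog']  # illustrative raw labels
-y = np.array([label_to_index[s] for s in string_labels])  # -> [1, 0, 1]
-```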
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.__repr__()` {#DNNLinearCombinedClassifier.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.config` {#DNNLinearCombinedClassifier.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.dnn_bias_` {#DNNLinearCombinedClassifier.dnn_bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.dnn_weights_` {#DNNLinearCombinedClassifier.dnn_weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.evaluate(*args, **kwargs)` {#DNNLinearCombinedClassifier.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
-  `input_fn` or `feed_fn` is provided. Or if `metrics` is neither `None` nor a
-  `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#DNNLinearCombinedClassifier.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#DNNLinearCombinedClassifier.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.fit(*args, **kwargs)` {#DNNLinearCombinedClassifier.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` is not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.get_params(deep=True)` {#DNNLinearCombinedClassifier.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.get_variable_names()` {#DNNLinearCombinedClassifier.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.get_variable_value(name)` {#DNNLinearCombinedClassifier.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.linear_bias_` {#DNNLinearCombinedClassifier.linear_bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.linear_weights_` {#DNNLinearCombinedClassifier.linear_weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.model_dir` {#DNNLinearCombinedClassifier.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.partial_fit(*args, **kwargs)` {#DNNLinearCombinedClassifier.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to fit in
-memory at once, or when the model takes a long time to converge and you
-want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator
-  that returns arrays of features. The training input samples for fitting the
-  model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-  iterator that returns arrays of labels. The training label values
-  (class labels in classification, real numbers in regression). If set,
-  `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.predict(*args, **kwargs)` {#DNNLinearCombinedClassifier.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_classes, or set `outputs` argument.
-
-By default, returns predicted classes. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_classes` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns classes.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
- If `outputs` is set, returns a dict of predictions.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.predict_classes(*args, **kwargs)` {#DNNLinearCombinedClassifier.predict_classes}
-
-Returns predicted classes for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.predict_proba(*args, **kwargs)` {#DNNLinearCombinedClassifier.predict_proba}
-
-Returns prediction probabilities for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x and y must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted probabilities with shape [batch_size, n_classes]
- (or an iterable of predicted probabilities if as_iterable is True).
-
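-A sketch of consuming the probabilities, assuming binary classification
-(`n_classes=2`) and a finite `input_fn_predict`; the 0.5 threshold is
-illustrative:
-
-```python
-# Each row sums to 1 across classes; column 1 is P(class == 1).
-for probs in estimator.predict_proba(input_fn=input_fn_predict,
-                                     as_iterable=True):
-  is_positive = probs[1] > 0.5
-  print(probs, is_positive)
-```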
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.set_params(**params)` {#DNNLinearCombinedClassifier.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-
-- - -
-
-### `class tf.contrib.learn.LinearClassifier` {#LinearClassifier}
-
-Linear classifier model.
-
-Train a linear model to classify instances into one of multiple possible
-classes. When the number of possible classes is 2, this is binary
-classification.
-
-Example:
-
-```python
-sparse_column_a = sparse_column_with_hash_bucket(...)
-sparse_column_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_x_sparse_feature_b = crossed_column(...)
-
-# Estimator using the default optimizer.
-estimator = LinearClassifier(
- feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b])
-
-# Or estimator using the FTRL optimizer with regularization.
-estimator = LinearClassifier(
- feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b],
- optimizer=tf.train.FtrlOptimizer(
- learning_rate=0.1,
- l1_regularization_strength=0.001
- ))
-
-# Or estimator using the SDCAOptimizer.
-estimator = LinearClassifier(
- feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b],
- optimizer=tf.contrib.linear_optimizer.SDCAOptimizer(
- example_id_column='example_id',
- num_loss_partitions=...,
- symmetric_l2_regularization=2.0
- ))
-
-# Input builders
-def input_fn_train():  # returns x, y (where y represents label's class index).
-  ...
-def input_fn_eval():  # returns x, y (where y represents label's class index).
-  ...
-estimator.fit(input_fn=input_fn_train)
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x) # returns predicted labels (i.e. label's class index).
-```
-
-Input of `fit` and `evaluate` should have the following features,
-  otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`, a feature with
- `key=weight_column_name` whose value is a `Tensor`.
-* for each `column` in `feature_columns`:
- - if `column` is a `SparseColumn`, a feature with `key=column.name`
- whose `value` is a `SparseTensor`.
- - if `column` is a `WeightedSparseColumn`, two features: the first with
- `key` the id column name, the second with `key` the weight column name.
- Both features' `value` must be a `SparseTensor`.
- - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
- whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.__init__(feature_columns, model_dir=None, n_classes=2, weight_column_name=None, optimizer=None, gradient_clip_norm=None, enable_centered_bias=False, _joint_weight=False, config=None, feature_engineering_fn=None)` {#LinearClassifier.__init__}
-
-Construct a `LinearClassifier` estimator object.
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable containing all the feature columns used by
- the model. All items in the set should be instances of classes derived
- from `FeatureColumn`.
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator
-  to continue training a previously saved model.
-* <b>`n_classes`</b>: number of label classes. Default is binary classification.
- Note that class labels are integers representing the class index (i.e.
- values from 0 to n_classes-1). For arbitrary label values (e.g. string
- labels), convert to class indices first.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`optimizer`</b>: The optimizer used to train the model. If specified, it should
- be either an instance of `tf.Optimizer` or the SDCAOptimizer. If `None`,
- the Ftrl optimizer will be used.
-* <b>`gradient_clip_norm`</b>: A `float` > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- `tf.clip_by_global_norm` for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`_joint_weight`</b>: If `True`, the weights for all columns will be stored
-  in a single (possibly partitioned) variable. It's more efficient, but it's
-  incompatible with SDCAOptimizer, and requires that all feature columns are
-  sparse and use the 'sum' combiner.
-* <b>`config`</b>: `RunConfig` object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-
-##### Returns:
-
- A `LinearClassifier` estimator.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `n_classes` < 2.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.__repr__()` {#LinearClassifier.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.bias_` {#LinearClassifier.bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.config` {#LinearClassifier.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.evaluate(*args, **kwargs)` {#LinearClassifier.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
-  `input_fn` or `feed_fn` is provided. Or if `metrics` is neither `None` nor a
-  `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#LinearClassifier.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#LinearClassifier.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.fit(*args, **kwargs)` {#LinearClassifier.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` is not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.get_params(deep=True)` {#LinearClassifier.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.get_variable_names()` {#LinearClassifier.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.get_variable_value(name)` {#LinearClassifier.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.model_dir` {#LinearClassifier.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.partial_fit(*args, **kwargs)` {#LinearClassifier.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to fit in
-memory at once, or when the model takes a long time to converge and you
-want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator
-  that returns arrays of features. The training input samples for fitting the
-  model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-  iterator that returns arrays of labels. The training label values
-  (class labels in classification, real numbers in regression). If set,
-  `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.predict(*args, **kwargs)` {#LinearClassifier.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_classes, or set `outputs` argument.
-
-By default, returns predicted classes. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_classes` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns classes.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
- If `outputs` is set, returns a dict of predictions.
-
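-As a hedged usage sketch (the trained `estimator` and `predict_input_fn` are
-assumed to exist, and the available output names depend on the model head):
-
-```python
-# as_iterable=True yields one class index per example.
-for class_idx in estimator.predict(input_fn=predict_input_fn,
-                                   as_iterable=True):
-  print(class_idx)  # integer in [0, n_classes - 1]
-
-# Setting `outputs` returns a dict of named predictions instead.
-preds = estimator.predict(input_fn=predict_input_fn,
-                          outputs=['probabilities'], as_iterable=False)
-```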
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.predict_classes(*args, **kwargs)` {#LinearClassifier.predict_classes}
-
-Returns predicted classes for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.predict_proba(*args, **kwargs)` {#LinearClassifier.predict_proba}
-
-Returns predicted probabilities for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x and y must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted probabilities with shape [batch_size, n_classes]
- (or an iterable of predicted probabilities if as_iterable is True).
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.set_params(**params)` {#LinearClassifier.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.weights_` {#LinearClassifier.weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-
-- - -
-
-### `class tf.contrib.learn.LinearRegressor` {#LinearRegressor}
-
-Linear regressor model.
-
-Train a linear regression model to predict label value given observation of
-feature values.
-
-Example:
-
-```python
-sparse_column_a = sparse_column_with_hash_bucket(...)
-sparse_column_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_x_sparse_feature_b = crossed_column(...)
-
-estimator = LinearRegressor(
- feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b])
-
-# Input builders
-def input_fn_train():  # returns x, y
-  ...
-def input_fn_eval():  # returns x, y
-  ...
-estimator.fit(input_fn=input_fn_train)
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x)
-```
-
-Input of `fit` and `evaluate` should have the following features,
-  otherwise there will be a `KeyError` (see the sketch after this list):
-
-* if `weight_column_name` is not `None`:
- key=weight_column_name, value=a `Tensor`
-* for column in `feature_columns`:
- - if isinstance(column, `SparseColumn`):
- key=column.name, value=a `SparseTensor`
- - if isinstance(column, `WeightedSparseColumn`):
- {key=id column name, value=a `SparseTensor`,
- key=weight column name, value=a `SparseTensor`}
- - if isinstance(column, `RealValuedColumn`):
- key=column.name, value=a `Tensor`
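-
-A hedged sketch of an `input_fn` producing the keys described above, with one
-hypothetical sparse column and one hypothetical real-valued column:
-
-```python
-import tensorflow as tf
-
-def input_fn():
-  features = {
-      'sparse_col_a': tf.SparseTensor(indices=[[0, 0]], values=['token'],
-                                      dense_shape=[1, 1]),  # SparseColumn input
-      'real_col_b': tf.constant([[3.5]]),  # RealValuedColumn input
-  }
-  labels = tf.constant([[1.0]])
-  return features, labels
-```
-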
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.__init__(feature_columns, model_dir=None, weight_column_name=None, optimizer=None, gradient_clip_norm=None, enable_centered_bias=False, label_dimension=1, _joint_weights=False, config=None, feature_engineering_fn=None)` {#LinearRegressor.__init__}
-
-Construct a `LinearRegressor` estimator object.
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable containing all the feature columns used by
- the model. All items in the set should be instances of classes derived
- from `FeatureColumn`.
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-    also be used to load checkpoints from the directory into an estimator
- to continue training a previously saved model.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`optimizer`</b>: An instance of `tf.Optimizer` used to train the model. If
- `None`, will use an Ftrl optimizer.
-* <b>`gradient_clip_norm`</b>: A `float` > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- `tf.clip_by_global_norm` for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`label_dimension`</b>: Number of regression targets per example. This is the
- size of the last dimension of the labels and logits `Tensor` objects
- (typically, these have shape `[batch_size, label_dimension]`).
-* <b>`_joint_weights`</b>: If True, use a single (possibly partitioned) variable
-    to store the weights. It's faster, but requires that all feature columns
-    are sparse and use the 'sum' combiner. Incompatible with SDCAOptimizer.
-* <b>`config`</b>: `RunConfig` object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-
-##### Returns:
-
- A `LinearRegressor` estimator.
-
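-An illustrative construction; the column definition, learning rate, and
-model directory are placeholders:
-
-```python
-import tensorflow as tf
-
-regressor = tf.contrib.learn.LinearRegressor(
-    feature_columns=[tf.contrib.layers.real_valued_column('age')],
-    optimizer=tf.train.FtrlOptimizer(learning_rate=0.1),
-    model_dir='/tmp/linear_model')  # hypothetical directory
-```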
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.__repr__()` {#LinearRegressor.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.bias_` {#LinearRegressor.bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.config` {#LinearRegressor.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.evaluate(*args, **kwargs)` {#LinearRegressor.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
- `input_fn` or `feed_fn` is provided.
- Or if `metrics` is not `None` or `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#LinearRegressor.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#LinearRegressor.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
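-A minimal sketch, assuming a `feature_spec` dict of parsing specs and an
-existing trained `regressor`:
-
-```python
-serving_input_fn = tf.contrib.learn.build_parsing_serving_input_fn(
-    feature_spec)  # `feature_spec` is an assumed parsing spec dict
-export_dir = regressor.export_savedmodel(
-    export_dir_base='/tmp/exports',  # hypothetical base directory
-    serving_input_fn=serving_input_fn)
-```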
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.fit(*args, **kwargs)` {#LinearRegressor.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.get_params(deep=True)` {#LinearRegressor.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.get_variable_names()` {#LinearRegressor.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.get_variable_value(name)` {#LinearRegressor.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.model_dir` {#LinearRegressor.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.partial_fit(*args, **kwargs)` {#LinearRegressor.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This either can
-implement iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time. Or when model is taking long time
-to converge, and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
-    returns arrays of features. The training input samples for fitting the
-    model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-    iterator that returns arrays of labels. The training label values
-    (class labels in classification, real numbers in regression). If set,
-    `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.predict(*args, **kwargs)` {#LinearRegressor.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_scores, or set `outputs` argument.
-
-By default, returns predicted scores. This default will be dropped soon;
-users should either pass `outputs` or call the `predict_scores` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns scores.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
- If `outputs` is set, returns a dict of predictions.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.predict_scores(*args, **kwargs)` {#LinearRegressor.predict_scores}
-
-Returns predicted scores for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.set_params(**params)` {#LinearRegressor.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.weights_` {#LinearRegressor.weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-
-- - -
-
-### `tf.contrib.learn.LogisticRegressor(model_fn, thresholds=None, model_dir=None, config=None, feature_engineering_fn=None)` {#LogisticRegressor}
-
-Builds a logistic regression Estimator for binary classification.
-
-This method provides a basic Estimator with some additional metrics for custom
-binary classification models, including AUC, precision/recall and accuracy.
-
-Example:
-
-```python
- # See tf.contrib.learn.Estimator(...) for details on model_fn structure
- def my_model_fn(...):
- pass
-
- estimator = LogisticRegressor(model_fn=my_model_fn)
-
- # Input builders
-  def input_fn_train():
- pass
-
- estimator.fit(input_fn=input_fn_train)
- estimator.predict(x=x)
-```
-
-##### Args:
-
-
-* <b>`model_fn`</b>: Model function with the signature:
- `(features, labels, mode) -> (predictions, loss, train_op)`.
- Expects the returned predictions to be probabilities in [0.0, 1.0].
-* <b>`thresholds`</b>: List of floating point thresholds to use for accuracy,
- precision, and recall metrics. If `None`, defaults to `[0.5]`.
-* <b>`model_dir`</b>: Directory to save model parameters, graphs, etc. This can also
-    be used to load checkpoints from the directory into an estimator to
- continue training a previously saved model.
-* <b>`config`</b>: A RunConfig configuration object.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-
-##### Returns:
-
- A `tf.contrib.learn.Estimator` instance.
-
-
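-A hedged sketch of a `model_fn` satisfying the contract above (predictions
-are probabilities in [0.0, 1.0]); the single dense feature named 'x' is an
-assumption:
-
-```python
-import tensorflow as tf
-
-def my_model_fn(features, labels, mode):
-  logits = tf.contrib.layers.fully_connected(
-      features['x'], num_outputs=1, activation_fn=None)
-  predictions = tf.sigmoid(logits)  # probabilities in [0.0, 1.0]
-  loss = tf.losses.sigmoid_cross_entropy(labels, logits)
-  train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
-      loss, global_step=tf.contrib.framework.get_global_step())
-  return predictions, loss, train_op
-
-estimator = tf.contrib.learn.LogisticRegressor(model_fn=my_model_fn)
-```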
-
-- - -
-
-### `class tf.contrib.learn.Experiment` {#Experiment}
-
-Experiment is a class containing all information needed to train a model.
-
-After an experiment is created (by passing an Estimator and inputs for
-training and evaluation), an Experiment instance knows how to invoke training
-and eval loops in a sensible fashion for distributed training.
-- - -
-
-#### `tf.contrib.learn.Experiment.__init__(*args, **kwargs)` {#Experiment.__init__}
-
-Constructor for `Experiment`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-10-23.
-Instructions for updating:
-local_eval_frequency is deprecated as local_run will be renamed to train_and_evaluate. Use min_eval_frequency and call train_and_evaluate instead. Note, however, that the default for min_eval_frequency is 1, meaning models will be evaluated every time a new checkpoint is available. In contrast, the default for local_eval_frequency is None, resulting in evaluation occurring only after training has completed. min_eval_frequency is ignored when calling the deprecated local_run.
-
-Creates an Experiment instance. None of the functions passed to this
-constructor are executed at construction time. They are stored and used
-when a method is executed which requires it.
-
-##### Args:
-
-
-* <b>`estimator`</b>: Object implementing `Trainable` and `Evaluable`.
-* <b>`train_input_fn`</b>: function, returns features and labels for training.
-* <b>`eval_input_fn`</b>: function, returns features and labels for evaluation. If
-    `eval_steps` is `None`, this should be configured to produce only a
-    finite number of batches (generally, 1 epoch over the evaluation data).
-* <b>`eval_metrics`</b>: `dict` of string, metric function. If `None`, default set
- is used.
-* <b>`train_steps`</b>: Perform this many steps of training. `None`, the default,
- means train forever.
-* <b>`eval_steps`</b>: `evaluate` runs until input is exhausted (or another exception
- is raised), or for `eval_steps` steps, if specified.
-* <b>`train_monitors`</b>: A list of monitors to pass to the `Estimator`'s `fit`
- function.
-* <b>`eval_hooks`</b>: A list of `SessionRunHook` hooks to pass to the
- `Estimator`'s `evaluate` function.
-* <b>`local_eval_frequency`</b>: Frequency of running eval in steps,
- when running locally. If `None`, runs evaluation only at the end of
- training.
-* <b>`eval_delay_secs`</b>: Start evaluating after waiting for this many seconds.
-* <b>`continuous_eval_throttle_secs`</b>: Do not re-evaluate unless the last
- evaluation was started at least this many seconds ago for
- continuous_eval().
-* <b>`min_eval_frequency`</b>: (applies only to train_and_evaluate). The minimum
-    number of steps between evaluations. Evaluation does not occur if no new
-    snapshot is available; hence, this is the minimum.
-* <b>`delay_workers_by_global_step`</b>: if `True` delays training workers
- based on global step instead of time.
-* <b>`export_strategies`</b>: A list of `ExportStrategy`s, or a single one, or None.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `estimator` does not implement `Evaluable` and `Trainable`,
- or if export_strategies has the wrong type.
-
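-An illustrative wiring of an estimator into an `Experiment`; the estimator
-and input functions are assumed to exist:
-
-```python
-experiment = tf.contrib.learn.Experiment(
-    estimator=estimator,            # assumed Trainable/Evaluable estimator
-    train_input_fn=train_input_fn,  # assumed to return (features, labels)
-    eval_input_fn=eval_input_fn,
-    train_steps=10000,
-    eval_steps=100,
-    min_eval_frequency=1)  # evaluate whenever a new checkpoint appears
-experiment.train_and_evaluate()
-```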
-
-- - -
-
-#### `tf.contrib.learn.Experiment.continuous_eval(delay_secs=None, throttle_delay_secs=None, evaluate_checkpoint_only_once=True, continuous_eval_predicate_fn=None)` {#Experiment.continuous_eval}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.continuous_eval_on_train_data(delay_secs=None, throttle_delay_secs=None, continuous_eval_predicate_fn=None)` {#Experiment.continuous_eval_on_train_data}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.estimator` {#Experiment.estimator}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.eval_metrics` {#Experiment.eval_metrics}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.eval_steps` {#Experiment.eval_steps}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.evaluate(delay_secs=None)` {#Experiment.evaluate}
-
-Evaluate on the evaluation data.
-
-Runs evaluation on the evaluation data and returns the result. Runs for
-`self._eval_steps` steps or, if that is `None`, until the input is
-exhausted or another exception is raised. Evaluation starts after
-`delay_secs` seconds or, if that is `None`, after
-`self._eval_delay_secs` seconds.
-
-##### Args:
-
-
-* <b>`delay_secs`</b>: Start evaluating after this many seconds. If `None`, defaults
-    to using `self._eval_delay_secs`.
-
-##### Returns:
-
- The result of the `evaluate` call to the `Estimator`.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.extend_train_hooks(additional_hooks)` {#Experiment.extend_train_hooks}
-
-Extends the hooks for training.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.local_run(*args, **kwargs)` {#Experiment.local_run}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-23.
-Instructions for updating:
-local_run will be renamed to train_and_evaluate and the new default behavior will be to run evaluation every time there is a new checkpoint.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.reset_export_strategies(new_export_strategies=None)` {#Experiment.reset_export_strategies}
-
-Resets the export strategies with the `new_export_strategies`.
-
-##### Args:
-
-
-* <b>`new_export_strategies`</b>: A new list of `ExportStrategy`s, or a single one,
- or None.
-
-##### Returns:
-
- The old export strategies.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.run_std_server()` {#Experiment.run_std_server}
-
-Starts a TensorFlow server and joins the serving thread.
-
-Typically used for parameter servers.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if not enough information is available in the estimator's
- config to create a server.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.test()` {#Experiment.test}
-
-Tests training and evaluating the estimator both for a single step.
-
-##### Returns:
-
- The result of the `evaluate` call to the `Estimator`.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.train(delay_secs=None)` {#Experiment.train}
-
-Fit the estimator using the training data.
-
-Train the estimator for `self._train_steps` steps, after waiting for
-`delay_secs` seconds. If `self._train_steps` is `None`, train forever.
-
-##### Args:
-
-
-* <b>`delay_secs`</b>: Start training after this many seconds.
-
-##### Returns:
-
- The trained estimator.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.train_and_evaluate()` {#Experiment.train_and_evaluate}
-
-Interleaves training and evaluation.
-
-The frequency of evaluation is controlled by the constructor arg
-`min_eval_frequency`. When this parameter is None or 0, evaluation happens
-only after training has completed. Note that evaluation cannot happen
-more frequently than checkpoints are taken. If no new snapshots are
-available when evaluation is supposed to occur, then evaluation doesn't
-happen for another `min_eval_frequency` steps (assuming a checkpoint is
-available at that point). Thus, setting `min_eval_frequency` to 1 means
-that the model will be evaluated every time there is a new checkpoint.
-
-This is particularly useful for a "Master" task in the cloud, whose
-responsibility it is to take checkpoints, evaluate those checkpoints,
-and write out summaries. Participating in training as the supervisor
-allows such a task to accomplish the first and last items, while
-performing evaluation allows for the second.
-
-##### Returns:
-
- The result of the `evaluate` call to the `Estimator`.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.train_steps` {#Experiment.train_steps}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.learn.ExportStrategy` {#ExportStrategy}
-
-A class representing a type of model export.
-
-Typically constructed by a utility function specific to the exporter, such as
-`saved_model_export_utils.make_export_strategy()`.
-
-The fields are:
- name: The directory name under the export base directory where exports of
- this type will be written.
- export_fn: A function that writes an export, given an estimator, a
- destination path, and optionally a checkpoint path and an evaluation
- result for that checkpoint. This export_fn() may be run repeatedly during
- continuous training, or just once at the end of fixed-length training.
- Note the export_fn() may choose whether or not to export based on the eval
- result or based on an internal timer or any other criterion, if exports
- are not desired for every checkpoint.
-
- The signature of this function must be one of:
-    * `(estimator, export_path) -> export_path`
-    * `(estimator, export_path, checkpoint_path) -> export_path`
-    * `(estimator, export_path, checkpoint_path, eval_result) -> export_path`
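-
-A hedged sketch of a custom strategy using the two-argument signature; the
-serving input_fn it delegates to is an assumed placeholder:
-
-```python
-def my_export_fn(estimator, export_path):
-  # Placeholder logic: delegate to the estimator's SavedModel export.
-  return estimator.export_savedmodel(export_path, my_serving_input_fn)
-
-strategy = tf.contrib.learn.ExportStrategy(name='my_export',
-                                           export_fn=my_export_fn)
-```
-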
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.__getnewargs__()` {#ExportStrategy.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.__getstate__()` {#ExportStrategy.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.__new__(_cls, name, export_fn)` {#ExportStrategy.__new__}
-
-Create new instance of ExportStrategy(name, export_fn)
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.__repr__()` {#ExportStrategy.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.export(estimator, export_path, checkpoint_path=None, eval_result=None)` {#ExportStrategy.export}
-
-Exports the given Estimator to a specific format.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the Estimator to export.
-* <b>`export_path`</b>: A string containing a directory where to write the export.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the strategy may locate a checkpoint (e.g. the most recent) by itself.
-* <b>`eval_result`</b>: The output of Estimator.evaluate on this checkpoint. This
- should be set only if checkpoint_path is provided (otherwise it is
- unclear which checkpoint this eval refers to).
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the export_fn does not have the required signature
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.export_fn` {#ExportStrategy.export_fn}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.name` {#ExportStrategy.name}
-
-Alias for field number 0
-
-
-
-- - -
-
-### `class tf.contrib.learn.TaskType` {#TaskType}
-
-
-
-
-- - -
-
-### `class tf.train.NanLossDuringTrainingError` {#NanLossDuringTrainingError}
-
-
-- - -
-
-#### `tf.train.NanLossDuringTrainingError.__str__()` {#NanLossDuringTrainingError.__str__}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.learn.RunConfig` {#RunConfig}
-
-This class specifies the configurations for an `Estimator` run.
-
-If you're a Google-internal user using command line flags with
-`learn_runner.py` (for instance, to do distributed training or to use
-parameter servers), you probably want to use `learn_runner.EstimatorConfig`
-instead.
-- - -
-
-#### `tf.contrib.learn.RunConfig.__init__(master=None, num_cores=0, log_device_placement=False, gpu_memory_fraction=1, tf_random_seed=None, save_summary_steps=100, save_checkpoints_secs=600, save_checkpoints_steps=None, keep_checkpoint_max=5, keep_checkpoint_every_n_hours=10000, evaluation_master='')` {#RunConfig.__init__}
-
-Constructor.
-
-Note that the superclass `ClusterConfig` may set properties like
-`cluster_spec`, `is_chief`, `master` (if `None` in the args),
-`num_ps_replicas`, `task_id`, and `task_type` based on the `TF_CONFIG`
-environment variable. See `ClusterConfig` for more details.
-
-##### Args:
-
-
-* <b>`master`</b>: TensorFlow master. Defaults to empty string for local.
-* <b>`num_cores`</b>: Number of cores to be used. If 0, the system picks an
- appropriate number (default: 0).
-* <b>`log_device_placement`</b>: Log the op placement to devices (default: False).
-* <b>`gpu_memory_fraction`</b>: Fraction of GPU memory used by the process on
- each GPU uniformly on the same machine.
-* <b>`tf_random_seed`</b>: Random seed for TensorFlow initializers.
- Setting this value allows consistency between reruns.
-* <b>`save_summary_steps`</b>: Save summaries every this many steps.
-* <b>`save_checkpoints_secs`</b>: Save checkpoints every this many seconds. Can not
- be specified with `save_checkpoints_steps`.
-* <b>`save_checkpoints_steps`</b>: Save checkpoints every this many steps. Can not be
- specified with `save_checkpoints_secs`.
-* <b>`keep_checkpoint_max`</b>: The maximum number of recent checkpoint files to
- keep. As new files are created, older files are deleted. If None or 0,
- all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent
- checkpoint files are kept.)
-* <b>`keep_checkpoint_every_n_hours`</b>: Number of hours between each checkpoint
- to be saved. The default value of 10,000 hours effectively disables
- the feature.
-* <b>`evaluation_master`</b>: the master on which to perform evaluation.
-
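-An illustrative construction with arbitrary values (note that
-`save_checkpoints_secs` is cleared so `save_checkpoints_steps` can be used,
-since the two cannot be specified together):
-
-```python
-config = tf.contrib.learn.RunConfig(
-    tf_random_seed=42,
-    save_checkpoints_secs=None,   # cleared in favor of step-based saving
-    save_checkpoints_steps=1000,
-    keep_checkpoint_max=3,
-    gpu_memory_fraction=0.5)
-```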
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.cluster_spec` {#RunConfig.cluster_spec}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.environment` {#RunConfig.environment}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.evaluation_master` {#RunConfig.evaluation_master}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.get_task_id()` {#RunConfig.get_task_id}
-
-Returns task index from `TF_CONFIG` environmental variable.
-
-If you have a ClusterConfig instance, you can just access its task_id
-property instead of calling this function and re-parsing the environmental
-variable.
-
-##### Returns:
-
- `TF_CONFIG['task']['index']`. Defaults to 0.
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.is_chief` {#RunConfig.is_chief}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.keep_checkpoint_every_n_hours` {#RunConfig.keep_checkpoint_every_n_hours}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.keep_checkpoint_max` {#RunConfig.keep_checkpoint_max}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.master` {#RunConfig.master}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.num_ps_replicas` {#RunConfig.num_ps_replicas}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.save_checkpoints_secs` {#RunConfig.save_checkpoints_secs}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.save_checkpoints_steps` {#RunConfig.save_checkpoints_steps}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.save_summary_steps` {#RunConfig.save_summary_steps}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.task_id` {#RunConfig.task_id}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.task_type` {#RunConfig.task_type}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.tf_config` {#RunConfig.tf_config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.tf_random_seed` {#RunConfig.tf_random_seed}
-
-
-
-
-
-- - -
-
-### `tf.contrib.learn.evaluate(*args, **kwargs)` {#evaluate}
-
-Evaluate a model loaded from a checkpoint. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-Given `graph`, a directory to write summaries to (`output_dir`), a checkpoint
-to restore variables from, and a `dict` of `Tensor`s to evaluate, run an eval
-loop for `max_steps` steps, or until an exception (generally, an
-end-of-input signal from a reader operation) is raised from running
-`eval_dict`.
-
-In each step of evaluation, all tensors in the `eval_dict` are evaluated, and
-every `log_every_steps` steps, they are logged. At the very end of evaluation,
-a summary is evaluated (finding the summary ops using `Supervisor`'s logic)
-and written to `output_dir`.
-
-##### Args:
-
-
-* <b>`graph`</b>: A `Graph` to train. It is expected that this graph is not in use
- elsewhere.
-* <b>`output_dir`</b>: A string containing the directory to write a summary to.
-* <b>`checkpoint_path`</b>: A string containing the path to a checkpoint to restore.
- Can be `None` if the graph doesn't require loading any variables.
-* <b>`eval_dict`</b>: A `dict` mapping string names to tensors to evaluate. It is
- evaluated in every logging step. The result of the final evaluation is
- returned. If `update_op` is None, then it's evaluated in every step. If
- `max_steps` is `None`, this should depend on a reader that will raise an
- end-of-input exception when the inputs are exhausted.
-* <b>`update_op`</b>: A `Tensor` which is run in every step.
-* <b>`global_step_tensor`</b>: A `Variable` containing the global step. If `None`,
- one is extracted from the graph using the same logic as in `Supervisor`.
- Used to place eval summaries on training curves.
-* <b>`supervisor_master`</b>: The master string to use when preparing the session.
-* <b>`log_every_steps`</b>: Integer. Output logs every `log_every_steps` evaluation
- steps. The logs contain the `eval_dict` and timing information.
-* <b>`feed_fn`</b>: A function that is called every iteration to produce a `feed_dict`
- passed to `session.run` calls. Optional.
-* <b>`max_steps`</b>: Integer. Evaluate `eval_dict` this many times.
-
-##### Returns:
-
- A tuple `(eval_results, global_step)`:
-
-* <b>`eval_results`</b>: A `dict` mapping `string` to numeric values (`int`, `float`)
- that are the result of running eval_dict in the last step. `None` if no
- eval steps were run.
-* <b>`global_step`</b>: The global step this evaluation corresponds to.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `output_dir` is empty.
-
-
-- - -
-
-### `tf.contrib.learn.infer(*args, **kwargs)` {#infer}
-
-Restore graph from `restore_checkpoint_path` and run `output_dict` tensors. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-If `restore_checkpoint_path` is supplied, restore from checkpoint. Otherwise,
-init all variables.
-
-##### Args:
-
-
-* <b>`restore_checkpoint_path`</b>: A string containing the path to a checkpoint to
- restore.
-* <b>`output_dict`</b>: A `dict` mapping string names to `Tensor` objects to run.
- Tensors must all be from the same graph.
-* <b>`feed_dict`</b>: `dict` object mapping `Tensor` objects to input values to feed.
-
-##### Returns:
-
- Dict of values read from `output_dict` tensors. Keys are the same as
- `output_dict`, values are the results read from the corresponding `Tensor`
- in `output_dict`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `output_dict` or `feed_dicts` is None or empty.
-
-
-- - -
-
-### `tf.contrib.learn.run_feeds(*args, **kwargs)` {#run_feeds}
-
-See run_feeds_iter(). Returns a `list` instead of an iterator. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-
-- - -
-
-### `tf.contrib.learn.run_n(*args, **kwargs)` {#run_n}
-
-Run `output_dict` tensors `n` times, with the same `feed_dict` each run. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-##### Args:
-
-
-* <b>`output_dict`</b>: A `dict` mapping string names to tensors to run. Must all be
- from the same graph.
-* <b>`feed_dict`</b>: `dict` of input values to feed each run.
-* <b>`restore_checkpoint_path`</b>: A string containing the path to a checkpoint to
- restore.
-* <b>`n`</b>: Number of times to repeat.
-
-##### Returns:
-
- A list of `n` `dict` objects, each containing values read from `output_dict`
- tensors.
-
-
-- - -
-
-### `tf.contrib.learn.train(*args, **kwargs)` {#train}
-
-Train a model. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-Given `graph`, a directory to write outputs to (`output_dir`), and some ops,
-run a training loop. The given `train_op` performs one step of training on
-the model and is expected to increment the `global_step_tensor`, a scalar
-integer tensor counting training steps. The `loss_op` represents the
-objective function of the training. This function uses `Supervisor` to initialize the
-graph (from a checkpoint if one is available in `output_dir`), write summaries
-defined in the graph, and write regular checkpoints as defined by
-`supervisor_save_model_secs`.
-
-Training continues until `global_step_tensor` evaluates to `max_steps`, or, if
-`fail_on_nan_loss`, until `loss_op` evaluates to `NaN`. In that case the
-program is terminated with exit code 1.
-
-##### Args:
-
-
-* <b>`graph`</b>: A graph to train. It is expected that this graph is not in use
- elsewhere.
-* <b>`output_dir`</b>: A directory to write outputs to.
-* <b>`train_op`</b>: An op that performs one training step when run.
-* <b>`loss_op`</b>: A scalar loss tensor.
-* <b>`global_step_tensor`</b>: A tensor representing the global step. If none is given,
- one is extracted from the graph using the same logic as in `Supervisor`.
-* <b>`init_op`</b>: An op that initializes the graph. If `None`, use `Supervisor`'s
- default.
-* <b>`init_feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- This feed dictionary will be used when `init_op` is evaluated.
-* <b>`init_fn`</b>: Optional callable passed to Supervisor to initialize the model.
-* <b>`log_every_steps`</b>: Output logs regularly. The logs contain timing data and the
- current loss.
-* <b>`supervisor_is_chief`</b>: Whether the current process is the chief supervisor in
- charge of restoring the model and running standard services.
-* <b>`supervisor_master`</b>: The master string to use when preparing the session.
-* <b>`supervisor_save_model_secs`</b>: Save a checkpoint every
- `supervisor_save_model_secs` seconds when training.
-* <b>`keep_checkpoint_max`</b>: The maximum number of recent checkpoint files to
- keep. As new files are created, older files are deleted. If None or 0,
- all checkpoint files are kept. This is simply passed as the max_to_keep
- arg to tf.Saver constructor.
-* <b>`supervisor_save_summaries_steps`</b>: Save summaries every
-    `supervisor_save_summaries_steps` steps when training.
-* <b>`feed_fn`</b>: A function that is called every iteration to produce a `feed_dict`
- passed to `session.run` calls. Optional.
-* <b>`steps`</b>: Trains for this many steps (e.g. current global step + `steps`).
-* <b>`fail_on_nan_loss`</b>: If true, raise `NanLossDuringTrainingError` if `loss_op`
- evaluates to `NaN`. If false, continue training as if nothing happened.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-* <b>`max_steps`</b>: Number of total steps for which to train model. If `None`,
-    train forever. Two calls of fit(steps=100) mean 200 training iterations.
-    On the other hand, two calls of fit(max_steps=100) mean the second call
-    will not do any iterations, since the first call did all 100 steps.
-
-##### Returns:
-
- The final loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `output_dir`, `train_op`, `loss_op`, or `global_step_tensor`
- is not provided. See `tf.contrib.framework.get_global_step` for how we
- look up the latter if not provided explicitly.
-* <b>`NanLossDuringTrainingError`</b>: If `fail_on_nan_loss` is `True`, and loss ever
- evaluates to `NaN`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
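-A minimal sketch of the deprecated loop; `build_model_loss` is a
-hypothetical model-building helper returning a scalar loss tensor:
-
-```python
-import tensorflow as tf
-
-with tf.Graph().as_default() as g:
-  loss = build_model_loss()  # hypothetical helper
-  global_step = tf.contrib.framework.get_or_create_global_step()
-  train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
-      loss, global_step=global_step)
-  final_loss = tf.contrib.learn.train(
-      graph=g, output_dir='/tmp/train_logs',  # hypothetical directory
-      train_op=train_op, loss_op=loss, max_steps=1000)
-```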
-
-
-- - -
-
-### `tf.contrib.learn.extract_dask_data(data)` {#extract_dask_data}
-
-Extract data from dask.Series or dask.DataFrame for predictors.
-
-Given a distributed dask.DataFrame or dask.Series containing columns or names
-for one or more predictors, this operation returns a single dask.DataFrame or
-dask.Series that can be iterated over.
-
-##### Args:
-
-
-* <b>`data`</b>: A distributed dask.DataFrame or dask.Series.
-
-##### Returns:
-
- A dask.DataFrame or dask.Series that can be iterated over.
- If the supplied argument is neither a dask.DataFrame nor a dask.Series this
- operation returns it without modification.
-
-
-- - -
-
-### `tf.contrib.learn.extract_dask_labels(labels)` {#extract_dask_labels}
-
-Extract data from dask.Series or dask.DataFrame for labels.
-
-Given a distributed dask.DataFrame or dask.Series containing exactly one
-column or name, this operation returns a single dask.DataFrame or dask.Series
-that can be iterated over.
-
-##### Args:
-
-
-* <b>`labels`</b>: A distributed dask.DataFrame or dask.Series with exactly one
- column or name.
-
-##### Returns:
-
- A dask.DataFrame or dask.Series that can be iterated over.
- If the supplied argument is neither a dask.DataFrame nor a dask.Series this
- operation returns it without modification.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the supplied dask.DataFrame contains more than one
- column or the supplied dask.Series contains more than
- one name.
-
-
-- - -
-
-### `tf.contrib.learn.extract_pandas_data(data)` {#extract_pandas_data}
-
-Extract data from pandas.DataFrame for predictors.
-
-Given a DataFrame, this will extract the values and cast them to float. The
-DataFrame is expected to contain values of type int, float or bool.
-
-##### Args:
-
-
-* <b>`data`</b>: `pandas.DataFrame` containing the data to be extracted.
-
-##### Returns:
-
- A numpy `ndarray` of the DataFrame's values as floats.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if data contains types other than int, float or bool.
-
-
-- - -
-
-### `tf.contrib.learn.extract_pandas_labels(labels)` {#extract_pandas_labels}
-
-Extract data from pandas.DataFrame for labels.
-
-##### Args:
-
-
-* <b>`labels`</b>: `pandas.DataFrame` or `pandas.Series` containing one column of
- labels to be extracted.
-
-##### Returns:
-
- A numpy `ndarray` of labels from the DataFrame.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if more than one column is found or type is not int, float or
- bool.
-
-
-- - -
-
-### `tf.contrib.learn.extract_pandas_matrix(data)` {#extract_pandas_matrix}
-
-Extracts numpy matrix from pandas DataFrame.
-
-##### Args:
-
-
-* <b>`data`</b>: `pandas.DataFrame` containing the data to be extracted.
-
-##### Returns:
-
- A numpy `ndarray` of the DataFrame's values.
-
-
-- - -
-
-### `tf.contrib.learn.infer_real_valued_columns_from_input(x)` {#infer_real_valued_columns_from_input}
-
-Creates `FeatureColumn` objects for inputs defined by input `x`.
-
-This interprets all inputs as dense, fixed-length float values.
-
-##### Args:
-
-
-* <b>`x`</b>: Real-valued matrix of shape [n_samples, n_features...]. Can be an
-    iterator that returns arrays of features.
-
-##### Returns:
-
- List of `FeatureColumn` objects.
-
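-An illustrative call on a dense numpy matrix:
-
-```python
-import numpy as np
-import tensorflow as tf
-
-x = np.random.rand(100, 4).astype(np.float32)  # illustrative data
-feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(x)
-```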
-
-- - -
-
-### `tf.contrib.learn.infer_real_valued_columns_from_input_fn(input_fn)` {#infer_real_valued_columns_from_input_fn}
-
-Creates `FeatureColumn` objects for inputs defined by `input_fn`.
-
-This interprets all inputs as dense, fixed-length float values. This creates
-a local graph in which it calls `input_fn` to build the tensors, then discards
-it.
-
-##### Args:
-
-
-* <b>`input_fn`</b>: Input function returning a tuple of:
-    features - Dictionary of string feature name to `Tensor`, or a single `Tensor`.
- labels - `Tensor` of label values.
-
-##### Returns:
-
- List of `FeatureColumn` objects.
-
-
-- - -
-
-### `tf.contrib.learn.read_batch_examples(file_pattern, batch_size, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, num_threads=1, read_batch_size=1, parse_fn=None, name=None, seed=None)` {#read_batch_examples}
-
-Adds operations to read, queue, batch `Example` protos.
-
-Given file pattern (or list of files), will setup a queue for file names,
-read `Example` proto using provided `reader`, use batch queue to create
-batches of examples of size `batch_size`.
-
-All queue runners are added to the queue runners collection, and may be
-started via `start_queue_runners`.
-
-All ops are added to the default graph.
-
-Use `parse_fn` if you need to do parsing / processing on single examples.
-
-##### Args:
-
-
-* <b>`file_pattern`</b>: List of files or pattern of file paths containing
- `Example` records. See `tf.gfile.Glob` for pattern rules.
-* <b>`batch_size`</b>: An int or scalar `Tensor` specifying the batch size to use.
-* <b>`reader`</b>: A function or class that returns an object with
- `read` method, (filename tensor) -> (example tensor).
-* <b>`randomize_input`</b>: Whether the input should be randomized.
-* <b>`num_epochs`</b>: Integer specifying the number of times to read through the
- dataset. If `None`, cycles through the dataset forever.
- NOTE - If specified, creates a variable that must be initialized, so call
- `tf.global_variables_initializer()` and run the op in a session.
-* <b>`queue_capacity`</b>: Capacity for input queue.
-* <b>`num_threads`</b>: The number of threads enqueuing examples.
-* <b>`read_batch_size`</b>: An int or scalar `Tensor` specifying the number of
-    records to read at once.
-* <b>`parse_fn`</b>: Parsing function, takes `Example` Tensor returns parsed
- representation. If `None`, no parsing is done.
-* <b>`name`</b>: Name of resulting op.
-* <b>`seed`</b>: An integer (optional). Seed used if randomize_input == True.
-
-##### Returns:
-
- String `Tensor` of batched `Example` proto.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: for invalid inputs.
-
-
-- - -
-
-### `tf.contrib.learn.read_batch_features(file_pattern, batch_size, features, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, feature_queue_capacity=100, reader_num_threads=1, parse_fn=None, name=None)` {#read_batch_features}
-
-Adds operations to read, queue, batch and parse `Example` protos.
-
-Given file pattern (or list of files), will setup a queue for file names,
-read `Example` proto using provided `reader`, use batch queue to create
-batches of examples of size `batch_size` and parse example given `features`
-specification.
-
-All queue runners are added to the queue runners collection, and may be
-started via `start_queue_runners`.
-
-All ops are added to the default graph.
-
-##### Args:
-
-
-* <b>`file_pattern`</b>: List of files or pattern of file paths containing
- `Example` records. See `tf.gfile.Glob` for pattern rules.
-* <b>`batch_size`</b>: An int or scalar `Tensor` specifying the batch size to use.
-* <b>`features`</b>: A `dict` mapping feature keys to `FixedLenFeature` or
- `VarLenFeature` values.
-* <b>`reader`</b>: A function or class that returns an object with
- `read` method, (filename tensor) -> (example tensor).
-* <b>`randomize_input`</b>: Whether the input should be randomized.
-* <b>`num_epochs`</b>: Integer specifying the number of times to read through the
- dataset. If None, cycles through the dataset forever. NOTE - If specified,
- creates a variable that must be initialized, so call
- tf.local_variables_initializer() and run the op in a session.
-* <b>`queue_capacity`</b>: Capacity for input queue.
-* <b>`feature_queue_capacity`</b>: Capacity of the parsed features queue. Set this
- value to a small number, for example 5 if the parsed features are large.
-* <b>`reader_num_threads`</b>: The number of threads to read examples.
-* <b>`parse_fn`</b>: Parsing function, takes `Example` Tensor returns parsed
- representation. If `None`, no parsing is done.
-* <b>`name`</b>: Name of resulting op.
-
-##### Returns:
-
-  A dict of `Tensor` or `SparseTensor` objects for each key in `features`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: for invalid inputs.
-
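-An illustrative call, assuming TFRecord files of `Example` protos at a
-hypothetical path:
-
-```python
-features = tf.contrib.learn.read_batch_features(
-    file_pattern='/data/train-*.tfrecord',  # hypothetical files
-    batch_size=128,
-    features={
-        'age': tf.FixedLenFeature([1], tf.float32),
-        'query': tf.VarLenFeature(tf.string),
-    },
-    reader=tf.TFRecordReader)
-```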
-
-- - -
-
-### `tf.contrib.learn.read_batch_record_features(file_pattern, batch_size, features, randomize_input=True, num_epochs=None, queue_capacity=10000, reader_num_threads=1, name='dequeue_record_examples')` {#read_batch_record_features}
-
-Reads TFRecord, queues, batches and parses `Example` proto.
-
-See the more detailed description in `read_batch_examples`.
-
-##### Args:
-
-
-* <b>`file_pattern`</b>: List of files or pattern of file paths containing
- `Example` records. See `tf.gfile.Glob` for pattern rules.
-* <b>`batch_size`</b>: An int or scalar `Tensor` specifying the batch size to use.
-* <b>`features`</b>: A `dict` mapping feature keys to `FixedLenFeature` or
- `VarLenFeature` values.
-* <b>`randomize_input`</b>: Whether the input should be randomized.
-* <b>`num_epochs`</b>: Integer specifying the number of times to read through the
- dataset. If None, cycles through the dataset forever. NOTE - If specified,
- creates a variable that must be initialized, so call
- tf.local_variables_initializer() and run the op in a session.
-* <b>`queue_capacity`</b>: Capacity for input queue.
-* <b>`reader_num_threads`</b>: The number of threads to read examples.
-* <b>`name`</b>: Name of resulting op.
-
-##### Returns:
-
-  A dict of `Tensor` or `SparseTensor` objects for each key in `features`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: for invalid inputs.
-
-
-
-- - -
-
-### `class tf.contrib.learn.InputFnOps` {#InputFnOps}
-
-A return type for an input_fn.
-
-This return type is currently only supported for serving input_fn.
-Training and eval input_fn should return a `(features, labels)` tuple.
-
-The expected return values are:
- features: A dict of string to `Tensor` or `SparseTensor`, specifying the
- features to be passed to the model.
- labels: A `Tensor`, `SparseTensor`, or a dict of string to `Tensor` or
- `SparseTensor`, specifying labels for training or eval. For serving, set
- `labels` to `None`.
- default_inputs: a dict of string to `Tensor` or `SparseTensor`, specifying
- the input placeholders (if any) that this input_fn expects to be fed.
- Typically, this is used by a serving input_fn, which expects to be fed
- serialized `tf.Example` protos.
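-
-A hedged sketch of a serving input_fn built by hand; the `feature_spec`
-parsing dict is assumed to exist:
-
-```python
-def serving_input_fn():
-  serialized = tf.placeholder(dtype=tf.string, shape=[None])
-  features = tf.parse_example(serialized, feature_spec)  # assumed spec
-  return tf.contrib.learn.InputFnOps(
-      features=features,
-      labels=None,  # no labels at serving time
-      default_inputs={'examples': serialized})
-```
-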
-- - -
-
-#### `tf.contrib.learn.InputFnOps.__getnewargs__()` {#InputFnOps.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.__getstate__()` {#InputFnOps.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.__new__(_cls, features, labels, default_inputs)` {#InputFnOps.__new__}
-
-Create new instance of InputFnOps(features, labels, default_inputs)
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.__repr__()` {#InputFnOps.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.default_inputs` {#InputFnOps.default_inputs}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.features` {#InputFnOps.features}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.labels` {#InputFnOps.labels}
-
-Alias for field number 1
-
-
-
-- - -
-
-### `class tf.contrib.learn.ProblemType` {#ProblemType}
-
-Enum-like values for the type of problem that the model solves.
-
-These values are used when exporting the model to produce the appropriate
-signature function for serving.
-
-The following values are supported:
- UNSPECIFIED: Produces a predict signature_fn.
- CLASSIFICATION: Produces a classify signature_fn.
- LINEAR_REGRESSION: Produces a regression signature_fn.
- LOGISTIC_REGRESSION: Produces a classify signature_fn.
-
-- - -
-
-### `tf.contrib.learn.build_parsing_serving_input_fn(feature_spec, default_batch_size=None)` {#build_parsing_serving_input_fn}
-
-Build an input_fn appropriate for serving, expecting fed tf.Examples.
-
-Creates an input_fn that expects a serialized tf.Example fed into a string
-placeholder. The function parses the tf.Example according to the provided
-feature_spec, and returns all parsed Tensors as features. This input_fn is
-for use at serving time, so the labels return value is always None.
-
-##### Args:
-
-
-* <b>`feature_spec`</b>: a dict of string to `VarLenFeature`/`FixedLenFeature`.
-* <b>`default_batch_size`</b>: the number of query examples expected per batch.
- Leave unset for variable batch size (recommended).
-
-##### Returns:
-
- An input_fn suitable for use in serving.
-
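-For illustration, with an assumed two-feature parsing spec:
-
-```python
-feature_spec = {
-    'age': tf.FixedLenFeature([1], tf.float32),
-    'query': tf.VarLenFeature(tf.string),
-}
-serving_input_fn = tf.contrib.learn.build_parsing_serving_input_fn(feature_spec)
-```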
-
-- - -
-
-### `tf.contrib.learn.make_export_strategy(serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, exports_to_keep=5)` {#make_export_strategy}
-
-Create an ExportStrategy for use with Experiment.
-
-##### Args:
-
-
-* <b>`serving_input_fn`</b>: A function that takes no arguments and returns an
- `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when an
- incoming serving request does not explicitly request a specific head.
- Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`exports_to_keep`</b>: Number of exports to keep. Older exports will be
- garbage-collected. Defaults to 5. Set to None to disable garbage
- collection.
-
-##### Returns:
-
- An ExportStrategy that can be passed to the Experiment constructor.
-
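-A hedged sketch combining this with an `Experiment`; the estimator and
-input functions are assumed to exist:
-
-```python
-export_strategy = tf.contrib.learn.make_export_strategy(
-    serving_input_fn, exports_to_keep=3)
-experiment = tf.contrib.learn.Experiment(
-    estimator=estimator,
-    train_input_fn=train_input_fn,
-    eval_input_fn=eval_input_fn,
-    export_strategies=[export_strategy])
-```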
-
-
-## Other Functions and Classes
-- - -
-
-### `class tf.contrib.learn.NotFittedError` {#NotFittedError}
-
-Exception class to raise if estimator is used before fitting.
-
-This class inherits from both ValueError and AttributeError to help with
-exception handling and backward compatibility.
-
-Examples:
->>> from sklearn.svm import LinearSVC
->>> from sklearn.exceptions import NotFittedError
->>> try:
-... LinearSVC().predict([[1, 2], [2, 3], [3, 4]])
-... except NotFittedError as e:
-... print(repr(e))
-... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
-NotFittedError('This LinearSVC instance is not fitted yet',)
-
-Copied from
-https://github.com/scikit-learn/scikit-learn/master/sklearn/exceptions.py
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md b/tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md
deleted file mode 100644
index 58b2758c36..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.learn.monitors.md
+++ /dev/null
@@ -1,2684 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Monitors (contrib)
-[TOC]
-
-Monitors instrument the training process.
-
-See the @{$python/contrib.learn.monitors} guide.
-
-- - -
-
-### `tf.contrib.learn.monitors.get_default_monitors(loss_op=None, summary_op=None, save_summary_steps=100, output_dir=None, summary_writer=None)` {#get_default_monitors}
-
-Returns a default set of typically-used monitors.
-
-##### Args:
-
-
-* <b>`loss_op`</b>: `Tensor`, the loss tensor. This will be printed using `PrintTensor`
- at the default interval.
-* <b>`summary_op`</b>: See `SummarySaver`.
-* <b>`save_summary_steps`</b>: See `SummarySaver`.
-* <b>`output_dir`</b>: See `SummarySaver`.
-* <b>`summary_writer`</b>: See `SummarySaver`.
-
-##### Returns:
-
- `list` of monitors.
-
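-For example, a sketch attaching the default monitors to a `fit` call (`loss`,
-`estimator`, and `train_input_fn` are assumed to exist):
-
-    import tensorflow as tf
-
-    monitors = tf.contrib.learn.monitors.get_default_monitors(
-        loss_op=loss,
-        summary_op=tf.summary.merge_all(),
-        save_summary_steps=100,
-        output_dir='/tmp/my_model')
-    estimator.fit(input_fn=train_input_fn, steps=1000, monitors=monitors)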
-
-- - -
-
-### `class tf.contrib.learn.monitors.BaseMonitor` {#BaseMonitor}
-
-Base class for Monitors.
-
-Defines basic interfaces of Monitors.
-Monitors can either be run on all workers or, more commonly, restricted
-to run exclusively on the elected chief worker.
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.__init__(*args, **kwargs)` {#BaseMonitor.__init__}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-05.
-Instructions for updating:
-Monitors are deprecated. Please use tf.train.SessionRunHook.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.begin(max_steps=None)` {#BaseMonitor.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.end(session=None)` {#BaseMonitor.end}
-
-Callback at the end of training/evaluation.
-
-##### Args:
-
-
-* <b>`session`</b>: A `tf.Session` object that can be used to run ops.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.epoch_begin(epoch)` {#BaseMonitor.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.epoch_end(epoch)` {#BaseMonitor.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.post_step(step, session)` {#BaseMonitor.post_step}
-
-Callback after the step is finished.
-
-Called after `step_end` and receives a session in which to perform extra
-`session.run` calls. It will also be called if a failure occurred during
-the step.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, global step of the model.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.run_on_all_workers` {#BaseMonitor.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.set_estimator(estimator)` {#BaseMonitor.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.step_begin(step)` {#BaseMonitor.step_begin}
-
-Callback before a training step begins.
-
-You may use this callback to request evaluation of additional tensors
-in the graph.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- List of `Tensor` objects or string tensor names to be run.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a step, or `step` < 0, or
- `step` > `max_steps`.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.step_end(step, output)` {#BaseMonitor.step_end}
-
-Callback after a training step has finished.
-
-This callback provides access to the tensors/ops evaluated at this step,
-including the additional tensors for which evaluation was requested in
-`step_begin`.
-
-In addition, the callback has the opportunity to stop training by returning
-`True`. This is useful for early stopping, for example.
-
-Note that this method is not called if the call to `Session.run()` that
-followed the last call to `step_begin()` failed.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`. True if training should stop.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a step, or `step` number does not match.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.CaptureVariable` {#CaptureVariable}
-
-Captures a variable's values into a collection.
-
-This monitor is useful for unit testing. You should exercise caution when
-using this monitor in production, since it never discards values.
-
-This is an `EveryN` monitor and has consistent semantics for `every_n`
-and `first_n`.
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.__init__(var_name, every_n=100, first_n=1)` {#CaptureVariable.__init__}
-
-Initializes a CaptureVariable monitor.
-
-##### Args:
-
-
-* <b>`var_name`</b>: `string`. The variable name, including suffix (typically ":0").
-* <b>`every_n`</b>: `int`, capture the value every N steps. See `PrintN`.
-* <b>`first_n`</b>: `int`, also capture the first N steps. See `PrintN`.
-
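-For example, a sketch capturing a hypothetical variable named `my_var:0`
-during a short fit (`estimator` and `train_input_fn` are assumed):
-
-    capture = tf.contrib.learn.monitors.CaptureVariable('my_var:0', every_n=10)
-    estimator.fit(input_fn=train_input_fn, steps=100, monitors=[capture])
-    # capture.values now maps step numbers to the captured variable values.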
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.begin(max_steps=None)` {#CaptureVariable.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.end(session=None)` {#CaptureVariable.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.epoch_begin(epoch)` {#CaptureVariable.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.epoch_end(epoch)` {#CaptureVariable.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.every_n_post_step(step, session)` {#CaptureVariable.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.every_n_step_begin(step)` {#CaptureVariable.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.every_n_step_end(step, outputs)` {#CaptureVariable.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.post_step(step, session)` {#CaptureVariable.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.run_on_all_workers` {#CaptureVariable.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.set_estimator(estimator)` {#CaptureVariable.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.step_begin(step)` {#CaptureVariable.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.step_end(step, output)` {#CaptureVariable.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.values` {#CaptureVariable.values}
-
-Returns the values captured so far.
-
-##### Returns:
-
-  `dict` mapping `int` step numbers to the values of the variable at the
-  respective steps.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.CheckpointSaver` {#CheckpointSaver}
-
-Saves checkpoints every N steps or N seconds.
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.__init__(checkpoint_dir, save_secs=None, save_steps=None, saver=None, checkpoint_basename='model.ckpt', scaffold=None)` {#CheckpointSaver.__init__}
-
-Initialize CheckpointSaver monitor.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: `str`, base directory for the checkpoint files.
-* <b>`save_secs`</b>: `int`, save every N secs.
-* <b>`save_steps`</b>: `int`, save every N steps.
-* <b>`saver`</b>: `Saver` object, used for saving.
-* <b>`checkpoint_basename`</b>: `str`, base name for the checkpoint files.
-* <b>`scaffold`</b>: `Scaffold`, use to get saver object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `save_steps` and `save_secs` are not `None`.
-* <b>`ValueError`</b>: If both `save_steps` and `save_secs` are `None`.
-
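-For example, a sketch that saves a checkpoint every 10 minutes (note that
-exactly one of `save_secs` and `save_steps` must be set):
-
-    ckpt_monitor = tf.contrib.learn.monitors.CheckpointSaver(
-        checkpoint_dir='/tmp/my_model', save_secs=600)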
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.begin(max_steps=None)` {#CheckpointSaver.begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.end(session=None)` {#CheckpointSaver.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.epoch_begin(epoch)` {#CheckpointSaver.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.epoch_end(epoch)` {#CheckpointSaver.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.post_step(step, session)` {#CheckpointSaver.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.run_on_all_workers` {#CheckpointSaver.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.set_estimator(estimator)` {#CheckpointSaver.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.step_begin(step)` {#CheckpointSaver.step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.step_end(step, output)` {#CheckpointSaver.step_end}
-
-Callback after a training step has finished.
-
-This callback provides access to the tensors/ops evaluated at this step,
-including the additional tensors for which evaluation was requested in
-`step_begin`.
-
-In addition, the callback has the opportunity to stop training by returning
-`True`. This is useful for early stopping, for example.
-
-Note that this method is not called if the call to `Session.run()` that
-followed the last call to `step_begin()` failed.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`. True if training should stop.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a step, or `step` number does not match.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.EveryN` {#EveryN}
-
-Base class for monitors that execute callbacks every N steps.
-
-This class adds three new callbacks:
- - every_n_step_begin
- - every_n_step_end
- - every_n_post_step
-
-The callbacks are executed every n steps, or optionally every step for the
-first m steps, where m and n can both be user-specified.
-
-When extending this class, note that if you wish to use any of the
-`BaseMonitor` callbacks, you must call their respective super implementation:
-
- def step_begin(self, step):
- super(ExampleMonitor, self).step_begin(step)
- return []
-
-Failing to call the super implementation will cause unpredictable behavior.
-
-The `every_n_post_step()` callback is also called after the last step if it
-was not already called through the regular conditions. Note that
-`every_n_step_begin()` and `every_n_step_end()` do not receive that special
-treatment.
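-
-As a fuller sketch, a hypothetical `LossLogger` monitor that evaluates and
-logs one named tensor every N steps via the `every_n_*` callbacks:
-
-    class LossLogger(tf.contrib.learn.monitors.EveryN):
-      """Hypothetical monitor: logs one tensor every N steps."""
-
-      def __init__(self, tensor_name, every_n_steps=100):
-        super(LossLogger, self).__init__(every_n_steps=every_n_steps)
-        self._tensor_name = tensor_name
-
-      def every_n_step_begin(self, step):
-        super(LossLogger, self).every_n_step_begin(step)
-        # Request evaluation of the tensor on the steps where this fires.
-        return [self._tensor_name]
-
-      def every_n_step_end(self, step, outputs):
-        super(LossLogger, self).every_n_step_end(step, outputs)
-        print('step %d: %s = %s'
-              % (step, self._tensor_name, outputs[self._tensor_name]))
-        return False  # Do not request that training stop.
-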
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.__init__(every_n_steps=100, first_n_steps=1)` {#EveryN.__init__}
-
-Initializes an `EveryN` monitor.
-
-##### Args:
-
-
-* <b>`every_n_steps`</b>: `int`, the number of steps to allow between callbacks.
-* <b>`first_n_steps`</b>: `int`, specifying the number of initial steps during
- which the callbacks will always be executed, regardless of the value
-    of `every_n_steps`. Note that this value is relative to the global step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.begin(max_steps=None)` {#EveryN.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.end(session=None)` {#EveryN.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.epoch_begin(epoch)` {#EveryN.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.epoch_end(epoch)` {#EveryN.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.every_n_post_step(step, session)` {#EveryN.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.every_n_step_begin(step)` {#EveryN.every_n_step_begin}
-
-Callback before every n'th step begins.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list` of tensors that will be evaluated at this step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.every_n_step_end(step, outputs)` {#EveryN.every_n_step_end}
-
-Callback after every n'th step has finished.
-
-This callback provides access to the tensors/ops evaluated at this step,
-including the additional tensors for which evaluation was requested in
-`step_begin`.
-
-In addition, the callback has the opportunity to stop training by returning
-`True`. This is useful for early stopping, for example.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`outputs`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`. True if training should stop.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.post_step(step, session)` {#EveryN.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.run_on_all_workers` {#EveryN.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.set_estimator(estimator)` {#EveryN.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.step_begin(step)` {#EveryN.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.step_end(step, output)` {#EveryN.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.ExportMonitor` {#ExportMonitor}
-
-Monitor that exports Estimator every N steps.
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.__init__(*args, **kwargs)` {#ExportMonitor.__init__}
-
-Initializes ExportMonitor. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
-Instructions for updating:
-The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will both become required args.
-
-##### Args:
-
-
-* <b>`every_n_steps`</b>: Run monitor every N steps.
-* <b>`export_dir`</b>: str, folder to export.
-* <b>`input_fn`</b>: A function that takes no arguments and returns a tuple of
- (features, labels), where features is a dict of string key to `Tensor`
- and labels is a `Tensor` that's currently not used (and so can be
- `None`).
-* <b>`input_feature_key`</b>: String key into the features dict returned by
- `input_fn` that corresponds to the raw `Example` strings `Tensor` that
- the exported model will take as input. Should be `None` if and only if
- you're passing in a `signature_fn` that does not use the first arg
- (`Tensor` of `Example` strings).
-* <b>`exports_to_keep`</b>: int, number of exports to keep.
-* <b>`signature_fn`</b>: Function that returns a default signature and a named
- signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
- for features and `dict` of `Tensor`s for predictions.
-* <b>`default_batch_size`</b>: Default batch size of the `Example` placeholder.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `input_fn` and `input_feature_key` are not both defined or
- are not both `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.begin(max_steps=None)` {#ExportMonitor.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.end(session=None)` {#ExportMonitor.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.epoch_begin(epoch)` {#ExportMonitor.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.epoch_end(epoch)` {#ExportMonitor.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.every_n_post_step(step, session)` {#ExportMonitor.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.every_n_step_begin(step)` {#ExportMonitor.every_n_step_begin}
-
-Callback before every n'th step begins.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list` of tensors that will be evaluated at this step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.every_n_step_end(step, outputs)` {#ExportMonitor.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.export_dir` {#ExportMonitor.export_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.exports_to_keep` {#ExportMonitor.exports_to_keep}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.last_export_dir` {#ExportMonitor.last_export_dir}
-
-Returns the directory containing the last completed export.
-
-##### Returns:
-
- The string path to the exported directory. NB: this functionality was
- added on 2016/09/25; clients that depend on the return value may need
- to handle the case where this function returns None because the
- estimator being fitted does not yet return a value during export.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.post_step(step, session)` {#ExportMonitor.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.run_on_all_workers` {#ExportMonitor.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.set_estimator(estimator)` {#ExportMonitor.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.signature_fn` {#ExportMonitor.signature_fn}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.step_begin(step)` {#ExportMonitor.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.step_end(step, output)` {#ExportMonitor.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.GraphDump` {#GraphDump}
-
-Dumps almost all tensors in the graph at every step.
-
-Note: this is very expensive; prefer `PrintTensor` in production.
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.__init__(ignore_ops=None)` {#GraphDump.__init__}
-
-Initializes GraphDump monitor.
-
-##### Args:
-
-
-* <b>`ignore_ops`</b>: `list` of `string`. Names of ops to ignore.
- If None, `GraphDump.IGNORE_OPS` is used.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.begin(max_steps=None)` {#GraphDump.begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.compare(other_dump, step, atol=1e-06)` {#GraphDump.compare}
-
-Compares two `GraphDump` monitors and returns differences.
-
-##### Args:
-
-
-* <b>`other_dump`</b>: Another `GraphDump` monitor.
-* <b>`step`</b>: `int`, step to compare on.
-* <b>`atol`</b>: `float`, absolute tolerance in comparison of floating arrays.
-
-##### Returns:
-
-  A tuple of:
-
-* <b>`matched`</b>: `list` of keys that matched.
-* <b>`non_matched`</b>: `dict` of keys to a tuple of the two mismatched values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if a key in `data` is missing from `other_dump` at `step`.
-
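-For example, a sketch of comparing two dumps (assuming two models were fitted
-on identical data with `dump_a` and `dump_b` attached as monitors):
-
-    dump_a = tf.contrib.learn.monitors.GraphDump()
-    dump_b = tf.contrib.learn.monitors.GraphDump()
-    # ... fit the two models, passing monitors=[dump_a] and monitors=[dump_b] ...
-    matched, non_matched = dump_a.compare(dump_b, step=0, atol=1e-6)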
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.data` {#GraphDump.data}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.end(session=None)` {#GraphDump.end}
-
-Callback at the end of training/evaluation.
-
-##### Args:
-
-
-* <b>`session`</b>: A `tf.Session` object that can be used to run ops.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.epoch_begin(epoch)` {#GraphDump.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.epoch_end(epoch)` {#GraphDump.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.post_step(step, session)` {#GraphDump.post_step}
-
-Callback after the step is finished.
-
-Called after `step_end` and receives a session in which to perform extra
-`session.run` calls. It will also be called if a failure occurred during
-the step.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, global step of the model.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.run_on_all_workers` {#GraphDump.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.set_estimator(estimator)` {#GraphDump.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.step_begin(step)` {#GraphDump.step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.step_end(step, output)` {#GraphDump.step_end}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.LoggingTrainable` {#LoggingTrainable}
-
-Writes trainable variable values to the log every N steps.
-
-Writes the tensors of the trainable variables every `every_n` steps,
-starting with the `first_n`-th step.
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.__init__(scope=None, every_n=100, first_n=1)` {#LoggingTrainable.__init__}
-
-Initializes LoggingTrainable monitor.
-
-##### Args:
-
-
-* <b>`scope`</b>: An optional string to match variable names using re.match.
-* <b>`every_n`</b>: Print every N steps.
-* <b>`first_n`</b>: Print first N steps.
-
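-For example, a sketch logging all trainable variables under a hypothetical
-`dnn` scope every 100 steps:
-
-    log_trainable = tf.contrib.learn.monitors.LoggingTrainable(
-        scope='dnn', every_n=100, first_n=1)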
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.begin(max_steps=None)` {#LoggingTrainable.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.end(session=None)` {#LoggingTrainable.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.epoch_begin(epoch)` {#LoggingTrainable.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.epoch_end(epoch)` {#LoggingTrainable.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.every_n_post_step(step, session)` {#LoggingTrainable.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.every_n_step_begin(step)` {#LoggingTrainable.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.every_n_step_end(step, outputs)` {#LoggingTrainable.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.post_step(step, session)` {#LoggingTrainable.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.run_on_all_workers` {#LoggingTrainable.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.set_estimator(estimator)` {#LoggingTrainable.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.step_begin(step)` {#LoggingTrainable.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.step_end(step, output)` {#LoggingTrainable.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.NanLoss` {#NanLoss}
-
-NaN Loss monitor.
-
-Monitors loss and stops training if loss is NaN.
-Can either fail with an exception or just stop training.
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.__init__(loss_tensor, every_n_steps=100, fail_on_nan_loss=True)` {#NanLoss.__init__}
-
-Initializes NanLoss monitor.
-
-##### Args:
-
-
-* <b>`loss_tensor`</b>: `Tensor`, the loss tensor.
-* <b>`every_n_steps`</b>: `int`, run the check every this many steps.
-* <b>`fail_on_nan_loss`</b>: `bool`, whether to raise exception when loss is NaN.
-
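-For example, a sketch that stops training quietly instead of raising when the
-loss becomes NaN (`loss` is an assumed tensor):
-
-    nan_monitor = tf.contrib.learn.monitors.NanLoss(
-        loss, every_n_steps=100, fail_on_nan_loss=False)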
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.begin(max_steps=None)` {#NanLoss.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.end(session=None)` {#NanLoss.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.epoch_begin(epoch)` {#NanLoss.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.epoch_end(epoch)` {#NanLoss.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.every_n_post_step(step, session)` {#NanLoss.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.every_n_step_begin(step)` {#NanLoss.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.every_n_step_end(step, outputs)` {#NanLoss.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.post_step(step, session)` {#NanLoss.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.run_on_all_workers` {#NanLoss.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.set_estimator(estimator)` {#NanLoss.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.step_begin(step)` {#NanLoss.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.step_end(step, output)` {#NanLoss.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.PrintTensor` {#PrintTensor}
-
-Prints given tensors every N steps.
-
-This is an `EveryN` monitor and has consistent semantics for `every_n`
-and `first_n`.
-
-The tensors will be printed to the log, with `INFO` severity.
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.__init__(tensor_names, every_n=100, first_n=1)` {#PrintTensor.__init__}
-
-Initializes a PrintTensor monitor.
-
-##### Args:
-
-
-* <b>`tensor_names`</b>: `dict` of tag to tensor names or
- `iterable` of tensor names (strings).
-* <b>`every_n`</b>: `int`, print every N steps. See `PrintN`.
-* <b>`first_n`</b>: `int`, also print the first N steps. See `PrintN`.
-
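-For example, a sketch printing two hypothetical tensors under custom tags
-every 50 steps:
-
-    print_monitor = tf.contrib.learn.monitors.PrintTensor(
-        {'loss': 'loss:0', 'step': 'global_step:0'}, every_n=50)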
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.begin(max_steps=None)` {#PrintTensor.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.end(session=None)` {#PrintTensor.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.epoch_begin(epoch)` {#PrintTensor.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.epoch_end(epoch)` {#PrintTensor.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.every_n_post_step(step, session)` {#PrintTensor.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.every_n_step_begin(step)` {#PrintTensor.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.every_n_step_end(step, outputs)` {#PrintTensor.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.post_step(step, session)` {#PrintTensor.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.run_on_all_workers` {#PrintTensor.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.set_estimator(estimator)` {#PrintTensor.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.step_begin(step)` {#PrintTensor.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.step_end(step, output)` {#PrintTensor.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.StepCounter` {#StepCounter}
-
-Steps per second monitor.
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.__init__(every_n_steps=100, output_dir=None, summary_writer=None)` {#StepCounter.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.begin(max_steps=None)` {#StepCounter.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.end(session=None)` {#StepCounter.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.epoch_begin(epoch)` {#StepCounter.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.epoch_end(epoch)` {#StepCounter.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.every_n_post_step(step, session)` {#StepCounter.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.every_n_step_begin(step)` {#StepCounter.every_n_step_begin}
-
-Callback before every n'th step begins.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list` of tensors that will be evaluated at this step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.every_n_step_end(current_step, outputs)` {#StepCounter.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.post_step(step, session)` {#StepCounter.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.run_on_all_workers` {#StepCounter.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.set_estimator(estimator)` {#StepCounter.set_estimator}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.step_begin(step)` {#StepCounter.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.step_end(step, output)` {#StepCounter.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.StopAtStep` {#StopAtStep}
-
-Monitor to request stop at a specified step.
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.__init__(num_steps=None, last_step=None)` {#StopAtStep.__init__}
-
-Create a StopAtStep monitor.
-
-This monitor requests stop after either a number of steps have been
-executed or a last step has been reached. Only one of the two options can be
-specified.
-
-If `num_steps` is specified, it indicates the number of steps to execute
-after `begin()` is called. If instead `last_step` is specified, it
-indicates the last step we want to execute, as passed to the `step_begin()`
-call.
-
-##### Args:
-
-
-* <b>`num_steps`</b>: Number of steps to execute.
-* <b>`last_step`</b>: Step after which to stop.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
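-For example, sketches of the two mutually exclusive ways to construct it:
-
-    # Stop after 10000 steps have been executed from begin().
-    stop_after = tf.contrib.learn.monitors.StopAtStep(num_steps=10000)
-    # Or: stop once the global step reaches 250000.
-    stop_at = tf.contrib.learn.monitors.StopAtStep(last_step=250000)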
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.begin(max_steps=None)` {#StopAtStep.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.end(session=None)` {#StopAtStep.end}
-
-Callback at the end of training/evaluation.
-
-##### Args:
-
-
-* <b>`session`</b>: A `tf.Session` object that can be used to run ops.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.epoch_begin(epoch)` {#StopAtStep.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.epoch_end(epoch)` {#StopAtStep.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.post_step(step, session)` {#StopAtStep.post_step}
-
-Callback after the step is finished.
-
-Called after `step_end` and receives a session in which to perform extra
-`session.run` calls. It will also be called if a failure occurred during
-the step.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, global step of the model.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.run_on_all_workers` {#StopAtStep.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.set_estimator(estimator)` {#StopAtStep.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.step_begin(step)` {#StopAtStep.step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.step_end(step, output)` {#StopAtStep.step_end}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.SummarySaver` {#SummarySaver}
-
-Saves summaries every N steps.
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.__init__(summary_op, save_steps=100, output_dir=None, summary_writer=None, scaffold=None)` {#SummarySaver.__init__}
-
-Initializes a `SummarySaver` monitor.
-
-##### Args:
-
-
-* <b>`summary_op`</b>: `Tensor` of type `string`. A serialized `Summary` protocol
- buffer, as output by TF summary methods like `summary.scalar` or
- `summary.merge_all`.
-* <b>`save_steps`</b>: `int`, save summaries every N steps. See `EveryN`.
-* <b>`output_dir`</b>: `string`, the directory to save the summaries to. Only used
- if no `summary_writer` is supplied.
-* <b>`summary_writer`</b>: `SummaryWriter`. If `None` and an `output_dir` was passed,
- one will be created accordingly.
-* <b>`scaffold`</b>: `Scaffold` to get summary_op if it's not provided.
-
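-For example, a sketch saving merged summaries every 100 steps to an assumed
-directory:
-
-    import tensorflow as tf
-
-    # Assumes summaries have already been defined in the default graph.
-    summary_monitor = tf.contrib.learn.monitors.SummarySaver(
-        summary_op=tf.summary.merge_all(),
-        save_steps=100,
-        output_dir='/tmp/my_model/summaries')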
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.begin(max_steps=None)` {#SummarySaver.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.end(session=None)` {#SummarySaver.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.epoch_begin(epoch)` {#SummarySaver.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.epoch_end(epoch)` {#SummarySaver.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.every_n_post_step(step, session)` {#SummarySaver.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.every_n_step_begin(step)` {#SummarySaver.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.every_n_step_end(step, outputs)` {#SummarySaver.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.post_step(step, session)` {#SummarySaver.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.run_on_all_workers` {#SummarySaver.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.set_estimator(estimator)` {#SummarySaver.set_estimator}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.step_begin(step)` {#SummarySaver.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.step_end(step, output)` {#SummarySaver.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
-    scalars, for scalar tensors, or Numpy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.ValidationMonitor` {#ValidationMonitor}
-
-Runs evaluation of a given estimator, at most every N steps.
-
-Note that the evaluation is done based on the saved checkpoint, which will
-usually be older than the current step.
-
-Can do early stopping on validation metrics if `early_stopping_rounds` is
-provided.
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.__init__(x=None, y=None, input_fn=None, batch_size=None, eval_steps=None, every_n_steps=100, metrics=None, hooks=None, early_stopping_rounds=None, early_stopping_metric='loss', early_stopping_metric_minimize=True, name=None)` {#ValidationMonitor.__init__}
-
-Initializes a ValidationMonitor.
-
-##### Args:
-
-
-* <b>`x`</b>: See `BaseEstimator.evaluate`.
-* <b>`y`</b>: See `BaseEstimator.evaluate`.
-* <b>`input_fn`</b>: See `BaseEstimator.evaluate`.
-* <b>`batch_size`</b>: See `BaseEstimator.evaluate`.
-* <b>`eval_steps`</b>: See `BaseEstimator.evaluate`.
-* <b>`every_n_steps`</b>: Check for new checkpoints to evaluate every N steps. If a
- new checkpoint is found, it is evaluated. See `EveryN`.
-* <b>`metrics`</b>: See `BaseEstimator.evaluate`.
-* <b>`hooks`</b>: A list of `SessionRunHook` hooks to pass to the
- `Estimator`'s `evaluate` function.
-* <b>`early_stopping_rounds`</b>: `int`. If the metric indicated by
-    `early_stopping_metric` does not improve, in the direction given by
-    `early_stopping_metric_minimize`, for this many steps, then training
-    will be stopped.
-* <b>`early_stopping_metric`</b>: `string`, name of the metric to check for early
- stopping.
-* <b>`early_stopping_metric_minimize`</b>: `bool`, True if `early_stopping_metric` is
- expected to decrease (thus early stopping occurs when this metric
- stops decreasing), False if `early_stopping_metric` is expected to
- increase. Typically, `early_stopping_metric_minimize` is True for
- loss metrics like mean squared error, and False for performance
- metrics like accuracy.
-* <b>`name`</b>: See `BaseEstimator.evaluate`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both x and input_fn are provided.
-
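-For example, a sketch with early stopping on validation loss (`eval_input_fn`,
-`estimator`, and the step counts are assumptions):
-
-    validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(
-        input_fn=eval_input_fn,
-        every_n_steps=500,
-        early_stopping_rounds=2000,
-        early_stopping_metric='loss',
-        early_stopping_metric_minimize=True)
-    estimator.fit(input_fn=train_input_fn, steps=100000,
-                  monitors=[validation_monitor])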
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.begin(max_steps=None)` {#ValidationMonitor.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.best_step` {#ValidationMonitor.best_step}
-
-Returns the step at which the best early stopping metric was found.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.best_value` {#ValidationMonitor.best_value}
-
-Returns the best early stopping metric value found so far.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.early_stopped` {#ValidationMonitor.early_stopped}
-
-Returns True if this monitor caused an early stop.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.end(session=None)` {#ValidationMonitor.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.epoch_begin(epoch)` {#ValidationMonitor.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.epoch_end(epoch)` {#ValidationMonitor.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.every_n_post_step(step, session)` {#ValidationMonitor.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.every_n_step_begin(step)` {#ValidationMonitor.every_n_step_begin}
-
-Callback before every n'th step begins.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list` of tensors that will be evaluated at this step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.every_n_step_end(step, outputs)` {#ValidationMonitor.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.post_step(step, session)` {#ValidationMonitor.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.run_on_all_workers` {#ValidationMonitor.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.set_estimator(estimator)` {#ValidationMonitor.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.step_begin(step)` {#ValidationMonitor.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.step_end(step, output)` {#ValidationMonitor.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-  the values that resulted from running these tensors. Values may be either
-  scalars (for scalar tensors) or NumPy arrays (for non-scalar tensors).
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-
-
-## Other Functions and Classes
-- - -
-
-### `class tf.contrib.learn.monitors.RunHookAdapterForMonitors` {#RunHookAdapterForMonitors}
-
-Wraps monitors into a SessionRunHook.
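-
-A minimal usage sketch, assuming the TF 1.x contrib packages (the empty
-monitor list is a hypothetical placeholder for `BaseMonitor` instances you
-have already constructed):
-
-```python
-import tensorflow as tf
-
-monitors_lib = tf.contrib.learn.monitors
-
-my_monitors = []  # hypothetical: fill with BaseMonitor instances
-hook = monitors_lib.RunHookAdapterForMonitors(my_monitors)
-# `hook` can now be passed anywhere a `SessionRunHook` is accepted.
-```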
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.__init__(monitors)` {#RunHookAdapterForMonitors.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.after_create_session(session, coord)` {#RunHookAdapterForMonitors.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.after_run(run_context, run_values)` {#RunHookAdapterForMonitors.after_run}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.before_run(run_context)` {#RunHookAdapterForMonitors.before_run}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.begin()` {#RunHookAdapterForMonitors.begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.end(session)` {#RunHookAdapterForMonitors.end}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.learn.monitors.SummaryWriterCache` {#SummaryWriterCache}
-
-Cache for file writers.
-
-This class caches file writers, one per directory.
-- - -
-
-#### `tf.contrib.learn.monitors.SummaryWriterCache.clear()` {#SummaryWriterCache.clear}
-
-Clear cached summary writers. Currently only used for unit tests.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummaryWriterCache.get(logdir)` {#SummaryWriterCache.get}
-
-Returns the FileWriter for the specified directory.
-
-##### Args:
-
-
-* <b>`logdir`</b>: str, name of the directory.
-
-##### Returns:
-
- A `FileWriter`.
-
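-For example, repeated calls with the same directory return the same cached
-writer (a small sketch; the path is arbitrary):
-
-```python
-import tensorflow as tf
-
-monitors_lib = tf.contrib.learn.monitors
-
-w1 = monitors_lib.SummaryWriterCache.get("/tmp/logdir")
-w2 = monitors_lib.SummaryWriterCache.get("/tmp/logdir")
-assert w1 is w2  # one writer per directory
-```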
-
-
-- - -
-
-### `tf.contrib.learn.monitors.replace_monitors_with_hooks(monitors_or_hooks, estimator)` {#replace_monitors_with_hooks}
-
-Wraps monitors with a hook.
-
-`Monitor` is deprecated in favor of `SessionRunHook`. If you're using a
-monitor, you can wrap it with a hook using this function. It is recommended
-to implement a hook version of your monitor.
-
-##### Args:
-
-
-* <b>`monitors_or_hooks`</b>: A `list` that may contain both monitors and hooks.
-* <b>`estimator`</b>: An `Estimator` that the monitors will be used with.
-
-##### Returns:
-
-  A list of hooks. Any monitor in the given list is replaced by a hook.
-
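-A hedged sketch of typical use, assuming the TF 1.x contrib packages (the
-no-op `Estimator` below is only for illustration -- pass your real one):
-
-```python
-import tensorflow as tf
-
-monitors_lib = tf.contrib.learn.monitors
-
-estimator = tf.contrib.learn.Estimator(model_fn=None)  # illustration only
-step_counter = monitors_lib.StepCounter(every_n_steps=100)  # a monitor
-stop_hook = tf.train.StopAtStepHook(num_steps=1000)  # already a hook
-
-# The monitor is wrapped into a hook; the hook passes through unchanged.
-hooks = monitors_lib.replace_monitors_with_hooks(
-    [step_counter, stop_hook], estimator)
-```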
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md b/tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md
deleted file mode 100644
index 93fc775143..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.legacy_seq2seq.md
+++ /dev/null
@@ -1,587 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Sequence to Sequence (contrib)
-[TOC]
-
-Deprecated library for creating sequence-to-sequence models in TensorFlow.
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.attention_decoder(decoder_inputs, initial_state, attention_states, cell, output_size=None, num_heads=1, loop_function=None, dtype=None, scope=None, initial_state_attention=False)` {#attention_decoder}
-
-RNN decoder with attention for the sequence-to-sequence model.
-
-In this context "attention" means that, during decoding, the RNN can look up
-information in the additional tensor attention_states, and it does this by
-focusing on a few entries from the tensor. This model has proven to yield
-especially good results in a number of sequence-to-sequence tasks. This
-implementation is based on http://arxiv.org/abs/1412.7449 (see below for
-details). It is recommended for complex sequence-to-sequence tasks.
-
-##### Args:
-
-
-* <b>`decoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`initial_state`</b>: 2D Tensor [batch_size x cell.state_size].
-* <b>`attention_states`</b>: 3D Tensor [batch_size x attn_length x attn_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`output_size`</b>: Size of the output vectors; if None, we use cell.output_size.
-* <b>`num_heads`</b>: Number of attention heads that read from attention_states.
-* <b>`loop_function`</b>: If not None, this function will be applied to the i-th
-  output in order to generate the (i+1)-th input, and decoder_inputs will be
-  ignored,
- except for the first element ("GO" symbol). This can be used for decoding,
- but also for training to emulate http://arxiv.org/abs/1506.03099.
- Signature -- loop_function(prev, i) = next
- * prev is a 2D Tensor of shape [batch_size x output_size],
- * i is an integer, the step number (when advanced control is needed),
- * next is a 2D Tensor of shape [batch_size x input_size].
-* <b>`dtype`</b>: The dtype to use for the RNN initial state (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; default: "attention_decoder".
-* <b>`initial_state_attention`</b>: If False (default), initial attentions are zero.
- If True, initialize the attentions from the initial state and attention
- states -- useful when we wish to resume decoding from a previously
- stored decoder state and attention states.
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors of
- shape [batch_size x output_size]. These represent the generated outputs.
- Output i is computed from input i (which is either the i-th element
- of decoder_inputs or loop_function(output {i-1}, i)) as follows.
- First, we run the cell on a combination of the input and previous
- attention masks:
- cell_output, new_state = cell(linear(input, prev_attn), prev_state).
- Then, we calculate new attention masks:
- new_attn = softmax(V^T * tanh(W * attention_states + U * new_state))
- and then we calculate the output:
- output = linear(cell_output, new_attn).
-* <b>`state`</b>: The state of each decoder cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: when num_heads is not positive, there are no inputs, shapes
- of attention_states are not set, or input size cannot be inferred
- from the input.
-
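-A minimal sketch of calling this function (TF 1.x; the zero tensors stand in
-for real decoder inputs and encoder attention states):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.legacy_seq2seq import attention_decoder
-
-batch_size, input_size, steps = 16, 10, 4
-attn_length, attn_size = 7, 12
-
-cell = tf.contrib.rnn.GRUCell(num_units=20)
-decoder_inputs = [tf.zeros([batch_size, input_size]) for _ in range(steps)]
-initial_state = cell.zero_state(batch_size, tf.float32)
-# In a real model these would be the encoder outputs, stacked over time.
-attention_states = tf.zeros([batch_size, attn_length, attn_size])
-
-outputs, state = attention_decoder(
-    decoder_inputs, initial_state, attention_states, cell)
-# len(outputs) == steps; each outputs[i] has shape [batch_size, 20].
-```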
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.basic_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, dtype=tf.float32, scope=None)` {#basic_rnn_seq2seq}
-
-Basic RNN sequence-to-sequence model.
-
-This model first runs an RNN to encode encoder_inputs into a state vector,
-then runs decoder, initialized with the last encoder state, on decoder_inputs.
-Encoder and decoder use the same RNN cell type, but don't share parameters.
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`decoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`dtype`</b>: The dtype of the initial state of the RNN cell (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; default: "basic_rnn_seq2seq".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_size] containing the generated outputs.
-* <b>`state`</b>: The state of each decoder cell in the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
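-For example, a toy graph wiring encoder and decoder inputs through one cell
-(a sketch assuming TF 1.x; the zero tensors stand in for real data):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.legacy_seq2seq import basic_rnn_seq2seq
-
-batch_size, input_size, steps = 32, 8, 5
-encoder_inputs = [tf.zeros([batch_size, input_size]) for _ in range(steps)]
-decoder_inputs = [tf.zeros([batch_size, input_size]) for _ in range(steps)]
-cell = tf.contrib.rnn.GRUCell(num_units=input_size)
-
-outputs, state = basic_rnn_seq2seq(encoder_inputs, decoder_inputs, cell)
-# Each outputs[i]: [batch_size, 8]; state: [batch_size, 8].
-```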
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.embedding_attention_decoder(decoder_inputs, initial_state, attention_states, cell, num_symbols, embedding_size, num_heads=1, output_size=None, output_projection=None, feed_previous=False, update_embedding_for_previous=True, dtype=None, scope=None, initial_state_attention=False)` {#embedding_attention_decoder}
-
-RNN decoder with embedding and attention and a pure-decoding option.
-
-##### Args:
-
-
-* <b>`decoder_inputs`</b>: A list of 1D batch-sized int32 Tensors (decoder inputs).
-* <b>`initial_state`</b>: 2D Tensor [batch_size x cell.state_size].
-* <b>`attention_states`</b>: 3D Tensor [batch_size x attn_length x attn_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function.
-* <b>`num_symbols`</b>: Integer, how many symbols come into the embedding.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`num_heads`</b>: Number of attention heads that read from attention_states.
-* <b>`output_size`</b>: Size of the output vectors; if None, use cell.output_size.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
- biases; W has shape [output_size x num_symbols] and B has shape
- [num_symbols]; if provided and feed_previous=True, each fed previous
-  output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean; if True, only the first of decoder_inputs will be
- used (the "GO" symbol), and all other decoder inputs will be generated by:
-    next = embedding_lookup(embedding, argmax(previous_output)).
- In effect, this implements a greedy decoder. It can also be used
- during training to emulate http://arxiv.org/abs/1506.03099.
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`update_embedding_for_previous`</b>: Boolean; if False and feed_previous=True,
- only the embedding for the first symbol of decoder_inputs (the "GO"
- symbol) will be updated by back propagation. Embeddings for the symbols
- generated from the decoder itself remain unchanged. This parameter has
- no effect if feed_previous=False.
-* <b>`dtype`</b>: The dtype to use for the RNN initial states (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_attention_decoder".
-* <b>`initial_state_attention`</b>: If False (default), initial attentions are zero.
- If True, initialize the attentions from the initial state and attention
- states -- useful when we wish to resume decoding from a previously
- stored decoder state and attention states.
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_size] containing the generated outputs.
-* <b>`state`</b>: The state of each decoder cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When output_projection has the wrong shape.
-
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.embedding_attention_seq2seq(encoder_inputs, decoder_inputs, cell, num_encoder_symbols, num_decoder_symbols, embedding_size, num_heads=1, output_projection=None, feed_previous=False, dtype=None, scope=None, initial_state_attention=False)` {#embedding_attention_seq2seq}
-
-Embedding sequence-to-sequence model with attention.
-
-This model first embeds encoder_inputs by a newly created embedding (of shape
-[num_encoder_symbols x input_size]). Then it runs an RNN to encode
-embedded encoder_inputs into a state vector. It keeps the outputs of this
-RNN at every step to use for attention later. Next, it embeds decoder_inputs
-by another newly created embedding (of shape [num_decoder_symbols x
-input_size]). Then it runs attention decoder, initialized with the last
-encoder state, on embedded decoder_inputs and attending to encoder outputs.
-
-Warning: when output_projection is None, the size of the attention vectors
-and variables will be made proportional to num_decoder_symbols, which can be
-large.
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`decoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`num_encoder_symbols`</b>: Integer; number of symbols on the encoder side.
-* <b>`num_decoder_symbols`</b>: Integer; number of symbols on the decoder side.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`num_heads`</b>: Number of attention heads that read from attention_states.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
- biases; W has shape [output_size x num_decoder_symbols] and B has
- shape [num_decoder_symbols]; if provided and feed_previous=True, each
-  fed previous output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean or scalar Boolean Tensor; if True, only the first
- of decoder_inputs will be used (the "GO" symbol), and all other decoder
- inputs will be taken from previous outputs (as in embedding_rnn_decoder).
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`dtype`</b>: The dtype of the initial RNN state (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_attention_seq2seq".
-* <b>`initial_state_attention`</b>: If False (default), initial attentions are zero.
- If True, initialize the attentions from the initial state and attention
- states.
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x num_decoder_symbols] containing the generated
- outputs.
-* <b>`state`</b>: The state of each decoder cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
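-A minimal sketch of building this model on placeholder token ids (TF 1.x;
-all sizes are arbitrary):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.legacy_seq2seq import embedding_attention_seq2seq
-
-batch_size, steps = 8, 5
-encoder_inputs = [tf.placeholder(tf.int32, [batch_size])
-                  for _ in range(steps)]
-decoder_inputs = [tf.placeholder(tf.int32, [batch_size])
-                  for _ in range(steps)]
-cell = tf.contrib.rnn.GRUCell(num_units=32)
-
-outputs, state = embedding_attention_seq2seq(
-    encoder_inputs, decoder_inputs, cell,
-    num_encoder_symbols=100, num_decoder_symbols=80,
-    embedding_size=16, feed_previous=False)
-# Each outputs[i] has shape [batch_size, num_decoder_symbols] = [8, 80].
-```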
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.embedding_rnn_decoder(decoder_inputs, initial_state, cell, num_symbols, embedding_size, output_projection=None, feed_previous=False, update_embedding_for_previous=True, scope=None)` {#embedding_rnn_decoder}
-
-RNN decoder with embedding and a pure-decoding option.
-
-##### Args:
-
-
-* <b>`decoder_inputs`</b>: A list of 1D batch-sized int32 Tensors (decoder inputs).
-* <b>`initial_state`</b>: 2D Tensor [batch_size x cell.state_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function.
-* <b>`num_symbols`</b>: Integer, how many symbols come into the embedding.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
- biases; W has shape [output_size x num_symbols] and B has
- shape [num_symbols]; if provided and feed_previous=True, each fed
-  previous output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean; if True, only the first of decoder_inputs will be
- used (the "GO" symbol), and all other decoder inputs will be generated by:
-    next = embedding_lookup(embedding, argmax(previous_output)).
- In effect, this implements a greedy decoder. It can also be used
- during training to emulate http://arxiv.org/abs/1506.03099.
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`update_embedding_for_previous`</b>: Boolean; if False and feed_previous=True,
- only the embedding for the first symbol of decoder_inputs (the "GO"
- symbol) will be updated by back propagation. Embeddings for the symbols
- generated from the decoder itself remain unchanged. This parameter has
- no effect if feed_previous=False.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_rnn_decoder".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors. The
- output is of shape [batch_size x cell.output_size] when
- output_projection is not None (and represents the dense representation
- of predicted tokens). It is of shape [batch_size x num_decoder_symbols]
- when output_projection is None.
-* <b>`state`</b>: The state of each decoder cell in each time-step. This is a list
- with length len(decoder_inputs) -- one item for each time-step.
-  Each item is a 2D Tensor of shape [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When output_projection has the wrong shape.
-
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, num_encoder_symbols, num_decoder_symbols, embedding_size, output_projection=None, feed_previous=False, dtype=None, scope=None)` {#embedding_rnn_seq2seq}
-
-Embedding RNN sequence-to-sequence model.
-
-This model first embeds encoder_inputs by a newly created embedding (of shape
-[num_encoder_symbols x input_size]). Then it runs an RNN to encode
-embedded encoder_inputs into a state vector. Next, it embeds decoder_inputs
-by another newly created embedding (of shape [num_decoder_symbols x
-input_size]). Then it runs RNN decoder, initialized with the last
-encoder state, on embedded decoder_inputs.
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`decoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`num_encoder_symbols`</b>: Integer; number of symbols on the encoder side.
-* <b>`num_decoder_symbols`</b>: Integer; number of symbols on the decoder side.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
- biases; W has shape [output_size x num_decoder_symbols] and B has
- shape [num_decoder_symbols]; if provided and feed_previous=True, each
-  fed previous output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean or scalar Boolean Tensor; if True, only the first
- of decoder_inputs will be used (the "GO" symbol), and all other decoder
- inputs will be taken from previous outputs (as in embedding_rnn_decoder).
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`dtype`</b>: The dtype of the initial state for both the encoder and decoder
- rnn cells (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_rnn_seq2seq"
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors. The
- output is of shape [batch_size x cell.output_size] when
- output_projection is not None (and represents the dense representation
- of predicted tokens). It is of shape [batch_size x num_decoder_symbols]
- when output_projection is None.
-* <b>`state`</b>: The state of each decoder cell in each time-step. This is a list
- with length len(decoder_inputs) -- one item for each time-step.
-  Each item is a 2D Tensor of shape [batch_size x cell.state_size].
-
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.embedding_tied_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, num_symbols, embedding_size, num_decoder_symbols=None, output_projection=None, feed_previous=False, dtype=None, scope=None)` {#embedding_tied_rnn_seq2seq}
-
-Embedding RNN sequence-to-sequence model with tied (shared) parameters.
-
-This model first embeds encoder_inputs by a newly created embedding (of shape
-[num_symbols x input_size]). Then it runs an RNN to encode embedded
-encoder_inputs into a state vector. Next, it embeds decoder_inputs using
-the same embedding. Then it runs RNN decoder, initialized with the last
-encoder state, on embedded decoder_inputs. The decoder output is over symbols
-from 0 to num_decoder_symbols - 1 if num_decoder_symbols is not None;
-otherwise it is over symbols 0 to num_symbols - 1.
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`decoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`num_symbols`</b>: Integer; number of symbols for both encoder and decoder.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`num_decoder_symbols`</b>: Integer; number of output symbols for decoder. If
- provided, the decoder output is over symbols 0 to num_decoder_symbols - 1.
- Otherwise, decoder output is over symbols 0 to num_symbols - 1. Note that
- this assumes that the vocabulary is set up such that the first
- num_decoder_symbols of num_symbols are part of decoding.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
- biases; W has shape [output_size x num_symbols] and B has
- shape [num_symbols]; if provided and feed_previous=True, each
-  fed previous output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean or scalar Boolean Tensor; if True, only the first
- of decoder_inputs will be used (the "GO" symbol), and all other decoder
- inputs will be taken from previous outputs (as in embedding_rnn_decoder).
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`dtype`</b>: The dtype to use for the initial RNN states (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_tied_rnn_seq2seq".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_symbols] containing the generated
-  outputs, where output_symbols = num_decoder_symbols if
-  num_decoder_symbols is not None; otherwise output_symbols = num_symbols.
-* <b>`state`</b>: The state of each decoder cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When output_projection has the wrong shape.
-
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.model_with_buckets(encoder_inputs, decoder_inputs, targets, weights, buckets, seq2seq, softmax_loss_function=None, per_example_loss=False, name=None)` {#model_with_buckets}
-
-Create a sequence-to-sequence model with support for bucketing.
-
-The seq2seq argument is a function that defines a sequence-to-sequence model,
-e.g., seq2seq = lambda x, y: basic_rnn_seq2seq(
- x, y, core_rnn_cell.GRUCell(24))
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of Tensors to feed the encoder; first seq2seq input.
-* <b>`decoder_inputs`</b>: A list of Tensors to feed the decoder; second seq2seq input.
-* <b>`targets`</b>: A list of 1D batch-sized int32 Tensors (desired output sequence).
-* <b>`weights`</b>: List of 1D batch-sized float-Tensors to weight the targets.
-* <b>`buckets`</b>: A list of pairs of (input size, output size) for each bucket.
-* <b>`seq2seq`</b>: A sequence-to-sequence model function; it takes two inputs that
- agree with encoder_inputs and decoder_inputs, and returns a pair
- consisting of outputs and states (as, e.g., basic_rnn_seq2seq).
-* <b>`softmax_loss_function`</b>: Function (inputs-batch, labels-batch) -> loss-batch
- to be used instead of the standard softmax (the default if this is None).
-* <b>`per_example_loss`</b>: Boolean. If set, the returned loss will be a batch-sized
- tensor of losses for each sequence in the batch. If unset, it will be
- a scalar with the averaged loss from all examples.
-* <b>`name`</b>: Optional name for this operation, defaults to "model_with_buckets".
-
-##### Returns:
-
- A tuple of the form (outputs, losses), where:
-
-* <b>`outputs`</b>: The outputs for each bucket. Its j'th element consists of a list
- of 2D Tensors. The shape of output tensors can be either
- [batch_size x output_size] or [batch_size x num_decoder_symbols]
- depending on the seq2seq model used.
-* <b>`losses`</b>: List of scalar Tensors, representing losses for each bucket, or,
- if per_example_loss is set, a list of 1D batch-sized float Tensors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the length of encoder_inputs, targets, or weights is smaller
- than the largest (last) bucket.
-
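-A sketch of wiring two buckets through an embedding seq2seq model (TF 1.x;
-all sizes are arbitrary, and feeds must be padded to a bucket size):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.legacy_seq2seq import (embedding_rnn_seq2seq,
-                                               model_with_buckets)
-
-buckets = [(4, 4), (8, 8)]
-batch_size = 8
-max_in, max_out = buckets[-1]
-
-encoder_inputs = [tf.placeholder(tf.int32, [batch_size])
-                  for _ in range(max_in)]
-decoder_inputs = [tf.placeholder(tf.int32, [batch_size])
-                  for _ in range(max_out)]
-targets = [tf.placeholder(tf.int32, [batch_size]) for _ in range(max_out)]
-weights = [tf.placeholder(tf.float32, [batch_size]) for _ in range(max_out)]
-
-def seq2seq_f(x, y):
-  cell = tf.contrib.rnn.GRUCell(24)
-  return embedding_rnn_seq2seq(x, y, cell, num_encoder_symbols=50,
-                               num_decoder_symbols=50, embedding_size=12)
-
-# One (outputs, loss) pair per bucket; variables are shared across buckets.
-outputs, losses = model_with_buckets(encoder_inputs, decoder_inputs,
-                                     targets, weights, buckets, seq2seq_f)
-```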
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.one2many_rnn_seq2seq(encoder_inputs, decoder_inputs_dict, enc_cell, dec_cells_dict, num_encoder_symbols, num_decoder_symbols_dict, embedding_size, feed_previous=False, dtype=None, scope=None)` {#one2many_rnn_seq2seq}
-
-One-to-many RNN sequence-to-sequence model (multi-task).
-
-This is a multi-task sequence-to-sequence model with one encoder and multiple
-decoders. A reference for multi-task sequence-to-sequence learning can be
-found here: http://arxiv.org/abs/1511.06114
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`decoder_inputs_dict`</b>: A dictionary mapping decoder name (string) to
- the corresponding decoder_inputs; each decoder_inputs is a list of 1D
- Tensors of shape [batch_size]; num_decoders is defined as
- len(decoder_inputs_dict).
-* <b>`enc_cell`</b>: core_rnn_cell.RNNCell defining the encoder cell function and size.
-* <b>`dec_cells_dict`</b>: A dictionary mapping decoder name (string) to an
- instance of core_rnn_cell.RNNCell.
-* <b>`num_encoder_symbols`</b>: Integer; number of symbols on the encoder side.
-* <b>`num_decoder_symbols_dict`</b>: A dictionary mapping decoder name (string) to an
- integer specifying number of symbols for the corresponding decoder;
- len(num_decoder_symbols_dict) must be equal to num_decoders.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`feed_previous`</b>: Boolean or scalar Boolean Tensor; if True, only the first of
- decoder_inputs will be used (the "GO" symbol), and all other decoder
- inputs will be taken from previous outputs (as in embedding_rnn_decoder).
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`dtype`</b>: The dtype of the initial state for both the encoder and decoder
- rnn cells (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "one2many_rnn_seq2seq"
-
-##### Returns:
-
- A tuple of the form (outputs_dict, state_dict), where:
-
-* <b>`outputs_dict`</b>: A mapping from decoder name (string) to a list of the same
-  length as decoder_inputs_dict[name]; each element in the list is a 2D
-  Tensor with shape [batch_size x num_decoder_symbols_dict[name]]
- containing the generated outputs.
-* <b>`state_dict`</b>: A mapping from decoder name (string) to the final state of the
- corresponding decoder RNN; it is a 2D Tensor of shape
- [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if enc_cell or any of the dec_cells are not instances of RNNCell.
-* <b>`ValueError`</b>: if len(dec_cells) != len(decoder_inputs_dict).
-
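-A minimal sketch with two hypothetical decoders, "fr" and "de" (TF 1.x;
-all sizes are arbitrary):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.legacy_seq2seq import one2many_rnn_seq2seq
-
-batch_size, steps = 4, 3
-encoder_inputs = [tf.placeholder(tf.int32, [batch_size])
-                  for _ in range(steps)]
-decoder_inputs_dict = {
-    "fr": [tf.placeholder(tf.int32, [batch_size]) for _ in range(steps)],
-    "de": [tf.placeholder(tf.int32, [batch_size]) for _ in range(steps)],
-}
-enc_cell = tf.contrib.rnn.GRUCell(16)
-dec_cells_dict = {"fr": tf.contrib.rnn.GRUCell(16),
-                  "de": tf.contrib.rnn.GRUCell(16)}
-
-outputs_dict, state_dict = one2many_rnn_seq2seq(
-    encoder_inputs, decoder_inputs_dict, enc_cell, dec_cells_dict,
-    num_encoder_symbols=50,
-    num_decoder_symbols_dict={"fr": 40, "de": 30},
-    embedding_size=8)
-```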
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.rnn_decoder(decoder_inputs, initial_state, cell, loop_function=None, scope=None)` {#rnn_decoder}
-
-RNN decoder for the sequence-to-sequence model.
-
-##### Args:
-
-
-* <b>`decoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`initial_state`</b>: 2D Tensor with shape [batch_size x cell.state_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`loop_function`</b>: If not None, this function will be applied to the i-th output
-  in order to generate the (i+1)-th input, and decoder_inputs will be ignored,
- except for the first element ("GO" symbol). This can be used for decoding,
- but also for training to emulate http://arxiv.org/abs/1506.03099.
- Signature -- loop_function(prev, i) = next
- * prev is a 2D Tensor of shape [batch_size x output_size],
- * i is an integer, the step number (when advanced control is needed),
- * next is a 2D Tensor of shape [batch_size x input_size].
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn_decoder".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_size] containing generated outputs.
-* <b>`state`</b>: The state of each cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
- (Note that in some cases, like basic RNN cell or GRU cell, outputs and
- states can be the same. They are different for LSTM cells though.)
-
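-A toy sketch of a loop_function that feeds each output back in as the next
-input (TF 1.x; input_size is chosen equal to num_units so the shapes match):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.legacy_seq2seq import rnn_decoder
-
-batch_size, num_units, steps = 4, 6, 5
-cell = tf.contrib.rnn.BasicRNNCell(num_units)
-decoder_inputs = [tf.zeros([batch_size, num_units]) for _ in range(steps)]
-initial_state = cell.zero_state(batch_size, tf.float32)
-
-def loop_fn(prev, i):
-  # Feed the previous output straight back in. A real decoder would
-  # typically project `prev` to logits and embed the argmax instead.
-  return prev
-
-outputs, state = rnn_decoder(decoder_inputs, initial_state, cell,
-                             loop_function=loop_fn)
-```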
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.sequence_loss(logits, targets, weights, average_across_timesteps=True, average_across_batch=True, softmax_loss_function=None, name=None)` {#sequence_loss}
-
-Weighted cross-entropy loss for a sequence of logits, batch-collapsed.
-
-##### Args:
-
-
-* <b>`logits`</b>: List of 2D Tensors of shape [batch_size x num_decoder_symbols].
-* <b>`targets`</b>: List of 1D batch-sized int32 Tensors of the same length as logits.
-* <b>`weights`</b>: List of 1D batch-sized float-Tensors of the same length as logits.
-* <b>`average_across_timesteps`</b>: If set, divide the returned cost by the total
- label weight.
-* <b>`average_across_batch`</b>: If set, divide the returned cost by the batch size.
-* <b>`softmax_loss_function`</b>: Function (inputs-batch, labels-batch) -> loss-batch
- to be used instead of the standard softmax (the default if this is None).
-* <b>`name`</b>: Optional name for this operation, defaults to "sequence_loss".
-
-##### Returns:
-
- A scalar float Tensor: The average log-perplexity per symbol (weighted).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If len(logits) is different from len(targets) or len(weights).
-
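-For example, computing the scalar loss over a short batch of random logits
-(a sketch assuming TF 1.x):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.legacy_seq2seq import sequence_loss
-
-steps, batch_size, num_symbols = 3, 4, 10
-logits = [tf.random_normal([batch_size, num_symbols]) for _ in range(steps)]
-targets = [tf.zeros([batch_size], dtype=tf.int32) for _ in range(steps)]
-weights = [tf.ones([batch_size]) for _ in range(steps)]
-
-loss = sequence_loss(logits, targets, weights)  # scalar Tensor
-```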
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.sequence_loss_by_example(logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None)` {#sequence_loss_by_example}
-
-Weighted cross-entropy loss for a sequence of logits (per example).
-
-##### Args:
-
-
-* <b>`logits`</b>: List of 2D Tensors of shape [batch_size x num_decoder_symbols].
-* <b>`targets`</b>: List of 1D batch-sized int32 Tensors of the same length as logits.
-* <b>`weights`</b>: List of 1D batch-sized float-Tensors of the same length as logits.
-* <b>`average_across_timesteps`</b>: If set, divide the returned cost by the total
- label weight.
-* <b>`softmax_loss_function`</b>: Function (labels-batch, inputs-batch) -> loss-batch
- to be used instead of the standard softmax (the default if this is None).
-* <b>`name`</b>: Optional name for this operation, default: "sequence_loss_by_example".
-
-##### Returns:
-
- 1D batch-sized float Tensor: The log-perplexity for each sequence.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If len(logits) is different from len(targets) or len(weights).
-
-
-- - -
-
-### `tf.contrib.legacy_seq2seq.tied_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, loop_function=None, dtype=tf.float32, scope=None)` {#tied_rnn_seq2seq}
-
-RNN sequence-to-sequence model with tied encoder and decoder parameters.
-
-This model first runs an RNN to encode encoder_inputs into a state vector, and
-then runs decoder, initialized with the last encoder state, on decoder_inputs.
-Encoder and decoder use the same RNN cell and share parameters.
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`decoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`loop_function`</b>: If not None, this function will be applied to the i-th
-  output in order to generate the (i+1)-th input, and decoder_inputs will be
-  ignored,
- except for the first element ("GO" symbol), see rnn_decoder for details.
-* <b>`dtype`</b>: The dtype of the initial state of the rnn cell (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; default: "tied_rnn_seq2seq".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_size] containing the generated outputs.
-* <b>`state`</b>: The state of each decoder cell in each time-step. This is a list
- with length len(decoder_inputs) -- one item for each time-step.
-  Each item is a 2D Tensor of shape [batch_size x cell.state_size].
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.linalg.md b/tensorflow/g3doc/api_docs/python/contrib.linalg.md
deleted file mode 100644
index 2060e85211..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.linalg.md
+++ /dev/null
@@ -1,4413 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Linear Algebra (contrib)
-[TOC]
-
-Linear algebra libraries. See the @{$python/contrib.linalg} guide.
-
-- - -
-
-### `class tf.contrib.linalg.LinearOperator` {#LinearOperator}
-
-Base class defining a [batch of] linear operator[s].
-
-Subclasses of `LinearOperator` provide access to common methods on a
-(batch) matrix, without the need to materialize the matrix. This allows:
-
-* Matrix-free computations
-* Operators that take advantage of special structure, while providing a
-  consistent API to users.
-
-#### Subclassing
-
-To enable a public method, subclasses should implement the leading-underscore
-version of the method. The argument signature should be identical except for
-the omission of `name="..."`. For example, to enable
-`apply(x, adjoint=False, name="apply")` a subclass should implement
-`_apply(x, adjoint=False)`.
-
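-As an illustration, a minimal hypothetical subclass that wraps a dense
-[batch] matrix might implement the shape methods plus `_apply` (a sketch,
-not part of the library):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import linalg
-
-class LinearOperatorWrappedMatrix(linalg.LinearOperator):
-  """Hypothetical operator backed by a dense [batch] matrix."""
-
-  def __init__(self, matrix, name="LinearOperatorWrappedMatrix"):
-    self._matrix = tf.convert_to_tensor(matrix)
-    super(LinearOperatorWrappedMatrix, self).__init__(
-        dtype=self._matrix.dtype,
-        graph_parents=[self._matrix],
-        name=name)
-
-  def _shape(self):
-    return self._matrix.get_shape()
-
-  def _shape_tensor(self):
-    return tf.shape(self._matrix)
-
-  def _apply(self, x, adjoint=False):
-    return tf.matmul(self._matrix, x, adjoint_a=adjoint)
-```
-
-With this in place, `apply` works through the public wrapper, while methods
-that were not implemented, such as `determinant`, raise `NotImplementedError`.
-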
-#### Performance contract
-
-Subclasses should implement a method only if it can be done with a reasonable
-performance increase over generic dense operations, either in time, parallel
-scalability, or memory usage. For example, if the determinant can only be
-computed using `tf.matrix_determinant(self.to_dense())`, then determinants
-should not be implemented.
-
-Class docstrings should contain an explanation of computational complexity.
-Since this is a high-performance library, attention should be paid to detail,
-and explanations can include constants as well as Big-O notation.
-
-#### Shape compatibility
-
-`LinearOperator` subclasses should operate on a [batch] matrix with
-compatible shape. Class docstrings should define what is meant by compatible
-shape. Some subclasses may not support batching.
-
-An example is:
-
-`x` is a batch matrix with compatible shape for `apply` if
-
-```
-operator.shape = [B1,...,Bb] + [M, N], b >= 0,
-x.shape = [B1,...,Bb] + [N, R]
-```
-
-`rhs` is a batch matrix with compatible shape for `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [M, N], b >= 0,
-rhs.shape = [B1,...,Bb] + [M, R]
-```
-
-#### Example docstring for subclasses.
-
-This operator acts like a (batch) matrix `A` with shape
-`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-an `m x n` matrix. Again, this matrix `A` may not be materialized, but for
-purposes of identifying and working with compatible arguments the shape is
-relevant.
-
-Examples:
-
-```python
-some_tensor = ... shape = ????
-operator = MyLinOp(some_tensor)
-
-operator.shape
-==> [2, 4, 4]
-
-operator.log_abs_determinant()
-==> Shape [2] Tensor
-
-x = ... Shape [2, 4, 5] Tensor
-
-operator.apply(x)
-==> Shape [2, 4, 5] Tensor
-```
-
-#### Shape compatibility
-
-This operator acts on batch matrices with compatible shape.
-FILL IN WHAT IS MEANT BY COMPATIBLE SHAPE
-
-#### Performance
-
-FILL THIS IN
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite, square`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.__init__(dtype, graph_parents=None, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, is_square=None, name=None)` {#LinearOperator.__init__}
-
-Initialize the `LinearOperator`.
-
-**This is a private method for subclass use.**
-**Subclasses should copy-paste this `__init__` documentation.**
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of this `LinearOperator`. Arguments to `apply` and
-  `solve` will have to be this type.
-* <b>`graph_parents`</b>: Python list of graph prerequisites of this
-  `LinearOperator`. Typically tensors that are passed during initialization.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose. If `dtype` is real, this is equivalent to being symmetric.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
-  meaning the real part of all eigenvalues is positive. We do not require
-  the operator to be self-adjoint to be positive-definite. See:
-  https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`is_square`</b>: Expect that this operator acts like square [batch] matrices.
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If any member of graph_parents is `None` or not a `Tensor`.
-* <b>`ValueError`</b>: If hints are set incorrectly.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.add_to_tensor(x, name='add_to_tensor')` {#LinearOperator.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.apply(x, adjoint=False, name='apply')` {#LinearOperator.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.assert_non_singular(name='assert_non_singular')` {#LinearOperator.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.assert_positive_definite(name='assert_positive_definite')` {#LinearOperator.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperator.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.batch_shape` {#LinearOperator.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperator.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.determinant(name='det')` {#LinearOperator.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.diag_part(name='diag_part')` {#LinearOperator.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.domain_dimension` {#LinearOperator.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperator.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.dtype` {#LinearOperator.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.graph_parents` {#LinearOperator.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.is_non_singular` {#LinearOperator.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.is_positive_definite` {#LinearOperator.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.is_self_adjoint` {#LinearOperator.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.is_square` {#LinearOperator.is_square}
-
-Return `True`/`False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.log_abs_determinant(name='log_abs_det')` {#LinearOperator.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.name` {#LinearOperator.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.range_dimension` {#LinearOperator.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperator.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.shape` {#LinearOperator.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.shape_tensor(name='shape_tensor')` {#LinearOperator.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.solve(rhs, adjoint=False, name='solve')` {#LinearOperator.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.tensor_rank` {#LinearOperator.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperator.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.to_dense(name='to_dense')` {#LinearOperator.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-
-- - -
-
-### `class tf.contrib.linalg.LinearOperatorDiag` {#LinearOperatorDiag}
-
-`LinearOperator` acting like a [batch] square diagonal matrix.
-
-This operator acts like a [batch] diagonal matrix `A` with shape
-`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-an `N x N` matrix. This matrix `A` is not materialized, but for
-purposes of broadcasting this shape will be relevant.
-
-`LinearOperatorDiag` is initialized with a (batch) vector.
-
-```python
-# Create a 2 x 2 diagonal linear operator.
-diag = [1., -1.]
-operator = LinearOperatorDiag(diag)
-
-operator.to_dense()
-==> [[1., 0.]
- [0., -1.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_abs_determinant()
-==> scalar Tensor
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor
-
-# Create a [2, 3] batch of 4 x 4 linear operators.
-diag = tf.random_normal(shape=[2, 3, 4])
-operator = LinearOperatorDiag(diag)
-
-# Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible
-# since the batch dimensions, [2, 1], are broadcast to
-# operator.batch_shape = [2, 3].
-y = tf.random_normal(shape=[2, 1, 4, 2])
-x = operator.solve(y)
-==> operator.apply(x) = y
-```
-
-#### Shape compatibility
-
-This operator acts on a [batch] matrix with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [N, N], with b >= 0
-x.shape = [C1,...,Cc] + [N, R],
-and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
-```
-
-#### Performance
-
-Suppose `operator` is a `LinearOperatorDiag` of shape `[N, N]`,
-and `x.shape = [N, R]`. Then
-
-* `operator.apply(x)` involves `N * R` multiplications.
-* `operator.solve(x)` involves `N` divisions and `N * R` multiplications.
-* `operator.determinant()` involves a size `N` `reduce_prod`.
-
-If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and
-`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.__init__(diag, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, name='LinearOperatorDiag')` {#LinearOperatorDiag.__init__}
-
-Initialize a `LinearOperatorDiag`.
-
-##### Args:
-
-
-* <b>`diag`</b>: Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0`, `N >= 0`.
- The diagonal of the operator. Allowed dtypes: `float32`, `float64`,
- `complex64`, `complex128`.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose. If `diag.dtype` is real, this is auto-set to `True`.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
-  meaning the real part of all eigenvalues is positive. We do not require
-  the operator to be self-adjoint to be positive-definite. See:
-  https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `diag.dtype` is not an allowed type.
-* <b>`ValueError`</b>: If `diag.dtype` is real, and `is_self_adjoint` is not `True`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorDiag.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.apply(x, adjoint=False, name='apply')` {#LinearOperatorDiag.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.assert_non_singular(name='assert_non_singular')` {#LinearOperatorDiag.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorDiag.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorDiag.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.batch_shape` {#LinearOperatorDiag.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorDiag.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.determinant(name='det')` {#LinearOperatorDiag.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.diag` {#LinearOperatorDiag.diag}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.diag_part(name='diag_part')` {#LinearOperatorDiag.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.domain_dimension` {#LinearOperatorDiag.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorDiag.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.dtype` {#LinearOperatorDiag.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.graph_parents` {#LinearOperatorDiag.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.is_non_singular` {#LinearOperatorDiag.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.is_positive_definite` {#LinearOperatorDiag.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.is_self_adjoint` {#LinearOperatorDiag.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.is_square` {#LinearOperatorDiag.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.log_abs_determinant(name='log_abs_det')` {#LinearOperatorDiag.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
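-For a diagonal operator this is the sum of `log(abs(diag))`; a minimal
-sketch (assuming `tf.contrib.linalg`):
-
-```python
-import tensorflow as tf
-
-operator = tf.contrib.linalg.LinearOperatorDiag([-2., 4.])
-# log|det| = log(2.) + log(4.) = log(8.)
-log_abs_det = operator.log_abs_determinant()
-```
-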
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.name` {#LinearOperatorDiag.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.range_dimension` {#LinearOperatorDiag.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorDiag.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.shape` {#LinearOperatorDiag.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.shape_tensor(name='shape_tensor')` {#LinearOperatorDiag.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorDiag.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
-  `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `self.is_square` is `False`.
-
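-For a diagonal operator, solving reduces to elementwise division of `rhs`
-by the diagonal; a minimal sketch (assuming `tf.contrib.linalg`):
-
-```python
-import tensorflow as tf
-
-operator = tf.contrib.linalg.LinearOperatorDiag([2., 4.])
-rhs = tf.constant([[6.], [8.]])  # Shape [2, 1], i.e. R = 1.
-x = operator.solve(rhs)          # [[3.], [2.]], since A x = rhs.
-```
-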
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.tensor_rank` {#LinearOperatorDiag.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorDiag.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.to_dense(name='to_dense')` {#LinearOperatorDiag.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-
-- - -
-
-### `class tf.contrib.linalg.LinearOperatorIdentity` {#LinearOperatorIdentity}
-
-`LinearOperator` acting like a [batch] square identity matrix.
-
-This operator acts like a [batch] identity matrix `A` with shape
-`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-an `N x N` matrix. This matrix `A` is not materialized, but for
-purposes of broadcasting this shape will be relevant.
-
-`LinearOperatorIdentity` is initialized with `num_rows`, and optional
-`batch_shape` and `dtype` arguments. If `batch_shape` is `None`, this
-operator efficiently passes through all arguments. If `batch_shape` is
-provided, broadcasting may occur, which will require making copies.
-
-```python
-# Create a 2 x 2 identity matrix.
-operator = LinearOperatorIdentity(num_rows=2, dtype=tf.float32)
-
-operator.to_dense()
-==> [[1., 0.]
- [0., 1.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_abs_determinant()
-==> 0.
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor, same as x.
-
-y = tf.random_normal(shape=[3, 2, 4])
-# Note that y.shape is compatible with operator.shape because operator.shape
-# is broadcast to [3, 2, 2].
-# This broadcast does NOT require copying data, since we can infer that y
-# will be passed through without changing shape. We are always able to infer
-# this if the operator has no batch_shape.
-x = operator.solve(y)
-==> Shape [3, 2, 4] Tensor, same as y.
-
-# Create a 2-batch of 2x2 identity matrices
-operator = LinearOperatorIdentity(num_rows=2, batch_shape=[2])
-operator.to_dense()
-==> [[[1., 0.]
- [0., 1.]],
- [[1., 0.]
- [0., 1.]]]
-
-# Here, even though the operator has a batch shape, the input is the same as
-# the output, so x can be passed through without a copy. The operator is able
-# to detect that no broadcast is necessary because both x and the operator
-# have statically defined shape.
-x = ... Shape [2, 2, 3]
-operator.apply(x)
-==> Shape [2, 2, 3] Tensor, same as x
-
-# Here the operator and x have different batch_shape, and are broadcast.
-# This requires a copy, since the output is different size than the input.
-x = ... Shape [1, 2, 3]
-operator.apply(x)
-==> Shape [2, 2, 3] Tensor, equal to [x, x]
-```
-
-#### Shape compatibility
-
-This operator acts on [batch] matrices with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [N, N], with b >= 0
-x.shape = [C1,...,Cc] + [N, R],
-and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
-```
-
-#### Performance
-
-If `batch_shape` initialization arg is `None`:
-
-* `operator.apply(x)` is `O(1)`
-* `operator.solve(x)` is `O(1)`
-* `operator.determinant()` is `O(1)`
-
-If `batch_shape` initialization arg is provided, and static checks cannot
-rule out the need to broadcast:
-
-* `operator.apply(x)` is `O(D1*...*Dd*N*R)`
-* `operator.solve(x)` is `O(D1*...*Dd*N*R)`
-* `operator.determinant()` is `O(B1*...*Bb)`
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
-  way.
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.__init__(num_rows, batch_shape=None, dtype=None, is_non_singular=True, is_self_adjoint=True, is_positive_definite=True, assert_proper_shapes=False, name='LinearOperatorIdentity')` {#LinearOperatorIdentity.__init__}
-
-Initialize a `LinearOperatorIdentity`.
-
-The `LinearOperatorIdentity` is initialized with arguments defining `dtype`
-and shape.
-
-This operator is able to broadcast the leading (batch) dimensions, which
-sometimes requires copying data. If `batch_shape` is `None`, the operator
-can take arguments of any batch shape without copying. See examples.
-
-##### Args:
-
-
-* <b>`num_rows`</b>: Scalar non-negative integer `Tensor`. Number of rows in the
- corresponding identity matrix.
-* <b>`batch_shape`</b>: Optional `1-D` integer `Tensor`. The shape of the leading
- dimensions. If `None`, this operator has no leading dimensions.
-* <b>`dtype`</b>: Data type of the matrix that this operator represents.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite.
-* <b>`assert_proper_shapes`</b>: Python `bool`. If `False`, only perform static
- checks that initialization and method arguments have proper shape.
- If `True`, and static checks are inconclusive, add asserts to the graph.
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `num_rows` is determined statically to be non-scalar, or
- negative.
-* <b>`ValueError`</b>: If `batch_shape` is determined statically to not be 1-D, or
- negative.
-* <b>`ValueError`</b>: If any of the following is not `True`:
- `{is_self_adjoint, is_non_singular, is_positive_definite}`.
-
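-A minimal usage sketch (assumes TF 1.x graph mode; the placeholder-fed
-`num_rows` is illustrative only):
-
-```python
-import tensorflow as tf
-
-# Static num_rows: all shape checks happen at graph construction time.
-operator = tf.contrib.linalg.LinearOperatorIdentity(num_rows=2, dtype=tf.float32)
-
-# Dynamic num_rows: static checks are inconclusive, so with
-# assert_proper_shapes=True, runtime asserts are added to the graph.
-num_rows = tf.placeholder(tf.int32, shape=[])
-dynamic_operator = tf.contrib.linalg.LinearOperatorIdentity(
-    num_rows=num_rows, dtype=tf.float32, assert_proper_shapes=True)
-```
-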
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.add_to_tensor(mat, name='add_to_tensor')` {#LinearOperatorIdentity.add_to_tensor}
-
-Add matrix represented by this operator to `mat`. Equivalent to `I + mat`.
-
-##### Args:
-
-
-* <b>`mat`</b>: `Tensor` with same `dtype` and shape broadcastable to `self`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
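-Since the operator is the identity, this just adds 1 to the diagonal of
-`mat`; a minimal sketch (assuming `tf.contrib.linalg`):
-
-```python
-import tensorflow as tf
-
-operator = tf.contrib.linalg.LinearOperatorIdentity(num_rows=2, dtype=tf.float32)
-mat = tf.constant([[1., 2.], [3., 4.]])
-summed = operator.add_to_tensor(mat)  # [[2., 2.], [3., 5.]]
-```
-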
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.apply(x, adjoint=False, name='apply')` {#LinearOperatorIdentity.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.assert_non_singular(name='assert_non_singular')` {#LinearOperatorIdentity.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorIdentity.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorIdentity.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.batch_shape` {#LinearOperatorIdentity.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorIdentity.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.determinant(name='det')` {#LinearOperatorIdentity.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.diag_part(name='diag_part')` {#LinearOperatorIdentity.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.domain_dimension` {#LinearOperatorIdentity.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorIdentity.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.dtype` {#LinearOperatorIdentity.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.graph_parents` {#LinearOperatorIdentity.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.is_non_singular` {#LinearOperatorIdentity.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.is_positive_definite` {#LinearOperatorIdentity.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.is_self_adjoint` {#LinearOperatorIdentity.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.is_square` {#LinearOperatorIdentity.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.log_abs_determinant(name='log_abs_det')` {#LinearOperatorIdentity.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.name` {#LinearOperatorIdentity.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.range_dimension` {#LinearOperatorIdentity.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorIdentity.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.shape` {#LinearOperatorIdentity.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.shape_tensor(name='shape_tensor')` {#LinearOperatorIdentity.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorIdentity.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
-  `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.tensor_rank` {#LinearOperatorIdentity.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorIdentity.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.to_dense(name='to_dense')` {#LinearOperatorIdentity.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-
-- - -
-
-### `class tf.contrib.linalg.LinearOperatorScaledIdentity` {#LinearOperatorScaledIdentity}
-
-`LinearOperator` acting like a scaled [batch] identity matrix `A = c I`.
-
-This operator acts like a scaled [batch] identity matrix `A` with shape
-`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-a scaled version of the `N x N` identity matrix.
-
-`LinearOperatorScaledIdentity` is initialized with `num_rows` and a `multiplier`
-(a `Tensor`) of shape `[B1,...,Bb]`. `N` is set to `num_rows`, and the
-`multiplier` determines the scale for each batch member.
-
-```python
-# Create a 2 x 2 scaled identity matrix.
-operator = LinearOperatorScaledIdentity(num_rows=2, multiplier=3.)
-
-operator.to_dense()
-==> [[3., 0.]
- [0., 3.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_abs_determinant()
-==> 2 * Log[3]
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> 3 * x
-
-y = tf.random_normal(shape=[3, 2, 4])
-# Note that y.shape is compatible with operator.shape because operator.shape
-# is broadcast to [3, 2, 2].
-x = operator.solve(y)
-==> y / 3, since 3 * x = y
-
-# Create a [2]-batch of 2 x 2 scaled identity matrices, each 5 * I.
-operator = LinearOperatorScaledIdentity(num_rows=2, multiplier=[5., 5.])
-operator.to_dense()
-==> [[[5., 0.]
- [0., 5.]],
- [[5., 0.]
- [0., 5.]]]
-
-x = ... Shape [2, 2, 3]
-operator.apply(x)
-==> 5 * x
-
-# Here the operator and x have different batch_shape, and are broadcast.
-x = ... Shape [1, 2, 3]
-operator.apply(x)
-==> 5 * x
-```
-
-#### Shape compatibility
-
-This operator acts on [batch] matrices with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [N, N], with b >= 0
-x.shape = [C1,...,Cc] + [N, R],
-and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
-```
-
-#### Performance
-
-* `operator.apply(x)` is `O(D1*...*Dd*N*R)`
-* `operator.solve(x)` is `O(D1*...*Dd*N*R)`
-* `operator.determinant()` is `O(D1*...*Dd)`
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
-  way.
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.__init__(num_rows, multiplier, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, assert_proper_shapes=False, name='LinearOperatorScaledIdentity')` {#LinearOperatorScaledIdentity.__init__}
-
-Initialize a `LinearOperatorScaledIdentity`.
-
-The `LinearOperatorScaledIdentity` is initialized with `num_rows`, which
-determines the size of each identity matrix, and a `multiplier`,
-which defines `dtype`, batch shape, and scale of each matrix.
-
-This operator is able to broadcast the leading (batch) dimensions.
-
-##### Args:
-
-
-* <b>`num_rows`</b>: Scalar non-negative integer `Tensor`. Number of rows in the
- corresponding identity matrix.
-* <b>`multiplier`</b>: `Tensor` of shape `[B1,...,Bb]`, or `[]` (a scalar).
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite.
-* <b>`assert_proper_shapes`</b>: Python `bool`. If `False`, only perform static
- checks that initialization and method arguments have proper shape.
- If `True`, and static checks are inconclusive, add asserts to the graph.
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `num_rows` is determined statically to be non-scalar, or
- negative.
-
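-A minimal sketch (assuming `tf.contrib.linalg`) where `multiplier` alone
-supplies the batch shape:
-
-```python
-import tensorflow as tf
-
-# A [2]-batch of 3 x 3 operators: 2 * I and 5 * I.
-operator = tf.contrib.linalg.LinearOperatorScaledIdentity(
-    num_rows=3, multiplier=[2., 5.])
-operator.batch_shape  # TensorShape([2])
-```
-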
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.add_to_tensor(mat, name='add_to_tensor')` {#LinearOperatorScaledIdentity.add_to_tensor}
-
-Add matrix represented by this operator to `mat`. Equivalent to `c I + mat`.
-
-##### Args:
-
-
-* <b>`mat`</b>: `Tensor` with same `dtype` and shape broadcastable to `self`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.apply(x, adjoint=False, name='apply')` {#LinearOperatorScaledIdentity.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.assert_non_singular(name='assert_non_singular')` {#LinearOperatorScaledIdentity.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorScaledIdentity.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorScaledIdentity.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.batch_shape` {#LinearOperatorScaledIdentity.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorScaledIdentity.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.determinant(name='det')` {#LinearOperatorScaledIdentity.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
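-For this operator `det(c * I_N) = c**N`; a minimal sketch (assuming
-`tf.contrib.linalg`):
-
-```python
-import tensorflow as tf
-
-operator = tf.contrib.linalg.LinearOperatorScaledIdentity(
-    num_rows=3, multiplier=2.)
-det = operator.determinant()  # Scalar Tensor; evaluates to 2.**3 = 8.
-```
-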
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.diag_part(name='diag_part')` {#LinearOperatorScaledIdentity.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.domain_dimension` {#LinearOperatorScaledIdentity.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorScaledIdentity.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.dtype` {#LinearOperatorScaledIdentity.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.graph_parents` {#LinearOperatorScaledIdentity.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.is_non_singular` {#LinearOperatorScaledIdentity.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.is_positive_definite` {#LinearOperatorScaledIdentity.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.is_self_adjoint` {#LinearOperatorScaledIdentity.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.is_square` {#LinearOperatorScaledIdentity.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.log_abs_determinant(name='log_abs_det')` {#LinearOperatorScaledIdentity.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.multiplier` {#LinearOperatorScaledIdentity.multiplier}
-
-The [batch] scalar `Tensor`, `c` in `cI`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.name` {#LinearOperatorScaledIdentity.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.range_dimension` {#LinearOperatorScaledIdentity.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorScaledIdentity.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.shape` {#LinearOperatorScaledIdentity.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.shape_tensor(name='shape_tensor')` {#LinearOperatorScaledIdentity.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorScaledIdentity.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
-  `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `self.is_square` is `False`.
-
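-For this operator, solving is division by the multiplier; a minimal sketch
-(assuming `tf.contrib.linalg`):
-
-```python
-import tensorflow as tf
-
-operator = tf.contrib.linalg.LinearOperatorScaledIdentity(
-    num_rows=2, multiplier=4.)
-rhs = tf.constant([[8.], [4.]])
-x = operator.solve(rhs)  # [[2.], [1.]], i.e. rhs / 4.
-```
-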
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.tensor_rank` {#LinearOperatorScaledIdentity.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorScaledIdentity.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.to_dense(name='to_dense')` {#LinearOperatorScaledIdentity.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-
-- - -
-
-### `class tf.contrib.linalg.LinearOperatorMatrix` {#LinearOperatorMatrix}
-
-`LinearOperator` that wraps a [batch] matrix.
-
-This operator wraps a [batch] matrix `A` (which is a `Tensor`) with shape
-`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-an `M x N` matrix.
-
-```python
-# Create a 2 x 2 linear operator.
-matrix = [[1., 2.], [3., 4.]]
-operator = LinearOperatorMatrix(matrix)
-
-operator.to_dense()
-==> [[1., 2.]
- [3., 4.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_abs_determinant()
-==> scalar Tensor
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor
-
-# Create a [2, 3] batch of 4 x 4 linear operators.
-matrix = tf.random_normal(shape=[2, 3, 4, 4])
-operator = LinearOperatorMatrix(matrix)
-```
-
-#### Shape compatibility
-
-This operator acts on [batch] matrices with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [M, N], with b >= 0
-x.shape = [B1,...,Bb] + [N, R], with R >= 0.
-```
-
-#### Performance
-
-`LinearOperatorMatrix` has exactly the same performance as would be achieved
-by using standard `TensorFlow` matrix ops. Intelligent choices are made
-based on the following initialization hints.
-
-* If `dtype` is real, and `is_self_adjoint` and `is_positive_definite`, a
- Cholesky factorization is used for the determinant and solve.
-
-In all cases, suppose `operator` is a `LinearOperatorMatrix` of shape
-`[M, N]`, and `x.shape = [N, R]`. Then
-
-* `operator.apply(x)` is `O(M * N * R)`.
-* If `M=N`, `operator.solve(x)` is `O(N^3 * R)`.
-* If `M=N`, `operator.determinant()` is `O(N^3)`.
-
-If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and
-`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
-
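-A minimal sketch of supplying the hints that enable the Cholesky path
-(assuming `tf.contrib.linalg`; recall the hints are promises, not runtime
-asserts):
-
-```python
-import tensorflow as tf
-
-matrix = tf.constant([[2., 1.], [1., 2.]])  # Symmetric positive definite.
-operator = tf.contrib.linalg.LinearOperatorMatrix(
-    matrix, is_self_adjoint=True, is_positive_definite=True)
-# With a real dtype and both hints set, determinant and solve may use a
-# Cholesky factorization internally.
-det = operator.determinant()
-```
-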
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
-  way.
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.__init__(matrix, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, name='LinearOperatorMatrix')` {#LinearOperatorMatrix.__init__}
-
-Initialize a `LinearOperatorMatrix`.
-
-##### Args:
-
-
-* <b>`matrix`</b>: Shape `[B1,...,Bb, M, N]` with `b >= 0`, `M, N >= 0`.
- Allowed dtypes: `float32`, `float64`, `complex64`, `complex128`.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
- meaning the real part of all eigenvalues is positive. We do not require
- the operator to be self-adjoint to be positive-definite. See:
-  https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `matrix.dtype` is not an allowed type.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorMatrix.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.apply(x, adjoint=False, name='apply')` {#LinearOperatorMatrix.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.assert_non_singular(name='assert_non_singular')` {#LinearOperatorMatrix.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorMatrix.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorMatrix.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.batch_shape` {#LinearOperatorMatrix.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorMatrix.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.determinant(name='det')` {#LinearOperatorMatrix.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.diag_part(name='diag_part')` {#LinearOperatorMatrix.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.domain_dimension` {#LinearOperatorMatrix.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorMatrix.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.dtype` {#LinearOperatorMatrix.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.graph_parents` {#LinearOperatorMatrix.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.is_non_singular` {#LinearOperatorMatrix.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.is_positive_definite` {#LinearOperatorMatrix.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.is_self_adjoint` {#LinearOperatorMatrix.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.is_square` {#LinearOperatorMatrix.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.log_abs_determinant(name='log_abs_det')` {#LinearOperatorMatrix.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.name` {#LinearOperatorMatrix.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.range_dimension` {#LinearOperatorMatrix.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorMatrix.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.shape` {#LinearOperatorMatrix.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.shape_tensor(name='shape_tensor')` {#LinearOperatorMatrix.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorMatrix.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
-  `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.tensor_rank` {#LinearOperatorMatrix.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorMatrix.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.to_dense(name='to_dense')` {#LinearOperatorMatrix.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-
-- - -
-
-### `class tf.contrib.linalg.LinearOperatorTriL` {#LinearOperatorTriL}
-
-`LinearOperator` acting like a [batch] square lower triangular matrix.
-
-This operator acts like a [batch] lower triangular matrix `A` with shape
-`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-an `N x N` matrix.
-
-`LinearOperatorTriL` is initialized with a `Tensor` having dimensions
-`[B1,...,Bb, N, N]`. The upper triangle of the last two dimensions is ignored.
-
-```python
-# Create a 2 x 2 lower-triangular linear operator.
-tril = [[1., 2.], [3., 4.]]
-operator = LinearOperatorTriL(tril)
-
-# The upper triangle is ignored.
-operator.to_dense()
-==> [[1., 0.]
- [3., 4.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_abs_determinant()
-==> scalar Tensor
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor
-
-# Create a [2, 3] batch of 4 x 4 linear operators.
-tril = tf.random_normal(shape=[2, 3, 4, 4])
-operator = LinearOperatorTriL(tril)
-```
-
-#### Shape compatibility
-
-This operator acts on [batch] matrices with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [N, N], with b >= 0
-x.shape = [B1,...,Bb] + [N, R], with R >= 0.
-```
-
-#### Performance
-
-Suppose `operator` is a `LinearOperatorTriL` of shape `[N, N]`,
-and `x.shape = [N, R]`. Then
-
-* `operator.apply(x)` involves `N^2 * R` multiplications.
-* `operator.solve(x)` involves `N * R` size `N` back-substitutions.
-* `operator.determinant()` involves a size `N` `reduce_prod`.
-
-If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and
-`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
-
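-A minimal sketch of the inexpensive triangular solve (assuming
-`tf.contrib.linalg`):
-
-```python
-import tensorflow as tf
-
-tril = tf.constant([[2., 0.], [3., 4.]])
-operator = tf.contrib.linalg.LinearOperatorTriL(tril)
-rhs = tf.constant([[2.], [11.]])
-# Solved by substitution: x = [[1.], [2.]], since 2*1 = 2 and 3*1 + 4*2 = 11.
-x = operator.solve(rhs)
-```
-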
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
-  way.
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.__init__(tril, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, name='LinearOperatorTriL')` {#LinearOperatorTriL.__init__}
-
-Initialize a `LinearOperatorTriL`.
-
-##### Args:
-
-
-* <b>`tril`</b>: Shape `[B1,...,Bb, N, N]` with `b >= 0`, `N >= 0`.
- The lower triangular part of `tril` defines this operator. The strictly
- upper triangle is ignored. Allowed dtypes: `float32`, `float64`.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
- This operator is non-singular if and only if its diagonal elements are
- all non-zero.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose. This operator is self-adjoint only if it is diagonal with
- real-valued diagonal entries. In this case it is advised to use
- `LinearOperatorDiag`.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
- meaning the real part of all eigenvalues is positive. We do not require
- the operator to be self-adjoint to be positive-definite. See:
-  https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `tril.dtype` is not an allowed type.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorTriL.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.apply(x, adjoint=False, name='apply')` {#LinearOperatorTriL.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.assert_non_singular(name='assert_non_singular')` {#LinearOperatorTriL.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorTriL.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorTriL.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.batch_shape` {#LinearOperatorTriL.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorTriL.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.determinant(name='det')` {#LinearOperatorTriL.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.diag_part(name='diag_part')` {#LinearOperatorTriL.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.domain_dimension` {#LinearOperatorTriL.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorTriL.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.dtype` {#LinearOperatorTriL.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.graph_parents` {#LinearOperatorTriL.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.is_non_singular` {#LinearOperatorTriL.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.is_positive_definite` {#LinearOperatorTriL.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.is_self_adjoint` {#LinearOperatorTriL.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.is_square` {#LinearOperatorTriL.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.log_abs_determinant(name='log_abs_det')` {#LinearOperatorTriL.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.name` {#LinearOperatorTriL.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.range_dimension` {#LinearOperatorTriL.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorTriL.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.shape` {#LinearOperatorTriL.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.shape_tensor(name='shape_tensor')` {#LinearOperatorTriL.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorTriL.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.tensor_rank` {#LinearOperatorTriL.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorTriL.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.to_dense(name='to_dense')` {#LinearOperatorTriL.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-
-- - -
-
-### `class tf.contrib.linalg.LinearOperatorUDVHUpdate` {#LinearOperatorUDVHUpdate}
-
-Perturb a `LinearOperator` with a rank `K` update.
-
-This operator acts like a [batch] matrix `A` with shape
-`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is
-an `M x N` matrix.
-
-`LinearOperatorUDVHUpdate` represents `A = L + U D V^H`, where
-
-```
-L is a LinearOperator representing [batch] M x N matrices.
-U is a [batch] M x K matrix. Typically K << M.
-D is a [batch] K x K matrix.
-V is a [batch] N x K matrix. Typically K << N.
-V^H is the Hermitian transpose (adjoint) of V.
-```
-
-If `M = N`, determinants and solves are done using the matrix determinant
-lemma and Woodbury identities, and thus require `L` and `D` to be
-non-singular.
-
-Solves and determinants will be attempted unless the `is_non_singular`
-property of `L` and `D` is `False`.
-
-In the event that `L` and `D` are positive-definite, and `U = V`, solves and
-determinants can be done using a Cholesky factorization.
-
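-For reference, a sketch of the two identities involved, assuming `L` and `D`
-are non-singular, with `C := D^{-1} + V^H L^{-1} U` the `K x K` capacitance
-matrix:
-
-```
-det(L + U D V^H)   = det(C) * det(D) * det(L)
-(L + U D V^H)^{-1} = L^{-1} - L^{-1} U C^{-1} V^H L^{-1}
-```
-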
-```python
-# Create a 3 x 3 diagonal linear operator.
-diag_operator = LinearOperatorDiag(
- diag=[1., 2., 3.], is_non_singular=True, is_self_adjoint=True,
- is_positive_definite=True)
-
-# Perturb with a rank 2 perturbation
-operator = LinearOperatorUDVHUpdate(
-    base_operator=diag_operator,
- u=[[1., 2.], [-1., 3.], [0., 0.]],
- diag=[11., 12.],
- v=[[1., 2.], [-1., 3.], [10., 10.]])
-
-operator.shape
-==> [3, 3]
-
-operator.log_determinant()
-==> scalar Tensor
-
-x = ... Shape [3, 4] Tensor
-operator.apply(x)
-==> Shape [3, 4] Tensor
-```
-
-### Shape compatibility
-
-This operator acts on a [batch] matrix with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [M, N], with b >= 0
-x.shape = [B1,...,Bb] + [N, R], with R >= 0.
-```
-
-### Performance
-
-Suppose `operator` is a `LinearOperatorUDVHUpdate` of shape `[M, N]`,
-made from a rank `K` update of `base_operator`, which performs `.apply(x)` on
-`x` having `x.shape = [N, R]` with `O(L_apply*N*R)` complexity (and similarly
-for `solve` and `determinant`). Then, if `x.shape = [N, R]`:
-
-* `operator.apply(x)` is `O(L_apply*N*R + K*N*R)`
-
-and if `M = N`,
-
-* `operator.solve(x)` is `O(L_apply*N*R + N*K*R + K^2*R + K^3)`
-* `operator.determinant()` is `O(L_determinant + L_solve*N*K + K^2*N + K^3)`
-
-If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and
-`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite, diag_positive, square`.
-These have the following meaning:
-
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.__init__(base_operator, u, diag=None, v=None, is_diag_positive=None, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, is_square=None, name='LinearOperatorUDVHUpdate')` {#LinearOperatorUDVHUpdate.__init__}
-
-Initialize a `LinearOperatorUDVHUpdate`.
-
-This creates a `LinearOperator` of the form `A = L + U D V^H`, with
-`L` a `LinearOperator`, `U, V` both [batch] matrices, and `D` a [batch]
-diagonal matrix.
-
-If `L` is non-singular, solves and determinants are available.
-Solves/determinants both involve a solve/determinant of a `K x K` system.
-In the event that L and D are self-adjoint positive-definite, and U = V,
-this can be done using a Cholesky factorization. The user should set the
-`is_X` matrix property hints, which will trigger the appropriate code path.
-
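-For instance, a sketch reusing `diag_operator` from the class example above;
-with `v` unset (so `u = v`) and `is_diag_positive=True`, solves may use the
-Cholesky path:
-
-```python
-operator = LinearOperatorUDVHUpdate(
-    base_operator=diag_operator,  # self-adjoint and positive-definite
-    u=[[1., 2.], [-1., 3.], [0., 0.]],
-    diag=[11., 12.],
-    is_diag_positive=True)        # promises D > 0
-```
-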
-##### Args:
-
-
-* <b>`base_operator`</b>: Shape `[B1,...,Bb, M, N]` real `float32` or `float64`
- `LinearOperator`. This is `L` above.
-* <b>`u`</b>: Shape `[B1,...,Bb, M, K]` `Tensor` of same `dtype` as `base_operator`.
- This is `U` above.
-* <b>`diag`</b>: Optional shape `[B1,...,Bb, K]` `Tensor` with same `dtype` as
- `base_operator`. This is the diagonal of `D` above.
- Defaults to `D` being the identity operator.
-* <b>`v`</b>: Optional `Tensor` of same `dtype` as `u` and shape `[B1,...,Bb, N, K]`.
-  Defaults to `v = u`, in which case the perturbation is symmetric.
-  If `M != N`, then `v` must be set since the perturbation is not square.
-* <b>`is_diag_positive`</b>: Python `bool`. If `True`, expect `diag > 0`.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
- Default is `None`, unless `is_positive_definite` is auto-set to be
- `True` (see below).
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its Hermitian
- transpose. Default is `None`, unless `base_operator` is self-adjoint
- and `v = None` (meaning `u=v`), in which case this defaults to `True`.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite.
-  Default is `None`, unless `base_operator` is positive-definite,
-  `v = None` (meaning `u = v`), and `is_diag_positive` is `True`, in which
-  case this defaults to `True`.
-* <b>`is_square`</b>: Expect that this operator acts like square [batch] matrices.
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `is_X` flags are set in an inconsistent way.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorUDVHUpdate.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.apply(x, adjoint=False, name='apply')` {#LinearOperatorUDVHUpdate.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.assert_non_singular(name='assert_non_singular')` {#LinearOperatorUDVHUpdate.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorUDVHUpdate.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorUDVHUpdate.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.base_operator` {#LinearOperatorUDVHUpdate.base_operator}
-
-If this operator is `A = L + U D V^H`, this is the `L`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.batch_shape` {#LinearOperatorUDVHUpdate.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorUDVHUpdate.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.determinant(name='det')` {#LinearOperatorUDVHUpdate.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.diag_arg` {#LinearOperatorUDVHUpdate.diag_arg}
-
-If this operator is `A = L + U D V^H`, this is the diagonal of `D`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.diag_operator` {#LinearOperatorUDVHUpdate.diag_operator}
-
-If this operator is `A = L + U D V^H`, this is `D`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.diag_part(name='diag_part')` {#LinearOperatorUDVHUpdate.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.domain_dimension` {#LinearOperatorUDVHUpdate.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorUDVHUpdate.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.dtype` {#LinearOperatorUDVHUpdate.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.graph_parents` {#LinearOperatorUDVHUpdate.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_diag_positive` {#LinearOperatorUDVHUpdate.is_diag_positive}
-
-If this operator is `A = L + U D V^H`, this hints `D > 0` elementwise.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_non_singular` {#LinearOperatorUDVHUpdate.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_positive_definite` {#LinearOperatorUDVHUpdate.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_self_adjoint` {#LinearOperatorUDVHUpdate.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_square` {#LinearOperatorUDVHUpdate.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.log_abs_determinant(name='log_abs_det')` {#LinearOperatorUDVHUpdate.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.name` {#LinearOperatorUDVHUpdate.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.range_dimension` {#LinearOperatorUDVHUpdate.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorUDVHUpdate.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.shape` {#LinearOperatorUDVHUpdate.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.shape_tensor(name='shape_tensor')` {#LinearOperatorUDVHUpdate.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorUDVHUpdate.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.tensor_rank` {#LinearOperatorUDVHUpdate.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorUDVHUpdate.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.to_dense(name='to_dense')` {#LinearOperatorUDVHUpdate.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.u` {#LinearOperatorUDVHUpdate.u}
-
-If this operator is `A = L + U D V^H`, this is the `U`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.v` {#LinearOperatorUDVHUpdate.v}
-
-If this operator is `A = L + U D V^H`, this is the `V`.
-
-
-
-- - -
-
-### `class tf.contrib.linalg.LinearOperatorComposition` {#LinearOperatorComposition}
-
-Composes one or more `LinearOperators`.
-
-This operator composes one or more linear operators `[op1,...,opJ]`,
-building a new `LinearOperator` with action defined by:
-
-```
-op_composed(x) := op1(op2(...(opJ(x))...))
-```
-
-If `opj` acts like [batch] matrix `Aj`, then `op_composed` acts like the
-[batch] matrix formed with the multiplication `A1 A2...AJ`.
-
-If `opj` has shape `batch_shape_j + [M_j, N_j]`, then we must have
-`N_j = M_{j+1}`, in which case the composed operator has shape equal to
-`broadcast_batch_shape + [M_1, N_J]`, where `broadcast_batch_shape` is the
-mutual broadcast of `batch_shape_j`, `j = 1,...,J`, assuming the intermediate
-batch shapes broadcast. Even if the composed shape is well defined, the
-composed operator's methods may fail due to lack of broadcasting ability in
-the defining operators' methods.
-
-```python
-# Create a 2 x 2 linear operator composed of two 2 x 2 operators.
-operator_1 = LinearOperatorMatrix([[1., 2.], [3., 4.]])
-operator_2 = LinearOperatorMatrix([[1., 0.], [0., 1.]])
-operator = LinearOperatorComposition([operator_1, operator_2])
-
-operator.to_dense()
-==> [[1., 2.]
- [3., 4.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_determinant()
-==> scalar Tensor
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor
-
-# Create a [2, 3] batch of 4 x 5 linear operators.
-matrix_45 = tf.random_normal(shape=[2, 3, 4, 5])
-operator_45 = LinearOperatorMatrix(matrix_45)
-
-# Create a [2, 3] batch of 5 x 6 linear operators.
-matrix_56 = tf.random_normal(shape=[2, 3, 5, 6])
-operator_56 = LinearOperatorMatrix(matrix_56)
-
-# Compose to create a [2, 3] batch of 4 x 6 operators.
-operator_46 = LinearOperatorComposition([operator_45, operator_56])
-
-# Create a shape [2, 3, 6, 2] Tensor.
-x = tf.random_normal(shape=[2, 3, 6, 2])
-operator_46.apply(x)
-==> Shape [2, 3, 4, 2] Tensor
-```
-
-#### Performance
-
-The performance of `LinearOperatorComposition` on any operation is equal to
-the sum of the individual operators' operations.
-
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.__init__(operators, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, name=None)` {#LinearOperatorComposition.__init__}
-
-Initialize a `LinearOperatorComposition`.
-
-`LinearOperatorComposition` is initialized with a list of operators
-`[op_1,...,op_J]`. For the `apply` method to be well defined, the
-composition `op_i.apply(op_{i+1}(x))` must be defined. Other methods have
-similar constraints.
-
-##### Args:
-
-
-* <b>`operators`</b>: Iterable of `LinearOperator` objects, each with
-  the same `dtype` and composable shape.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its Hermitian
- transpose.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
- meaning the real part of all eigenvalues is positive. We do not require
-  the operator to be self-adjoint to be positive-definite. See:
-  https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`name`</b>: A name for this `LinearOperator`. Default is the individual
-  operators' names joined with `_o_`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the operators do not all have the same `dtype`.
-* <b>`ValueError`</b>: If `operators` is empty.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorComposition.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.apply(x, adjoint=False, name='apply')` {#LinearOperatorComposition.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.assert_non_singular(name='assert_non_singular')` {#LinearOperatorComposition.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorComposition.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorComposition.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.batch_shape` {#LinearOperatorComposition.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorComposition.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.determinant(name='det')` {#LinearOperatorComposition.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.diag_part(name='diag_part')` {#LinearOperatorComposition.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.domain_dimension` {#LinearOperatorComposition.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorComposition.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.dtype` {#LinearOperatorComposition.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.graph_parents` {#LinearOperatorComposition.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.is_non_singular` {#LinearOperatorComposition.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.is_positive_definite` {#LinearOperatorComposition.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.is_self_adjoint` {#LinearOperatorComposition.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.is_square` {#LinearOperatorComposition.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.log_abs_determinant(name='log_abs_det')` {#LinearOperatorComposition.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.name` {#LinearOperatorComposition.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.operators` {#LinearOperatorComposition.operators}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.range_dimension` {#LinearOperatorComposition.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorComposition.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.shape` {#LinearOperatorComposition.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.shape_tensor(name='shape_tensor')` {#LinearOperatorComposition.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorComposition.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.tensor_rank` {#LinearOperatorComposition.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorComposition.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.to_dense(name='to_dense')` {#LinearOperatorComposition.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.losses.md b/tensorflow/g3doc/api_docs/python/contrib.losses.md
deleted file mode 100644
index eef457bd1a..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.losses.md
+++ /dev/null
@@ -1,472 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Losses (contrib)
-[TOC]
-
-Ops for building neural network losses. See @{$python/contrib.losses}.
-
-## Other Functions and Classes
-- - -
-
-### `tf.contrib.losses.absolute_difference(*args, **kwargs)` {#absolute_difference}
-
-Adds an Absolute Difference loss to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.absolute_difference instead.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided, then
-the loss is simply scaled by the given value. If `weights` is a tensor of size
-[batch_size], then the total loss for each sample of the batch is rescaled
-by the corresponding element in the `weights` vector. If the shape of
-`weights` matches the shape of `predictions`, then the loss of each
-measurable element of `predictions` is scaled by the corresponding value of
-`weights`.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted outputs.
-* <b>`labels`</b>: The ground truth output tensor, same dimensions as `predictions`.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
-  [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `labels` or
- if the shape of `weights` is invalid.
-
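-A minimal usage sketch (the tensor values here are illustrative only):
-
-```python
-predictions = tf.constant([[1.0, 2.0], [3.0, 4.0]])
-labels = tf.constant([[1.5, 2.0], [2.0, 4.0]])
-# Scalar `weights`: the loss is simply scaled by 2.0.
-loss = tf.contrib.losses.absolute_difference(predictions, labels, weights=2.0)
-```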
-
-- - -
-
-### `tf.contrib.losses.add_loss(*args, **kwargs)` {#add_loss}
-
-Adds an externally defined loss to the collection of losses. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.add_loss instead.
-
-##### Args:
-
-
-* <b>`loss`</b>: A loss `Tensor`.
-* <b>`loss_collection`</b>: Optional collection to add the loss to.
-
-
-- - -
-
-### `tf.contrib.losses.compute_weighted_loss(*args, **kwargs)` {#compute_weighted_loss}
-
-Computes the weighted loss. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.compute_weighted_loss instead.
-
-##### Args:
-
-
-* <b>`losses`</b>: A tensor of size [batch_size, d1, ... dN].
-* <b>`weights`</b>: A tensor of size [1] or [batch_size, d1, ... dK] where K < N.
-* <b>`scope`</b>: the scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` that returns the weighted loss.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is `None` or the shape is not compatible with
- `losses`, or if the number of dimensions (rank) of either `losses` or
- `weights` is missing.
-
-
-- - -
-
-### `tf.contrib.losses.cosine_distance(*args, **kwargs)` {#cosine_distance}
-
-Adds a cosine-distance loss to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.cosine_distance instead.
-
-Note that the function assumes that `predictions` and `labels` are already
-unit-normalized.
-
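-Since no normalization is performed internally, a typical call normalizes
-first (a sketch; `raw_predictions` and `raw_labels` are hypothetical inputs):
-
-```python
-predictions = tf.nn.l2_normalize(raw_predictions, dim=1)
-labels = tf.nn.l2_normalize(raw_labels, dim=1)
-loss = tf.contrib.losses.cosine_distance(predictions, labels, dim=1)
-```
-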
-##### Args:
-
-
-* <b>`predictions`</b>: An arbitrary matrix.
-* <b>`labels`</b>: A `Tensor` whose shape matches `predictions`.
-* <b>`dim`</b>: The dimension along which the cosine distance is computed.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
-  [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` shape doesn't match `labels` shape, or
- `weights` is `None`.
-
-
-- - -
-
-### `tf.contrib.losses.get_losses(*args, **kwargs)` {#get_losses}
-
-Gets the list of losses from the loss_collection. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.get_losses instead.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the losses to return.
-* <b>`loss_collection`</b>: Optional losses collection.
-
-##### Returns:
-
- a list of loss tensors.
-
-
-- - -
-
-### `tf.contrib.losses.get_regularization_losses(*args, **kwargs)` {#get_regularization_losses}
-
-Gets the regularization losses. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.get_regularization_losses instead.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the losses to return.
-
-##### Returns:
-
- A list of loss variables.
-
-
-- - -
-
-### `tf.contrib.losses.get_total_loss(*args, **kwargs)` {#get_total_loss}
-
-Returns a tensor whose value represents the total loss. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.get_total_loss instead.
-
-Notice that the function adds the given losses to the regularization losses.
-
-##### Args:
-
-
-* <b>`add_regularization_losses`</b>: A boolean indicating whether or not to use the
- regularization losses in the sum.
-* <b>`name`</b>: The name of the returned tensor.
-
-##### Returns:
-
- A `Tensor` whose value represents the total loss.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `losses` is not iterable.
-
-
-- - -
-
-### `tf.contrib.losses.hinge_loss(*args, **kwargs)` {#hinge_loss}
-
-Method that returns the loss tensor for hinge loss. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.hinge_loss instead. Note that the order of the predictions and labels arguments was changed.
-
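-In effect this computes, elementwise, with the 0/1 `labels` mapped to -1/+1:
-
-    loss = max(0, 1 - (2 * labels - 1) * logits)
-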
-##### Args:
-
-
-* <b>`logits`</b>: The logits, a float tensor.
-* <b>`labels`</b>: The ground truth output tensor. Its shape should match the shape of
- logits. The values of the tensor are expected to be 0.0 or 1.0.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A `Tensor` of same shape as `logits` and `labels` representing the loss
- values across the batch.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shapes of `logits` and `labels` don't match.
-
-
-- - -
-
-### `tf.contrib.losses.log_loss(*args, **kwargs)` {#log_loss}
-
-Adds a Log Loss term to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.log_loss instead. Note that the order of the predictions and labels arguments was changed.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided, then
-the loss is simply scaled by the given value. If `weights` is a tensor of size
-[batch_size], then the total loss for each sample of the batch is rescaled
-by the corresponding element in the `weights` vector. If the shape of
-`weights` matches the shape of `predictions`, then the loss of each
-measurable element of `predictions` is scaled by the corresponding value of
-`weights`.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted outputs.
-* <b>`labels`</b>: The ground truth output tensor, same dimensions as `predictions`.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
-  [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`epsilon`</b>: A small increment to add to avoid taking a log of zero.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `labels` or
- if the shape of `weights` is invalid.
-
-
-- - -
-
-### `tf.contrib.losses.mean_pairwise_squared_error(*args, **kwargs)` {#mean_pairwise_squared_error}
-
-Adds a pairwise-errors-squared loss to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.mean_pairwise_squared_error instead. Note that the order of the predictions and labels arguments was changed.
-
-Unlike `mean_squared_error`, which is a measure of the differences between
-corresponding elements of `predictions` and `labels`,
-`mean_pairwise_squared_error` is a measure of the differences between pairs of
-corresponding elements of `predictions` and `labels`.
-
-For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], there are
-three pairs of differences, which are summed to compute the loss:
-  loss = [ ((a-b) - (x-y))^2 + ((a-c) - (x-z))^2 + ((b-c) - (y-z))^2 ] / 3
-
-Note that since the inputs are of size [batch_size, d0, ... dN], the
-corresponding pairs are computed within each batch sample but not across
-samples within a batch. For example, if `predictions` represents a batch of
-16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs
-is drawn from each image, but not across images.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided, then
-the loss is simply scaled by the given value. If `weights` is a tensor of size
-[batch_size], then the total loss for each sample of the batch is rescaled
-by the corresponding element in the `weights` vector.
-
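-A small sketch instantiating the formula above (values are illustrative):
-
-```python
-labels = tf.constant([[1.0, 4.0, 6.0]])       # [a, b, c]
-predictions = tf.constant([[2.0, 4.0, 5.0]])  # [x, y, z]
-# Pairwise difference errors: -1, -2, -1, so by the formula above the
-# loss is (1 + 4 + 1) / 3 = 2.0.
-loss = tf.contrib.losses.mean_pairwise_squared_error(predictions, labels)
-```
-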
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted outputs, a tensor of size [batch_size, d0, .. dN]
- where N+1 is the total number of dimensions in `predictions`.
-* <b>`labels`</b>: The ground truth output tensor, whose shape must match the shape of
- the `predictions` tensor.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
-  [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `labels` or
- if the shape of `weights` is invalid.
-
-
-- - -
-
-### `tf.contrib.losses.mean_squared_error(*args, **kwargs)` {#mean_squared_error}
-
-Adds a Sum-of-Squares loss to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.mean_squared_error instead.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided, then
-the loss is simply scaled by the given value. If `weights` is a tensor of size
-[batch_size], then the total loss for each sample of the batch is rescaled
-by the corresponding element in the `weights` vector. If the shape of
-`weights` matches the shape of `predictions`, then the loss of each
-measurable element of `predictions` is scaled by the corresponding value of
-`weights`.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted outputs.
-* <b>`labels`</b>: The ground truth output tensor, same dimensions as `predictions`.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
-  [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `labels` or
- if the shape of `weights` is invalid.
-
-
-- - -
-
-### `tf.contrib.losses.sigmoid_cross_entropy(*args, **kwargs)` {#sigmoid_cross_entropy}
-
-Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.sigmoid_cross_entropy instead. Note that the order of the predictions and labels arguments was changed.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided,
-then the loss is simply scaled by the given value. If `weights` is a
-tensor of size [`batch_size`], then the loss weights apply to each
-corresponding sample.
-
-If `label_smoothing` is nonzero, smooth the labels towards 1/2:
-
- new_multiclass_labels = multiclass_labels * (1 - label_smoothing)
- + 0.5 * label_smoothing
-
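-For example, with `label_smoothing = 0.1`, a hard label of `1.0` becomes
-`1.0 * 0.9 + 0.5 * 0.1 = 0.95` and a hard label of `0.0` becomes `0.05`.
-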
-##### Args:
-
-
-* <b>`logits`</b>: [batch_size, num_classes] logits outputs of the network.
-* <b>`multi_class_labels`</b>: [batch_size, num_classes] labels in (0, 1).
-* <b>`weights`</b>: Coefficients for the loss. The tensor must be a scalar, a tensor of
- shape [batch_size] or shape [batch_size, num_classes].
-* <b>`label_smoothing`</b>: If greater than 0 then smooth the labels.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `logits` doesn't match that of
- `multi_class_labels` or if the shape of `weights` is invalid, or if
- `weights` is None.
-
-
-- - -
-
-### `tf.contrib.losses.softmax_cross_entropy(*args, **kwargs)` {#softmax_cross_entropy}
-
-Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided,
-then the loss is simply scaled by the given value. If `weights` is a
-tensor of size [`batch_size`], then the loss weights apply to each
-corresponding sample.
-
-If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:
- new_onehot_labels = onehot_labels * (1 - label_smoothing)
- + label_smoothing / num_classes
-
-##### Args:
-
-
-* <b>`logits`</b>: [batch_size, num_classes] logits outputs of the network.
-* <b>`onehot_labels`</b>: [batch_size, num_classes] one-hot-encoded labels.
-* <b>`weights`</b>: Coefficients for the loss. The tensor must be a scalar or a tensor
- of shape [batch_size].
-* <b>`label_smoothing`</b>: If greater than 0 then smooth the labels.
-* <b>`scope`</b>: the scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the mean loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `logits` doesn't match that of `onehot_labels`
- or if the shape of `weights` is invalid or if `weights` is None.
-
-
-- - -
-
-### `tf.contrib.losses.sparse_softmax_cross_entropy(*args, **kwargs)` {#sparse_softmax_cross_entropy}
-
-Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.sparse_softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided,
-then the loss is simply scaled by the given value. If `weights` is a
-tensor of size [`batch_size`], then the loss weights apply to each
-corresponding sample.
-
-##### Args:
-
-
-* <b>`logits`</b>: [batch_size, num_classes] logits outputs of the network.
-* <b>`labels`</b>: [batch_size, 1] or [batch_size] labels of dtype `int32` or `int64`
- in the range `[0, num_classes)`.
-* <b>`weights`</b>: Coefficients for the loss. The tensor must be a scalar or a tensor
- of shape [batch_size] or [batch_size, 1].
-* <b>`scope`</b>: the scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the mean loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shapes of `logits`, `labels`, and `weights` are
- incompatible, or if `weights` is None.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.metrics.md b/tensorflow/g3doc/api_docs/python/contrib.metrics.md
deleted file mode 100644
index f11fd9d193..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.metrics.md
+++ /dev/null
@@ -1,1971 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Metrics (contrib)
-[TOC]
-
-Ops for evaluation metrics and summary statistics.
-
-See the @{$python/contrib.metrics} guide.
-
-- - -
-
-### `tf.contrib.metrics.streaming_accuracy(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_accuracy}
-
-Calculates how often `predictions` matches `labels`.
-
-The `streaming_accuracy` function creates two local variables, `total` and
-`count`, that are used to compute the frequency with which `predictions`
-matches `labels`. This frequency is ultimately returned as `accuracy`: an
-idempotent operation that simply divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `accuracy`.
-Internally, an `is_correct` operation computes a `Tensor` with elements 1.0
-where the corresponding elements of `predictions` and `labels` match and 0.0
-otherwise. Then `update_op` increments `total` with the reduced sum of the
-product of `weights` and `is_correct`, and it increments `count` with the
-reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
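-A typical evaluation loop pairs the two returned ops (a sketch; `sess`,
-`predictions`, `labels`, and `num_batches` are assumed to exist):
-
-```python
-accuracy, update_op = tf.contrib.metrics.streaming_accuracy(predictions,
-                                                            labels)
-sess.run(tf.local_variables_initializer())  # `total` and `count` live here
-for _ in range(num_batches):
-  sess.run(update_op)        # accumulate over the stream
-print(sess.run(accuracy))    # current value of total / count
-```
-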
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of any shape.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose shape matches
- `predictions`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `accuracy` should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`accuracy`</b>: A `Tensor` representing the accuracy, the value of `total` divided
- by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `accuracy`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
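-A minimal sketch of the usual two-step evaluation loop (the same pattern
-applies to the other streaming metrics below); `next_batch()` is a
-hypothetical data source:
-
-```python
-import tensorflow as tf
-
-predictions = tf.placeholder(tf.int64, [None])
-labels = tf.placeholder(tf.int64, [None])
-accuracy, update_op = tf.contrib.metrics.streaming_accuracy(predictions, labels)
-
-with tf.Session() as sess:
-  # The metric's `total` and `count` live in local variables.
-  sess.run(tf.local_variables_initializer())
-  for _ in range(10):  # number of evaluation batches
-    batch_predictions, batch_labels = next_batch()
-    sess.run(update_op, feed_dict={predictions: batch_predictions,
-                                   labels: batch_labels})
-  print(sess.run(accuracy))  # total / count over all batches seen
-```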
-
-- - -
-
-### `tf.contrib.metrics.streaming_mean(values, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean}
-
-Computes the (weighted) mean of the given values.
-
-The `streaming_mean` function creates two local variables, `total` and
-`count`, that are used to compute the average of `values`. This average is
-ultimately returned as `mean`, which is an idempotent operation that simply
-divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `mean`.
-`update_op` increments `total` with the reduced sum of the product of `values`
-and `weights`, and it increments `count` with the reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`values`</b>: A `Tensor` of arbitrary dimensions.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `values`, and
- must be broadcastable to `values` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `values` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `mean`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op`
- should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean`</b>: A `Tensor` representing the current mean, the value of `total` divided
- by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
-  appropriately and whose value matches `mean`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match `values`,
- or if either `metrics_collections` or `updates_collections` are not a list
- or tuple.
-
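-A small sketch of masking entries with zero weights (the tensors here are
-illustrative):
-
-```python
-values = tf.constant([1.0, 2.0, 3.0, 4.0])
-weights = tf.constant([1.0, 1.0, 0.0, 0.0])  # mask the last two values
-mean, update_op = tf.contrib.metrics.streaming_mean(values, weights)
-# After one run of `update_op`, `mean` evaluates to (1.0 + 2.0) / 2.0 = 1.5.
-```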
-
-- - -
-
-### `tf.contrib.metrics.streaming_recall(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall}
-
-Computes the recall of the predictions with respect to the labels.
-
-The `streaming_recall` function creates two local variables, `true_positives`
-and `false_negatives`, that are used to compute the recall. This value is
-ultimately returned as `recall`, an idempotent operation that simply divides
-`true_positives` by the sum of `true_positives` and `false_negatives`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` that updates these variables and returns the `recall`. `update_op`
-weights each prediction by the corresponding value in `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `bool` `Tensor` of arbitrary shape.
-* <b>`labels`</b>: The ground truth values, a `bool` `Tensor` whose dimensions must
- match `predictions`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `recall` should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`recall`</b>: Scalar float `Tensor` with the value of `true_positives` divided
- by the sum of `true_positives` and `false_negatives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_negatives` variables appropriately and whose value matches
- `recall`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_recall_at_thresholds(predictions, labels, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall_at_thresholds}
-
-Computes various recall values for different `thresholds` on `predictions`.
-
-The `streaming_recall_at_thresholds` function creates four local variables,
-`true_positives`, `true_negatives`, `false_positives` and `false_negatives`
-for various values of thresholds. `recall[i]` is defined as the total weight
-of values in `predictions` above `thresholds[i]` whose corresponding entry in
-`labels` is `True`, divided by the total weight of `True` values in `labels`
-(`true_positives[i] / (true_positives[i] + false_negatives[i])`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `recall`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`thresholds`</b>: A python list or tuple of float thresholds in `[0, 1]`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `recall` should be
- added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`recall`</b>: A float `Tensor` of shape `[len(thresholds)]`.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables that
- are used in the computation of `recall`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
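-A short sketch with three operating points (the tensors here are
-illustrative):
-
-```python
-predictions = tf.constant([0.1, 0.4, 0.8, 0.9])
-labels = tf.constant([False, True, True, True])
-recall, update_op = tf.contrib.metrics.streaming_recall_at_thresholds(
-    predictions, labels, thresholds=[0.25, 0.5, 0.75])
-# `recall` has shape [3], one recall value per threshold.
-```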
-
-- - -
-
-### `tf.contrib.metrics.streaming_precision(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_precision}
-
-Computes the precision of the predictions with respect to the labels.
-
-The `streaming_precision` function creates two local variables,
-`true_positives` and `false_positives`, that are used to compute the
-precision. This value is ultimately returned as `precision`, an idempotent
-operation that simply divides `true_positives` by the sum of `true_positives`
-and `false_positives`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision`. `update_op` weights each prediction by the corresponding value in
-`weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `bool` `Tensor` of arbitrary shape.
-* <b>`labels`</b>: The ground truth values, a `bool` `Tensor` whose dimensions must
- match `predictions`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `precision` should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`precision`</b>: Scalar float `Tensor` with the value of `true_positives`
- divided by the sum of `true_positives` and `false_positives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_positives` variables appropriately and whose value matches
- `precision`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_precision_at_thresholds(predictions, labels, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_precision_at_thresholds}
-
-Computes precision values for different `thresholds` on `predictions`.
-
-The `streaming_precision_at_thresholds` function creates four local variables,
-`true_positives`, `true_negatives`, `false_positives` and `false_negatives`
-for various values of thresholds. `precision[i]` is defined as the total
-weight of values in `predictions` above `thresholds[i]` whose corresponding
-entry in `labels` is `True`, divided by the total weight of values in
-`predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] +
-false_positives[i])`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`thresholds`</b>: A python list or tuple of float thresholds in `[0, 1]`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `precision`
-  should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`precision`</b>: A float `Tensor` of shape `[len(thresholds)]`.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables that
- are used in the computation of `precision`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_auc(predictions, labels, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, curve='ROC', name=None)` {#streaming_auc}
-
-Computes the approximate AUC via a Riemann sum.
-
-The `streaming_auc` function creates four local variables, `true_positives`,
-`true_negatives`, `false_positives` and `false_negatives`, that are used to
-compute the AUC. To discretize the AUC curve, a linearly spaced set of
-thresholds is used to compute pairs of recall and precision values. The area
-under the ROC-curve is therefore computed using the height of the recall
-values by the false positive rate, while the area under the PR-curve is
-computed using the height of the precision values by the recall.
-
-This value is ultimately returned as `auc`, an idempotent operation that
-computes the area under a discretized curve of precision versus recall values
-(computed using the aforementioned variables). The `num_thresholds` variable
-controls the degree of discretization, with larger numbers of thresholds more
-closely approximating the true AUC. The quality of the approximation may vary
-dramatically depending on `num_thresholds`.
-
-For best results, `predictions` should be distributed approximately uniformly
-in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC
-approximation may be poor if this is not the case.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `auc`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`num_thresholds`</b>: The number of thresholds to use when discretizing
-  the ROC curve.
-* <b>`metrics_collections`</b>: An optional list of collections that `auc` should be
- added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`curve`</b>: Specifies the name of the curve to be computed, 'ROC' [default] or
-  'PR' for the Precision-Recall-curve.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`auc`</b>: A scalar `Tensor` representing the current area-under-curve.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables
- appropriately and whose value matches `auc`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
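-For example, a brief sketch of the precision-recall variant with a finer
-discretization (the argument values here are illustrative):
-
-```python
-auc, update_op = tf.contrib.metrics.streaming_auc(
-    predictions, labels, num_thresholds=500, curve='PR')
-```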
-
-- - -
-
-### `tf.contrib.metrics.streaming_recall_at_k(*args, **kwargs)` {#streaming_recall_at_k}
-
-Computes the recall@k of the predictions with respect to dense labels. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-08.
-Instructions for updating:
-Please use `streaming_sparse_recall_at_k`, and reshape labels from [batch_size] to [batch_size, 1].
-
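-A minimal migration sketch per the note above (`dense_labels` and
-`predictions` are hypothetical tensors):
-
-```python
-# Reshape dense labels [batch_size] -> [batch_size, 1], then switch functions.
-labels = tf.reshape(tf.cast(dense_labels, tf.int64), [-1, 1])
-recall, update_op = tf.contrib.metrics.streaming_sparse_recall_at_k(
-    predictions, labels, k=5)
-```
-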
-The `streaming_recall_at_k` function creates two local variables, `total` and
-`count`, that are used to compute the recall@k frequency. This frequency is
-ultimately returned as `recall_at_<k>`: an idempotent operation that simply
-divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`recall_at_<k>`. Internally, an `in_top_k` operation computes a `Tensor` with
-shape [batch_size] whose elements indicate whether or not the corresponding
-label is in the top `k` `predictions`. Then `update_op` increments `total`
-with the reduced sum of `weights` where `in_top_k` is `True`, and it
-increments `count` with the reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A float `Tensor` of dimension [batch_size, num_classes].
-* <b>`labels`</b>: A `Tensor` of dimension [batch_size] whose type is in `int32`,
- `int64`.
-* <b>`k`</b>: The number of top elements to look at for computing recall.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `recall_at_k`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections `update_op` should be
- added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`recall_at_k`</b>: A `Tensor` representing the recall@k, the fraction of labels
- which fall into the top `k` predictions.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `recall_at_k`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_absolute_error}
-
-Computes the mean absolute error between the labels and predictions.
-
-The `streaming_mean_absolute_error` function creates two local variables,
-`total` and `count`, that are used to compute the mean absolute error. This
-average is weighted by `weights`, and it is ultimately returned as
-`mean_absolute_error`: an idempotent operation that simply divides `total` by
-`count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`mean_absolute_error`. Internally, an `absolute_errors` operation computes the
-absolute value of the differences between `predictions` and `labels`. Then
-`update_op` increments `total` with the reduced sum of the product of
-`weights` and `absolute_errors`, and it increments `count` with the reduced
-sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that
- `mean_absolute_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_absolute_error`</b>: A `Tensor` representing the current mean, the value of
- `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `mean_absolute_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_mean_iou(predictions, labels, num_classes, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_iou}
-
-Calculate per-step mean Intersection-Over-Union (mIOU).
-
-Mean Intersection-Over-Union is a common evaluation metric for
-semantic image segmentation, which first computes the IOU for each
-semantic class and then computes the average over classes.
-
-IOU is defined as follows:
-
-    IOU = true_positive / (true_positive + false_positive + false_negative).
-
-The predictions are accumulated in a confusion matrix, weighted by `weights`,
-and mIOU is then calculated from it.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `mean_iou`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of prediction results for semantic labels, whose
- shape is [batch size] and type `int32` or `int64`. The tensor will be
-  flattened if its rank > 1.
-* <b>`labels`</b>: A `Tensor` of ground truth labels with shape [batch size] and of
-  type `int32` or `int64`. The tensor will be flattened if its rank > 1.
-* <b>`num_classes`</b>: The possible number of labels the prediction task can
- have. This value must be provided, since a confusion matrix of
- dimension = [num_classes, num_classes] will be allocated.
-* <b>`weights`</b>: An optional `Tensor` whose shape is broadcastable to `predictions`.
-* <b>`metrics_collections`</b>: An optional list of collections that `mean_iou`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections `update_op` should be
- added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_iou`</b>: A `Tensor` representing the mean intersection-over-union.
-* <b>`update_op`</b>: An operation that increments the confusion matrix.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
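-A short sketch for a three-class task (the tensors here are illustrative):
-
-```python
-predictions = tf.constant([0, 1, 2, 1], dtype=tf.int64)
-labels = tf.constant([0, 1, 1, 1], dtype=tf.int64)
-miou, update_op = tf.contrib.metrics.streaming_mean_iou(
-    predictions, labels, num_classes=3)
-# Running `update_op` accumulates a 3x3 confusion matrix; `miou` then
-# averages the per-class IOU values computed from it.
-```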
-
-- - -
-
-### `tf.contrib.metrics.streaming_mean_relative_error(predictions, labels, normalizer, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_relative_error}
-
-Computes the mean relative error by normalizing with the given values.
-
-The `streaming_mean_relative_error` function creates two local variables,
-`total` and `count`, that are used to compute the mean relative error.
-This average is weighted by `weights`, and it is ultimately returned as
-`mean_relative_error`: an idempotent operation that simply divides `total` by
-`count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`mean_relative_error`. Internally, a `relative_errors` operation divides the
-absolute value of the differences between `predictions` and `labels` by the
-`normalizer`. Then `update_op` increments `total` with the reduced sum of the
-product of `weights` and `relative_errors`, and it increments `count` with the
-reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`normalizer`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that
- `mean_relative_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_relative_error`</b>: A `Tensor` representing the current mean, the value of
- `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `mean_relative_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_squared_error}
-
-Computes the mean squared error between the labels and predictions.
-
-The `streaming_mean_squared_error` function creates two local variables,
-`total` and `count`, that are used to compute the mean squared error.
-This average is weighted by `weights`, and it is ultimately returned as
-`mean_squared_error`: an idempotent operation that simply divides `total` by
-`count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`mean_squared_error`. Internally, a `squared_error` operation computes the
-element-wise square of the difference between `predictions` and `labels`. Then
-`update_op` increments `total` with the reduced sum of the product of
-`weights` and `squared_error`, and it increments `count` with the reduced sum
-of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that
- `mean_squared_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_squared_error`</b>: A `Tensor` representing the current mean, the value of
- `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `mean_squared_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_mean_tensor(values, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_tensor}
-
-Computes the element-wise (weighted) mean of the given tensors.
-
-In contrast to the `streaming_mean` function which returns a scalar with the
-mean, this function returns an average tensor with the same shape as the
-input tensors.
-
-The `streaming_mean_tensor` function creates two local variables,
-`total_tensor` and `count_tensor`, that are used to compute the average of
-`values`. This average is ultimately returned as `mean`, which is an
-idempotent operation that simply divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `mean`.
-`update_op` increments `total` with the reduced sum of the product of `values`
-and `weights`, and it increments `count` with the reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`values`</b>: A `Tensor` of arbitrary dimensions.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `values`, and
- must be broadcastable to `values` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `values` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `mean`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op`
- should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean`</b>: A float `Tensor` representing the current mean, the value of `total`
- divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
-  appropriately and whose value matches `mean`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match `values`,
- or if either `metrics_collections` or `updates_collections` are not a list
- or tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_root_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_root_mean_squared_error}
-
-Computes the root mean squared error between the labels and predictions.
-
-The `streaming_root_mean_squared_error` function creates two local variables,
-`total` and `count`, that are used to compute the root mean squared error.
-This average is weighted by `weights`, and it is ultimately returned as
-`root_mean_squared_error`: an idempotent operation that takes the square root
-of the division of `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`root_mean_squared_error`. Internally, a `squared_error` operation computes
-the element-wise square of the difference between `predictions` and `labels`.
-Then `update_op` increments `total` with the reduced sum of the product of
-`weights` and `squared_error`, and it increments `count` with the reduced sum
-of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that
- `root_mean_squared_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`root_mean_squared_error`</b>: A `Tensor` representing the current root mean
-  squared error: the square root of `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `root_mean_squared_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_covariance(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_covariance}
-
-Computes the unbiased sample covariance between `predictions` and `labels`.
-
-The `streaming_covariance` function creates four local variables,
-`comoment`, `mean_prediction`, `mean_label`, and `count`, which are used to
-compute the sample covariance between predictions and labels across multiple
-batches of data. The covariance is ultimately returned as an idempotent
-operation that simply divides `comoment` by `count` - 1. We use `count` - 1
-in order to get an unbiased estimate.
-
-The algorithm used for this online computation is described in
-https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance.
-Specifically, the formula used to combine two sample comoments is
-`C_AB = C_A + C_B + (E[x_A] - E[x_B]) * (E[y_A] - E[y_B]) * n_A * n_B / n_AB`
-The comoment for a single batch of data is simply
-`sum((x - E[x]) * (y - E[y]))`, optionally weighted.
-
-If `weights` is not None, then it is used to compute weighted comoments,
-means, and count. NOTE: these weights are treated as "frequency weights", as
-opposed to "reliability weights". See discussion of the difference on
-https://wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance
-
-To facilitate the computation of covariance across multiple batches of data,
-the function creates an `update_op` operation, which updates underlying
-variables and returns the updated covariance.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary size.
-* <b>`labels`</b>: A `Tensor` of the same size as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: A `Tensor` representing the current unbiased sample covariance,
- `comoment` / (`count` - 1).
-* <b>`update_op`</b>: An operation that updates the local variables appropriately.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If labels and predictions are of different sizes or if either
- `metrics_collections` or `updates_collections` are not a list or tuple.
-
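-A sketch of a consistency check against a one-shot estimate (NumPy's
-`np.cov` also divides by `count` - 1):
-
-```python
-import numpy as np
-
-x = np.random.randn(100).astype(np.float32)
-y = 2.0 * x + np.random.randn(100).astype(np.float32)
-
-preds = tf.placeholder(tf.float32, [None])
-lbls = tf.placeholder(tf.float32, [None])
-cov, update_op = tf.contrib.metrics.streaming_covariance(preds, lbls)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  for i in range(0, 100, 25):  # four batches of 25
-    sess.run(update_op, feed_dict={preds: x[i:i + 25], lbls: y[i:i + 25]})
-  # The streamed estimate should closely match the single-pass one.
-  print(sess.run(cov), np.cov(x, y)[0, 1])
-```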
-
-- - -
-
-### `tf.contrib.metrics.streaming_pearson_correlation(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_pearson_correlation}
-
-Computes Pearson correlation coefficient between `predictions`, `labels`.
-
-The `streaming_pearson_correlation` function delegates to
-`streaming_covariance` the tracking of three [co]variances:
-
-- `streaming_covariance(predictions, labels)`, i.e. covariance
-- `streaming_covariance(predictions, predictions)`, i.e. variance
-- `streaming_covariance(labels, labels)`, i.e. variance
-
-The product-moment correlation ultimately returned is an idempotent operation
-`cov(predictions, labels) / sqrt(var(predictions) * var(labels))`. To
-facilitate correlation computation across multiple batches, the function
-groups the `update_op`s of the underlying streaming_covariance and returns an
-`update_op`.
-
-If `weights` is not None, then it is used to compute a weighted correlation.
-NOTE: these weights are treated as "frequency weights", as opposed to
-"reliability weights". See discussion of the difference on
-https://wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary size.
-* <b>`labels`</b>: A `Tensor` of the same size as predictions.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`pearson_r`</b>: A `Tensor` representing the current Pearson product-moment
- correlation coefficient, the value of
- `cov(predictions, labels) / sqrt(var(predictions) * var(labels))`.
-* <b>`update_op`</b>: An operation that updates the underlying variables appropriately.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `labels` and `predictions` are of different sizes, or if
- `weights` is the wrong size, or if either `metrics_collections` or
- `updates_collections` are not a `list` or `tuple`.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_mean_cosine_distance(predictions, labels, dim, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_cosine_distance}
-
-Computes the cosine distance between the labels and predictions.
-
-The `streaming_mean_cosine_distance` function creates two local variables,
-`total` and `count`, that are used to compute the average cosine distance
-between `predictions` and `labels`. This average is weighted by `weights`,
-and it is ultimately returned as `mean_distance`, which is an idempotent
-operation that simply divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`mean_distance`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of the same shape as `labels`.
-* <b>`labels`</b>: A `Tensor` of arbitrary shape.
-* <b>`dim`</b>: The dimension along which the cosine distance is computed.
-* <b>`weights`</b>: An optional `Tensor` whose shape is broadcastable to `predictions`,
- and whose dimension `dim` is 1.
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_distance`</b>: A `Tensor` representing the current mean, the value of
- `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_percentage_less(values, threshold, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_percentage_less}
-
-Computes the percentage of values less than the given threshold.
-
-The `streaming_percentage_less` function creates two local variables,
-`total` and `count`, that are used to compute the percentage of `values` that
-fall below `threshold`. This rate is weighted by `weights`, and it is
-ultimately returned as `percentage` which is an idempotent operation that
-simply divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`percentage`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`values`</b>: A numeric `Tensor` of arbitrary size.
-* <b>`threshold`</b>: A scalar threshold.
-* <b>`weights`</b>: An optional `Tensor` whose shape is broadcastable to `values`.
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`percentage`</b>: A `Tensor` representing the current mean, the value of `total`
- divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match `values`,
- or if either `metrics_collections` or `updates_collections` are not a list
- or tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_sensitivity_at_specificity(predictions, labels, specificity, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sensitivity_at_specificity}
-
-Computes the sensitivity at a given specificity.
-
-The `streaming_sensitivity_at_specificity` function creates four local
-variables, `true_positives`, `true_negatives`, `false_positives` and
-`false_negatives`, that are used to compute the sensitivity at the given
-specificity value. The threshold for the given specificity value is computed
-and used to evaluate the corresponding sensitivity.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`sensitivity`. `update_op` increments the `true_positives`, `true_negatives`,
-`false_positives` and `false_negatives` counts with the weight of each case
-found in the `predictions` and `labels`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-For additional information about specificity and sensitivity, see the
-following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`specificity`</b>: A scalar value in range `[0, 1]`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`num_thresholds`</b>: The number of thresholds to use for matching the given
- specificity.
-* <b>`metrics_collections`</b>: An optional list of collections that `sensitivity`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`sensitivity`</b>: A scalar `Tensor` representing the sensitivity at the given
- `specificity` value.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables
- appropriately and whose value matches `sensitivity`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- `specificity` is not between 0 and 1, or if either `metrics_collections`
- or `updates_collections` are not a list or tuple.
-
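-A brief sketch, reading off sensitivity at 95% specificity (the argument
-values here are illustrative):
-
-```python
-sensitivity, update_op = tf.contrib.metrics.streaming_sensitivity_at_specificity(
-    predictions, labels, specificity=0.95)
-```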
-
-- - -
-
-### `tf.contrib.metrics.streaming_sparse_average_precision_at_k(predictions, labels, k, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_average_precision_at_k}
-
-Computes average precision@k of predictions with respect to sparse labels.
-
-See `sparse_average_precision_at_k` for details on the formula. `weights` are
-applied to the result of `sparse_average_precision_at_k`.
-
-`streaming_sparse_average_precision_at_k` creates two local variables,
-`average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that
-are used to compute the frequency. This frequency is ultimately returned as
-`average_precision_at_<k>`: an idempotent operation that simply divides
-`average_precision_at_<k>/total` by `average_precision_at_<k>/max`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor`
-indicating the top `k` `predictions`. Set operations applied to `top_k` and
-`labels` calculate the true positives and false positives weighted by
-`weights`. Then `update_op` increments `true_positive_at_<k>` and
-`false_positive_at_<k>` using these values.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Float `Tensor` with shape [D1, ... DN, num_classes] where
- N >= 1. Commonly, N=1 and `predictions` has shape
- [batch size, num_classes]. The final dimension contains the logit values
- for each class. [D1, ... DN] must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match
-  `predictions`. Values should be in range [0, num_classes), where
- num_classes is the last dimension of `predictions`. Values outside this
- range are ignored.
-* <b>`k`</b>: Integer, k for @k metric. This will calculate an average precision for
- range `[1,k]`, as documented above.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or n-1, where n is the rank of
- `labels`. If the latter, it must be broadcastable to `labels` (i.e., all
- dimensions must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependent ops.
-
-##### Returns:
-
-
-* <b>`mean_average_precision`</b>: Scalar `float64` `Tensor` with the mean average
-  precision value.
-* <b>`update`</b>: `Operation` that increments variables appropriately, and whose
-  value matches `mean_average_precision`.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_sparse_precision_at_k(predictions, labels, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_precision_at_k}
-
-Computes precision@k of the predictions with respect to sparse labels.
-
-If `class_id` is not specified, we calculate precision as the ratio of true
- positives (i.e., correct predictions, items in the top `k` highest
- `predictions` that are found in the corresponding row in `labels`) to
- positives (all top `k` `predictions`).
-If `class_id` is specified, we calculate precision by considering only the
- rows in the batch for which `class_id` is in the top `k` highest
- `predictions`, and computing the fraction of them for which `class_id` is
- in the corresponding row in `labels`.
-
-We expect precision to decrease as `k` increases.
-
-`streaming_sparse_precision_at_k` creates two local variables,
-`true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute
-the precision@k frequency. This frequency is ultimately returned as
-`precision_at_<k>`: an idempotent operation that simply divides
-`true_positive_at_<k>` by total (`true_positive_at_<k>` +
-`false_positive_at_<k>`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor`
-indicating the top `k` `predictions`. Set operations applied to `top_k` and
-`labels` calculate the true positives and false positives weighted by
-`weights`. Then `update_op` increments `true_positive_at_<k>` and
-`false_positive_at_<k>` using these values.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Float `Tensor` with shape [D1, ... DN, num_classes] where
- N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes].
- The final dimension contains the logit values for each class. [D1, ... DN]
- must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match
- `predictions`. Values should be in range [0, num_classes), where
- num_classes is the last dimension of `predictions`. Values outside this
- range are ignored.
-* <b>`k`</b>: Integer, k for @k metric.
-* <b>`class_id`</b>: Integer class ID for which we want binary metrics. This should be
-  in range [0, num_classes), where num_classes is the last dimension of
- `predictions`. If `class_id` is outside this range, the method returns
- NAN.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or n-1, where n is the rank of
- `labels`. If the latter, it must be broadcastable to `labels` (i.e., all
- dimensions must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependent ops.
-
-##### Returns:
-
-
-* <b>`precision`</b>: Scalar `float64` `Tensor` with the value of `true_positives`
- divided by the sum of `true_positives` and `false_positives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_positives` variables appropriately, and whose value matches
- `precision`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match
- `predictions`, or if either `metrics_collections` or `updates_collections`
- are not a list or tuple.
-
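-A small sketch of precision@2 (the tensors here are illustrative):
-
-```python
-predictions = tf.constant([[0.1, 0.6, 0.3],
-                           [0.8, 0.1, 0.4]])     # [batch=2, num_classes=3]
-labels = tf.constant([[1], [2]], dtype=tf.int64)  # [batch=2, num_labels=1]
-precision, update_op = tf.contrib.metrics.streaming_sparse_precision_at_k(
-    predictions, labels, k=2)
-# Top-2 classes are {1, 2} and {0, 2}; the true positives are class 1 in
-# row 0 and class 2 in row 1, so precision@2 = 2 / 4 = 0.5.
-```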
-
-- - -
-
-### `tf.contrib.metrics.streaming_sparse_precision_at_top_k(top_k_predictions, labels, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_precision_at_top_k}
-
-Computes precision@k of top-k predictions with respect to sparse labels.
-
-If `class_id` is not specified, we calculate precision as the ratio of
- true positives (i.e., correct predictions, items in `top_k_predictions`
- that are found in the corresponding row in `labels`) to positives (all
- `top_k_predictions`).
-If `class_id` is specified, we calculate precision by considering only the
- rows in the batch for which `class_id` is in the top `k` highest
- `predictions`, and computing the fraction of them for which `class_id` is
- in the corresponding row in `labels`.
-
-We expect precision to decrease as `k` increases.
-
-`streaming_sparse_precision_at_top_k` creates two local variables,
-`true_positive_at_k` and `false_positive_at_k`, that are used to compute
-the precision@k frequency. This frequency is ultimately returned as
-`precision_at_k`: an idempotent operation that simply divides
-`true_positive_at_k` by total (`true_positive_at_k` + `false_positive_at_k`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision_at_k`. Internally, set operations applied to `top_k_predictions`
-and `labels` calculate the true positives and false positives weighted by
-`weights`. Then `update_op` increments `true_positive_at_k` and
-`false_positive_at_k` using these values.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`top_k_predictions`</b>: Integer `Tensor` with shape [D1, ... DN, k] where
- N >= 1. Commonly, N=1 and top_k_predictions has shape [batch size, k].
- The final dimension contains the indices of top-k labels. [D1, ... DN]
- must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match
- `top_k_predictions`. Values should be in range [0, num_classes), where
- num_classes is the last dimension of `predictions`. Values outside this
- range are ignored.
-* <b>`class_id`</b>: Integer class ID for which we want binary metrics. This should be
- in range [0, num_classes), where num_classes is the last dimension of
- `predictions`. If `class_id` is outside this range, the method returns
- NAN.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or n-1, where n is the rank of
- `labels`. If the latter, it must be broadcastable to `labels` (i.e., all
- dimensions must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependent ops.
-
-##### Returns:
-
-
-* <b>`precision`</b>: Scalar `float64` `Tensor` with the value of `true_positives`
- divided by the sum of `true_positives` and `false_positives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_positives` variables appropriately, and whose value matches
- `precision`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match
- `predictions`, or if either `metrics_collections` or `updates_collections`
- are not a list or tuple.
-* <b>`ValueError`</b>: If `top_k_predictions` has rank < 2.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_sparse_recall_at_k(predictions, labels, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_recall_at_k}
-
-Computes recall@k of the predictions with respect to sparse labels.
-
-If `class_id` is not specified, we'll calculate recall as the ratio of true
- positives (i.e., correct predictions, items in the top `k` highest
- `predictions` that are found in the corresponding row in `labels`) to
- actual positives (the full `labels` row).
-If `class_id` is specified, we calculate recall by considering only the rows
- in the batch for which `class_id` is in `labels`, and computing the
-  fraction of them for which `class_id` is in the top `k` highest
-  `predictions`.
-
-`streaming_sparse_recall_at_k` creates two local variables,
-`true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute
-the recall_at_k frequency. This frequency is ultimately returned as
-`recall_at_<k>`: an idempotent operation that simply divides
-`true_positive_at_<k>` by total (`true_positive_at_<k>` +
-`false_negative_at_<k>`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor`
-indicating the top `k` `predictions`. Set operations applied to `top_k` and
-`labels` calculate the true positives and false negatives weighted by
-`weights`. Then `update_op` increments `true_positive_at_<k>` and
-`false_negative_at_<k>` using these values.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Float `Tensor` with shape [D1, ... DN, num_classes] where
- N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes].
- The final dimension contains the logit values for each class. [D1, ... DN]
- must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match `predictions`.
- Values should be in range [0, num_classes), where num_classes is the last
- dimension of `predictions`. Values outside this range always count
- towards `false_negative_at_<k>`.
-* <b>`k`</b>: Integer, k for @k metric.
-* <b>`class_id`</b>: Integer class ID for which we want binary metrics. This should be
- in range [0, num_classes), where num_classes is the last dimension of
- `predictions`. If class_id is outside this range, the method returns NAN.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or n-1, where n is the rank of
- `labels`. If the latter, it must be broadcastable to `labels` (i.e., all
- dimensions must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependent ops.
-
-##### Returns:
-
-
-* <b>`recall`</b>: Scalar `float64` `Tensor` with the value of `true_positives` divided
- by the sum of `true_positives` and `false_negatives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_negatives` variables appropriately, and whose value matches
- `recall`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match
- `predictions`, or if either `metrics_collections` or `updates_collections`
- are not a list or tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_specificity_at_sensitivity(predictions, labels, sensitivity, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None)` {#streaming_specificity_at_sensitivity}
-
-Computes the specificity at a given sensitivity.
-
-The `streaming_specificity_at_sensitivity` function creates four local
-variables, `true_positives`, `true_negatives`, `false_positives` and
-`false_negatives` that are used to compute the specificity at the given
-sensitivity value. The threshold for the given sensitivity value is computed
-and used to evaluate the corresponding specificity.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`specificity`. `update_op` increments the `true_positives`, `true_negatives`,
-`false_positives` and `false_negatives` counts with the weight of each case
-found in the `predictions` and `labels`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-For additional information about specificity and sensitivity, see the
-following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`sensitivity`</b>: A scalar value in range `[0, 1]`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`num_thresholds`</b>: The number of thresholds to use for matching the given
- sensitivity.
-* <b>`metrics_collections`</b>: An optional list of collections that `specificity`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`specificity`</b>: A scalar `Tensor` representing the specificity at the given
- `sensitivity` value.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables
- appropriately and whose value matches `specificity`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- `sensitivity` is not between 0 and 1, or if either `metrics_collections`
- or `updates_collections` are not a list or tuple.
-
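-As an illustrative sketch of the usage pattern (the values are invented, and
-the exact result depends on the thresholds chosen internally):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([0.1, 0.4, 0.35, 0.8])
-labels = tf.constant([False, False, True, True])
-
-specificity, update_op = (
-    tf.contrib.metrics.streaming_specificity_at_sensitivity(
-        predictions, labels, sensitivity=0.95))
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)           # accumulate the four confusion counts
-  print(sess.run(specificity))  # specificity at the computed threshold
-```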
-
-- - -
-
-### `tf.contrib.metrics.streaming_concat(values, axis=0, max_size=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_concat}
-
-Concatenate values along an axis across batches.
-
-The function `streaming_concat` creates two local variables, `array` and
-`size`, that are used to store concatenated values. Internally, `array` is
-used as storage for a dynamic array (if `max_size` is `None`), which ensures
-that updates can be run in amortized constant time.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that appends the values of a tensor and returns the
-length of the concatenated axis.
-
-This op allows for evaluating metrics that cannot be updated incrementally
-using the same framework as other streaming metrics.
-
-##### Args:
-
-
-* <b>`values`</b>: `Tensor` to concatenate. Rank and the shape along all axes other
- than the axis to concatenate along must be statically known.
-* <b>`axis`</b>: optional integer axis to concatenate along.
-* <b>`max_size`</b>: optional integer maximum size of `values` along the given axis.
- Once the maximum size is reached, further updates are no-ops. By default,
- there is no maximum size: the array is resized as necessary.
-* <b>`metrics_collections`</b>: An optional list of collections that `value`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections `update_op` should be
- added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value`</b>: A `Tensor` representing the concatenated values.
-* <b>`update_op`</b>: An operation that concatenates the next values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `values` does not have a statically known rank, `axis` is
- not in the valid range or the size of `values` is not statically known
- along any axis other than `axis`.
-
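-A minimal sketch of accumulating values across batches (the placeholder and
-feeds are invented for the example):
-
-```python
-import tensorflow as tf
-
-# Rank and all non-concatenated dimensions must be statically known.
-batch = tf.placeholder(tf.float32, shape=[None, 2])
-value, update_op = tf.contrib.metrics.streaming_concat(batch, axis=0)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op, feed_dict={batch: [[1., 2.]]})
-  sess.run(update_op, feed_dict={batch: [[3., 4.], [5., 6.]]})
-  print(sess.run(value))  # [[1. 2.] [3. 4.] [5. 6.]]
-```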
-
-- - -
-
-### `tf.contrib.metrics.streaming_false_negatives(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_false_negatives}
-
-Computes the total number of false negatives.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of arbitrary dimensions. Will
- be cast to `bool`.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose dimensions must match
- `predictions`. Will be cast to `bool`.
-* <b>`weights`</b>: Optional `Tensor` whose rank is either 0, or the same rank as
- `labels`, and must be broadcastable to `labels` (i.e., all dimensions
- must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value_tensor`</b>: A `Tensor` representing the current value of the metric.
-* <b>`update_op`</b>: An operation that accumulates the error from a batch of data.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match `predictions`,
- or if either `metrics_collections` or `updates_collections` are not a list
- or tuple.
-
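-A short sketch of the counting behavior (tensors invented for the example):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([True, False, False, True])
-labels = tf.constant([True, True, False, False])
-
-fn, update_op = tf.contrib.metrics.streaming_false_negatives(
-    predictions, labels)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)
-  print(sess.run(fn))  # 1.0: one positive label was predicted negative
-```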
-
-- - -
-
-### `tf.contrib.metrics.streaming_false_negatives_at_thresholds(predictions, labels, thresholds, weights=None)` {#streaming_false_negatives_at_thresholds}
-
-
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_false_positives(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_false_positives}
-
-Sum the weights of false positives.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of arbitrary dimensions. Will
- be cast to `bool`.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose dimensions must match
- `predictions`. Will be cast to `bool`.
-* <b>`weights`</b>: Optional `Tensor` whose rank is either 0, or the same rank as
- `labels`, and must be broadcastable to `labels` (i.e., all dimensions
- must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value_tensor`</b>: A `Tensor` representing the current value of the metric.
-* <b>`update_op`</b>: An operation that accumulates the error from a batch of data.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_false_positives_at_thresholds(predictions, labels, thresholds, weights=None)` {#streaming_false_positives_at_thresholds}
-
-
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_true_negatives(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_true_negatives}
-
-Sum the weights of true negatives.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of arbitrary dimensions. Will
- be cast to `bool`.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose dimensions must match
- `predictions`. Will be cast to `bool`.
-* <b>`weights`</b>: Optional `Tensor` whose rank is either 0, or the same rank as
- `labels`, and must be broadcastable to `labels` (i.e., all dimensions
- must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value_tensor`</b>: A `Tensor` representing the current value of the metric.
-* <b>`update_op`</b>: An operation that accumulates the error from a batch of data.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_true_negatives_at_thresholds(predictions, labels, thresholds, weights=None)` {#streaming_true_negatives_at_thresholds}
-
-
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_true_positives(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_true_positives}
-
-Sum the weights of true positives.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of arbitrary dimensions. Will
- be cast to `bool`.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose dimensions must match
- `predictions`. Will be cast to `bool`.
-* <b>`weights`</b>: Optional `Tensor` whose rank is either 0, or the same rank as
- `labels`, and must be broadcastable to `labels` (i.e., all dimensions
- must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value_tensor`</b>: A `Tensor` representing the current value of the metric.
-* <b>`update_op`</b>: An operation that accumulates the error from a batch of data.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
-
-- - -
-
-### `tf.contrib.metrics.streaming_true_positives_at_thresholds(predictions, labels, thresholds, weights=None)` {#streaming_true_positives_at_thresholds}
-
-
-
-
-- - -
-
-### `tf.contrib.metrics.auc_using_histogram(boolean_labels, scores, score_range, nbins=100, collections=None, check_shape=True, name=None)` {#auc_using_histogram}
-
-AUC computed by maintaining histograms.
-
-Rather than computing AUC directly, this Op maintains Variables containing
-histograms of the scores associated with `True` and `False` labels. By
-comparing these the AUC is generated, with some discretization error.
-See: "Efficient AUC Learning Curve Calculation" by Bouckaert.
-
-This AUC Op updates in `O(batch_size + nbins)` time and works well even with
-large class imbalance. The accuracy is limited by discretization error due
-to the finite number of bins. If scores are concentrated in fewer bins,
-accuracy is lower. If this is a concern, we recommend trying different
-numbers of bins and comparing results.
-
-##### Args:
-
-
-* <b>`boolean_labels`</b>: 1-D boolean `Tensor`. Entry is `True` if the corresponding
- record is in class.
-* <b>`scores`</b>: 1-D numeric `Tensor`, same shape as boolean_labels.
-* <b>`score_range`</b>: `Tensor` of shape `[2]`, same dtype as `scores`. The min/max
- values of score that we expect. Scores outside range will be clipped.
-* <b>`nbins`</b>: Integer number of bins to use. Accuracy strictly increases as the
- number of bins increases.
-* <b>`collections`</b>: List of graph collections keys. Internal histogram Variables
- are added to these collections. Defaults to `[GraphKeys.LOCAL_VARIABLES]`.
-* <b>`check_shape`</b>: Boolean. If `True`, do a runtime shape check on the scores
- and labels.
-* <b>`name`</b>: A name for this Op. Defaults to "auc_using_histogram".
-
-##### Returns:
-
-
-* <b>`auc`</b>: `float32` scalar `Tensor`. Fetching this converts internal histograms
- to auc value.
-* <b>`update_op`</b>: `Op`, when run, updates internal histograms.
-
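-An illustrative sketch (labels and scores invented for the example):
-
-```python
-import tensorflow as tf
-
-labels = tf.constant([True, False, True, False, True])
-scores = tf.constant([0.9, 0.2, 0.8, 0.4, 0.6])
-
-auc, update_op = tf.contrib.metrics.auc_using_histogram(
-    labels, scores, score_range=[0., 1.], nbins=100)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())  # histograms are local variables
-  sess.run(update_op)   # fold this batch into the histograms
-  print(sess.run(auc))  # ~1.0 here: every positive outscores every negative
-```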
-
-- - -
-
-### `tf.contrib.metrics.accuracy(predictions, labels, weights=None)` {#accuracy}
-
-Computes the percentage of times that predictions match labels.
-
-##### Args:
-
-
-* <b>`predictions`</b>: the predicted values, a `Tensor` whose dtype and shape
- match `labels`.
-* <b>`labels`</b>: the ground truth values, a `Tensor` of any shape and
- bool, integer, or string dtype.
-* <b>`weights`</b>: None or `Tensor` of float values to reweight the accuracy.
-
-##### Returns:
-
- Accuracy `Tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if dtypes don't match or
- if dtype is not bool, integer, or string.
-
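-For example (this metric is a plain `Tensor`, not a streaming
-value/update pair):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([1, 0, 2, 1])
-labels = tf.constant([1, 1, 2, 1])
-
-acc = tf.contrib.metrics.accuracy(predictions, labels)
-
-with tf.Session() as sess:
-  print(sess.run(acc))  # 0.75: three of the four predictions match
-```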
-
-- - -
-
-### `tf.contrib.metrics.aggregate_metrics(*value_update_tuples)` {#aggregate_metrics}
-
-Aggregates the metric value tensors and update ops into two lists.
-
-##### Args:
-
-
-* <b>`*value_update_tuples`</b>: a variable number of tuples, each of which
- contains the pair of (value_tensor, update_op) from a streaming metric.
-
-##### Returns:
-
- A list of value `Tensor` objects and a list of update ops.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `value_update_tuples` is empty.
-
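-A minimal sketch (inputs invented for the example):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([0.2, 0.8])
-labels = tf.constant([0.0, 1.0])
-
-value_ops, update_ops = tf.contrib.metrics.aggregate_metrics(
-    tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels),
-    tf.contrib.metrics.streaming_root_mean_squared_error(predictions, labels))
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_ops)        # run every update op in one call
-  print(sess.run(value_ops))  # [0.2, 0.2]
-```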
-
-- - -
-
-### `tf.contrib.metrics.aggregate_metric_map(names_to_tuples)` {#aggregate_metric_map}
-
-Aggregates the metric names to tuple dictionary.
-
-This function is useful for pairing metric names with their associated value
-and update ops when the list of metrics is long. For example:
-
-```python
- metrics_to_values, metrics_to_updates = slim.metrics.aggregate_metric_map({
-      'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(
-          predictions, labels, weights),
-      'Mean Relative Error': slim.metrics.streaming_mean_relative_error(
-          predictions, labels, labels, weights),  # normalizer=labels
-      'RMSE Linear': slim.metrics.streaming_root_mean_squared_error(
-          predictions, labels, weights),
-      'RMSE Log': slim.metrics.streaming_root_mean_squared_error(
-          predictions, labels, weights),
- })
-```
-
-##### Args:
-
-
-* <b>`names_to_tuples`</b>: a map of metric names to tuples, each of which
- contains the pair of (value_tensor, update_op) from a streaming metric.
-
-##### Returns:
-
- A dictionary from metric names to value ops and a dictionary from metric
- names to update ops.
-
-
-- - -
-
-### `tf.contrib.metrics.confusion_matrix(labels, predictions, num_classes=None, dtype=tf.int32, name=None, weights=None)` {#confusion_matrix}
-
-Deprecated. Use tf.confusion_matrix instead.
-
-
-- - -
-
-### `tf.contrib.metrics.set_difference(a, b, aminusb=True, validate_indices=True)` {#set_difference}
-
-Compute set difference of elements in last dimension of `a` and `b`.
-
-All but the last dimension of `a` and `b` must match.
-
-Example:
-
-```python
- a = [
- [
- [
- [1, 2],
- [3],
- ],
- [
- [4],
- [5, 6],
- ],
- ],
- ]
- b = [
- [
- [
- [1, 3],
- [2],
- ],
- [
- [4, 5],
- [5, 6, 7, 8],
- ],
- ],
- ]
- set_difference(a, b, aminusb=True) = [
- [
- [
- [2],
- [3],
- ],
- [
- [],
- [],
- ],
- ],
- ]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices
- must be sorted in row-major order.
-* <b>`b`</b>: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices
- must be sorted in row-major order.
-* <b>`aminusb`</b>: Whether to subtract `b` from `a`, vs vice versa.
-* <b>`validate_indices`</b>: Whether to validate the order and range of sparse indices
- in `a` and `b`.
-
-##### Returns:
-
- A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but
- the last dimension the same. Elements along the last dimension contain the
- differences.
-
-
-- - -
-
-### `tf.contrib.metrics.set_intersection(a, b, validate_indices=True)` {#set_intersection}
-
-Compute set intersection of elements in last dimension of `a` and `b`.
-
-All but the last dimension of `a` and `b` must match.
-
-Example:
-
-```python
- a = [
- [
- [
- [1, 2],
- [3],
- ],
- [
- [4],
- [5, 6],
- ],
- ],
- ]
- b = [
- [
- [
- [1, 3],
- [2],
- ],
- [
- [4, 5],
- [5, 6, 7, 8],
- ],
- ],
- ]
- set_intersection(a, b) = [
- [
- [
- [1],
- [],
- ],
- [
- [4],
- [5, 6],
- ],
- ],
- ]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices
- must be sorted in row-major order.
-* <b>`b`</b>: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices
- must be sorted in row-major order.
-* <b>`validate_indices`</b>: Whether to validate the order and range of sparse indices
- in `a` and `b`.
-
-##### Returns:
-
- A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but
- the last dimension the same. Elements along the last dimension contain the
- intersections.
-
-
-- - -
-
-### `tf.contrib.metrics.set_size(a, validate_indices=True)` {#set_size}
-
-Compute number of unique elements along last dimension of `a`.
-
-##### Args:
-
-
-* <b>`a`</b>: `SparseTensor`, with indices sorted in row-major order.
-* <b>`validate_indices`</b>: Whether to validate the order and range of sparse indices
- in `a`.
-
-##### Returns:
-
- `int32` `Tensor` of set sizes. For `a` ranked `n`, this is a `Tensor` with
- rank `n-1`, and the same 1st `n-1` dimensions as `a`. Each value is the
- number of unique elements in the corresponding `[0...n-1]` dimension of `a`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `a` is of an invalid type.
-
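-For example, treating the last dimension of a `SparseTensor` as a set:
-
-```python
-import tensorflow as tf
-
-# A 2x3 collection of sets: row 0 holds {1, 9}, row 1 holds {2}.
-a = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0]],
-                    values=[1, 9, 2],
-                    dense_shape=[2, 3])
-
-sizes = tf.contrib.metrics.set_size(a)
-
-with tf.Session() as sess:
-  print(sess.run(sizes))  # [2 1]
-```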
-
-- - -
-
-### `tf.contrib.metrics.set_union(a, b, validate_indices=True)` {#set_union}
-
-Compute set union of elements in last dimension of `a` and `b`.
-
-All but the last dimension of `a` and `b` must match.
-
-Example:
-
-```python
- a = [
- [
- [
- [1, 2],
- [3],
- ],
- [
- [4],
- [5, 6],
- ],
- ],
- ]
- b = [
- [
- [
- [1, 3],
- [2],
- ],
- [
- [4, 5],
- [5, 6, 7, 8],
- ],
- ],
- ]
- set_union(a, b) = [
- [
- [
- [1, 2, 3],
- [2, 3],
- ],
- [
- [4, 5],
- [5, 6, 7, 8],
- ],
- ],
- ]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices
- must be sorted in row-major order.
-* <b>`b`</b>: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices
- must be sorted in row-major order.
-* <b>`validate_indices`</b>: Whether to validate the order and range of sparse indices
- in `a` and `b`.
-
-##### Returns:
-
- A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but
- the last dimension the same. Elements along the last dimension contain the
- unions.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.opt.md b/tensorflow/g3doc/api_docs/python/contrib.opt.md
deleted file mode 100644
index e93e3f4571..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.opt.md
+++ /dev/null
@@ -1,454 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Optimization (contrib)
-[TOC]
-
-A module containing optimization routines.
-
-## Other Functions and Classes
-- - -
-
-### `class tf.contrib.opt.ExternalOptimizerInterface` {#ExternalOptimizerInterface}
-
-Base class for interfaces with external optimization algorithms.
-
-Subclass this and implement `_minimize` in order to wrap a new optimization
-algorithm.
-
-`ExternalOptimizerInterface` should not be instantiated directly; instead use
-e.g. `ScipyOptimizerInterface`.
-
-- - -
-
-#### `tf.contrib.opt.ExternalOptimizerInterface.__init__(loss, var_list=None, equalities=None, inequalities=None, **optimizer_kwargs)` {#ExternalOptimizerInterface.__init__}
-
-Initialize a new interface instance.
-
-##### Args:
-
-
-* <b>`loss`</b>: A scalar `Tensor` to be minimized.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`equalities`</b>: Optional list of equality constraint scalar `Tensor`s to be
- held equal to zero.
-* <b>`inequalities`</b>: Optional list of inequality constraint scalar `Tensor`s
- to be kept nonnegative.
-* <b>`**optimizer_kwargs`</b>: Other subclass-specific keyword arguments.
-
-
-
-- - -
-
-#### `tf.contrib.opt.ExternalOptimizerInterface.minimize(session=None, feed_dict=None, fetches=None, step_callback=None, loss_callback=None)` {#ExternalOptimizerInterface.minimize}
-
-Minimize a scalar `Tensor`.
-
-Variables subject to optimization are updated in-place at the end of
-optimization.
-
-Note that this method does *not* just return a minimization `Op`, unlike
-`Optimizer.minimize()`; instead it actually performs minimization by
-executing commands to control a `Session`.
-
-##### Args:
-
-
-* <b>`session`</b>: A `Session` instance.
-* <b>`feed_dict`</b>: A feed dict to be passed to calls to `session.run`.
-* <b>`fetches`</b>: A list of `Tensor`s to fetch and supply to `loss_callback`
- as positional arguments.
-* <b>`step_callback`</b>: A function to be called at each optimization step;
- arguments are the current values of all optimization variables
- flattened into a single vector.
-* <b>`loss_callback`</b>: A function to be called every time the loss and gradients
- are computed, with evaluated fetches supplied as positional arguments.
-
-
-
-- - -
-
-### `class tf.contrib.opt.MovingAverageOptimizer` {#MovingAverageOptimizer}
-
-Optimizer that computes a moving average of the variables.
-
-Empirically it has been found that using the moving average of the trained
-parameters of a deep network is better than using its trained parameters
-directly. This optimizer allows you to compute this moving average and swap
-the variables at save time so that any code outside of the training loop will
-use by default the averaged values instead of the original ones.
-
-Example of usage:
-
-```python
-
-# Encapsulate your favorite optimizer (here the momentum one)
-# inside the MovingAverageOptimizer.
-opt = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
-opt = tf.contrib.opt.MovingAverageOptimizer(opt)
-# Then create your model and all its variables.
-model = build_model()
-# Add the training op that optimizes using opt.
-# This needs to be called before swapping_saver().
-opt.minimize(cost, var_list)
-# Then create your saver like this:
-saver = opt.swapping_saver()
-# Pass it to your training loop.
-slim.learning.train(
-    model,
-    ...
-    saver=saver)
-```
-
-Note that for evaluation, the normal saver should be used instead of
-swapping_saver().
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.__init__(opt, average_decay=0.9999, num_updates=None, sequential_update=True)` {#MovingAverageOptimizer.__init__}
-
-Construct a new MovingAverageOptimizer.
-
-##### Args:
-
-
-* <b>`opt`</b>: A tf.Optimizer that will be used to compute and apply gradients.
-* <b>`average_decay`</b>: Float. Decay to use to maintain the moving averages
- of trained variables.
- See tf.train.ExponentialMovingAverage for details.
-* <b>`num_updates`</b>: Optional count of number of updates applied to variables.
- See tf.train.ExponentialMovingAverage for details.
-* <b>`sequential_update`</b>: Bool. If False, will compute the moving average at the
- same time as the model is updated, potentially doing
- benign data races.
- If True, will update the moving average after gradient
- updates.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#MovingAverageOptimizer.apply_gradients}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#MovingAverageOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything else than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.get_name()` {#MovingAverageOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.get_slot(var, name)` {#MovingAverageOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.get_slot_names()` {#MovingAverageOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#MovingAverageOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.swapping_saver(var_list=None, name='swapping_saver', **kwargs)` {#MovingAverageOptimizer.swapping_saver}
-
-Create a saver swapping moving averages and variables.
-
-You should use this saver during training. It will save the moving averages
-of the trained parameters under the original parameter names. For
-evaluations or inference you should use a regular saver and it will
-automatically use the moving averages for the trained variables.
-
-You must call this function after all variables have been created and after
-you have called Optimizer.minimize().
-
-##### Args:
-
-
-* <b>`var_list`</b>: List of variables to save, as per `Saver()`.
- If set to None, will save all the variables that have been
- created before this call.
-* <b>`name`</b>: The name of the saver.
-* <b>`**kwargs`</b>: Keyword arguments of `Saver()`.
-
-##### Returns:
-
- A `tf.Saver` object.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If apply_gradients or minimize has not been called before.
-
-
-
-- - -
-
-### `class tf.contrib.opt.ScipyOptimizerInterface` {#ScipyOptimizerInterface}
-
-Wrapper allowing `scipy.optimize.minimize` to operate a `tf.Session`.
-
-Example:
-
-```python
-vector = tf.Variable([7., 7.], name='vector')
-
-# Make vector norm as small as possible.
-loss = tf.reduce_sum(tf.square(vector))
-
-optimizer = ScipyOptimizerInterface(loss, options={'maxiter': 100})
-
-with tf.Session() as session:
- optimizer.minimize(session)
-
-# The value of vector should now be [0., 0.].
-```
-
-Example with constraints:
-
-```python
-vector = tf.Variable([7., 7.], name='vector')
-
-# Make vector norm as small as possible.
-loss = tf.reduce_sum(tf.square(vector))
-# Ensure the vector's y component is = 1.
-equalities = [vector[1] - 1.]
-# Ensure the vector's x component is >= 1.
-inequalities = [vector[0] - 1.]
-
-# Our default SciPy optimization algorithm, L-BFGS-B, does not support
-# general constraints. Thus we use SLSQP instead.
-optimizer = ScipyOptimizerInterface(
- loss, equalities=equalities, inequalities=inequalities, method='SLSQP')
-
-with tf.Session() as session:
- optimizer.minimize(session)
-
-# The value of vector should now be [1., 1.].
-```
-- - -
-
-#### `tf.contrib.opt.ScipyOptimizerInterface.__init__(loss, var_list=None, equalities=None, inequalities=None, **optimizer_kwargs)` {#ScipyOptimizerInterface.__init__}
-
-Initialize a new interface instance.
-
-##### Args:
-
-
-* <b>`loss`</b>: A scalar `Tensor` to be minimized.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`equalities`</b>: Optional list of equality constraint scalar `Tensor`s to be
- held equal to zero.
-* <b>`inequalities`</b>: Optional list of inequality constraint scalar `Tensor`s
- to be kept nonnegative.
-* <b>`**optimizer_kwargs`</b>: Other subclass-specific keyword arguments.
-
-
-- - -
-
-#### `tf.contrib.opt.ScipyOptimizerInterface.minimize(session=None, feed_dict=None, fetches=None, step_callback=None, loss_callback=None)` {#ScipyOptimizerInterface.minimize}
-
-Minimize a scalar `Tensor`.
-
-Variables subject to optimization are updated in-place at the end of
-optimization.
-
-Note that this method does *not* just return a minimization `Op`, unlike
-`Optimizer.minimize()`; instead it actually performs minimization by
-executing commands to control a `Session`.
-
-##### Args:
-
-
-* <b>`session`</b>: A `Session` instance.
-* <b>`feed_dict`</b>: A feed dict to be passed to calls to `session.run`.
-* <b>`fetches`</b>: A list of `Tensor`s to fetch and supply to `loss_callback`
- as positional arguments.
-* <b>`step_callback`</b>: A function to be called at each optimization step;
- arguments are the current values of all optimization variables
- flattened into a single vector.
-* <b>`loss_callback`</b>: A function to be called every time the loss and gradients
- are computed, with evaluated fetches supplied as positional arguments.
-
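-A sketch of using the callbacks to trace progress (the callback bodies here
-are illustrative):
-
-```python
-import tensorflow as tf
-
-vector = tf.Variable([7., 7.], name='vector')
-loss = tf.reduce_sum(tf.square(vector))
-
-optimizer = tf.contrib.opt.ScipyOptimizerInterface(
-    loss, options={'maxiter': 100})
-
-def step_callback(position):
-  # `position` is all optimization variables flattened into one vector.
-  print('current point:', position)
-
-def loss_callback(loss_value):
-  # Receives the evaluated `fetches` as positional arguments.
-  print('loss:', loss_value)
-
-with tf.Session() as session:
-  session.run(tf.global_variables_initializer())
-  optimizer.minimize(session, fetches=[loss],
-                     step_callback=step_callback,
-                     loss_callback=loss_callback)
-```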
-
-
-- - -
-
-### `class tf.contrib.opt.VariableClippingOptimizer` {#VariableClippingOptimizer}
-
-Wrapper optimizer that clips the norm of specified variables after update.
-
-This optimizer delegates all aspects of gradient calculation and application
-to an underlying optimizer. After applying gradients, this optimizer then
-clips the variable to have a maximum L2 norm along specified dimensions.
-NB: this is quite different from clipping the norm of the gradients.
-
-Multiple instances of `VariableClippingOptimizer` may be chained to specify
-different max norms for different subsets of variables.
-
-This is more efficient at serving-time than using normalization during
-embedding lookup, at the expense of more expensive training and fewer
-guarantees about the norms.
-
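-A minimal sketch, using only the documented constructor together with the
-`compute_gradients`/`apply_gradients` pair (shapes invented for the example):
-
-```python
-import tensorflow as tf
-
-embeddings = tf.Variable(tf.random_normal([100, 8]), name='embeddings')
-ids = tf.constant([3, 7])
-loss = tf.reduce_sum(tf.square(tf.nn.embedding_lookup(embeddings, ids)))
-
-# After each update, clip every embedding row (its L2-norm is computed
-# along dimension 1) back to max_norm.
-sgd = tf.train.GradientDescentOptimizer(0.1)
-opt = tf.contrib.opt.VariableClippingOptimizer(
-    sgd, vars_to_clip_dims={embeddings: [1]}, max_norm=1.0)
-train_op = opt.apply_gradients(opt.compute_gradients(loss))
-```
-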
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.__init__(opt, vars_to_clip_dims, max_norm, use_locking=False, colocate_clip_ops_with_vars=False, name='VariableClipping')` {#VariableClippingOptimizer.__init__}
-
-Construct a new clip-norm optimizer.
-
-##### Args:
-
-
-* <b>`opt`</b>: The actual optimizer that will be used to compute and apply the
- gradients. Must be one of the Optimizer classes.
-* <b>`vars_to_clip_dims`</b>: A dict with keys as Variables and values as lists
- of dimensions along which to compute the L2-norm. See
- `tf.clip_by_norm` for more details.
-* <b>`max_norm`</b>: The L2-norm to clip to, for all variables specified.
-* <b>`use_locking`</b>: If `True` use locks for clip update operations.
-* <b>`colocate_clip_ops_with_vars`</b>: If `True`, try colocating the clip norm
- ops with the corresponding variable.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "VariableClipping".
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#VariableClippingOptimizer.apply_gradients}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.compute_gradients(*args, **kwargs)` {#VariableClippingOptimizer.compute_gradients}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.get_slot(*args, **kwargs)` {#VariableClippingOptimizer.get_slot}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.get_slot_names(*args, **kwargs)` {#VariableClippingOptimizer.get_slot_names}
-
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.rnn.md b/tensorflow/g3doc/api_docs/python/contrib.rnn.md
deleted file mode 100644
index 6107639c95..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.rnn.md
+++ /dev/null
@@ -1,2203 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# RNN and Cells (contrib)
-[TOC]
-
-RNN Cells and additional RNN operations. See @{$python/contrib.rnn} guide.
-
-- - -
-
-### `class tf.contrib.rnn.RNNCell` {#RNNCell}
-
-Abstract object representing an RNN cell.
-
-The definition of cell in this package differs from the definition used in the
-literature. In the literature, cell refers to an object with a single scalar
-output. The definition in this package refers to a horizontal array of such
-units.
-
-An RNN cell, in the most abstract setting, is anything that has
-a state and performs some operation that takes a matrix of inputs.
-This operation results in an output matrix with `self.output_size` columns.
-If `self.state_size` is an integer, this operation also results in a new
-state matrix with `self.state_size` columns. If `self.state_size` is a
-tuple of integers, then it results in a tuple of `len(state_size)` state
-matrices, each with a column size corresponding to values in `state_size`.
-
-This module provides a number of basic commonly used RNN cells, such as
-LSTM (Long Short Term Memory) or GRU (Gated Recurrent Unit), and a number
-of operators that allow adding dropout, projections, or embeddings for inputs.
-Constructing multi-layer cells is supported by the class `MultiRNNCell`,
-or by calling the `rnn` ops several times. Every `RNNCell` must have the
-properties below and implement `__call__` with the following signature.
-- - -
-
-#### `tf.contrib.rnn.RNNCell.__call__(inputs, state, scope=None)` {#RNNCell.__call__}
-
-Run this RNN cell on inputs, starting from the given state.
-
-##### Args:
-
-
-* <b>`inputs`</b>: `2-D` tensor with shape `[batch_size x input_size]`.
-* <b>`state`</b>: if `self.state_size` is an integer, this should be a `2-D Tensor`
- with shape `[batch_size x self.state_size]`. Otherwise, if
- `self.state_size` is a tuple of integers, this should be a tuple
- with shapes `[batch_size x s]` for each `s` in `self.state_size`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to class name.
-
-##### Returns:
-
- A pair containing:
-
- - Output: A `2-D` tensor with shape `[batch_size x self.output_size]`.
- - New state: Either a single `2-D` tensor, or a tuple of tensors matching
- the arity and shapes of `state`.
-
-
-- - -
-
-#### `tf.contrib.rnn.RNNCell.output_size` {#RNNCell.output_size}
-
-Integer or TensorShape: size of outputs produced by this cell.
-
-
-- - -
-
-#### `tf.contrib.rnn.RNNCell.state_size` {#RNNCell.state_size}
-
-size(s) of state(s) used by this cell.
-
-It can be represented by an Integer, a TensorShape or a tuple of Integers
-or TensorShapes.
-
-
-- - -
-
-#### `tf.contrib.rnn.RNNCell.zero_state(batch_size, dtype)` {#RNNCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
- the shapes `[batch_size x s]` for each s in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.BasicRNNCell` {#BasicRNNCell}
-
-The most basic RNN cell.
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.__call__(inputs, state, scope=None)` {#BasicRNNCell.__call__}
-
-Most basic RNN: output = new_state = act(W * input + U * state + B).
-
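-As a sketch of that equation in use (sizes invented for the example):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.BasicRNNCell(num_units=3)
-inputs = tf.placeholder(tf.float32, [2, 4])  # batch of 2, input size 4
-state = cell.zero_state(batch_size=2, dtype=tf.float32)
-output, new_state = cell(inputs, state)      # each has shape [2, 3]
-# For this cell, `output` and `new_state` are the same tensor.
-```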
-
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.__init__(num_units, input_size=None, activation=tanh)` {#BasicRNNCell.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.output_size` {#BasicRNNCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.state_size` {#BasicRNNCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.zero_state(batch_size, dtype)` {#BasicRNNCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
- the shapes `[batch_size x s]` for each s in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.BasicLSTMCell` {#BasicLSTMCell}
-
-Basic LSTM recurrent network cell.
-
-The implementation is based on: http://arxiv.org/abs/1409.2329.
-
-We add forget_bias (default: 1) to the biases of the forget gate in order to
-reduce the scale of forgetting at the beginning of training.
-
-It does not support cell clipping, a projection layer, or peep-hole
-connections: it is the basic baseline.
-
-For advanced models, please use the full LSTMCell that follows.
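-
-As a sketch of typical use with `tf.nn.dynamic_rnn` (shapes invented for the
-example):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=64)
-inputs = tf.placeholder(tf.float32, [None, 20, 8])  # [batch, time, features]
-outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
-# outputs: [batch, 20, 64]; final_state is an LSTMStateTuple of
-# (c, h), each [batch, 64], since state_is_tuple defaults to True.
-```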
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.__call__(inputs, state, scope=None)` {#BasicLSTMCell.__call__}
-
-Long short-term memory cell (LSTM).
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=tanh)` {#BasicLSTMCell.__init__}
-
-Initialize the basic LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
-* <b>`input_size`</b>: Deprecated and unused.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
- the `c_state` and `m_state`. If False, they are concatenated
- along the column axis. The latter behavior will soon be deprecated.
-* <b>`activation`</b>: Activation function of the inner states.
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.output_size` {#BasicLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.state_size` {#BasicLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.zero_state(batch_size, dtype)` {#BasicLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
- the shapes `[batch_size x s]` for each s in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.GRUCell` {#GRUCell}
-
-Gated Recurrent Unit cell (cf. http://arxiv.org/abs/1406.1078).
-- - -
-
-#### `tf.contrib.rnn.GRUCell.__call__(inputs, state, scope=None)` {#GRUCell.__call__}
-
-Gated recurrent unit (GRU) with nunits cells.
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUCell.__init__(num_units, input_size=None, activation=tanh)` {#GRUCell.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUCell.output_size` {#GRUCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUCell.state_size` {#GRUCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUCell.zero_state(batch_size, dtype)` {#GRUCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
- the shapes `[batch_size x s]` for each s in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.LSTMCell` {#LSTMCell}
-
-Long short-term memory unit (LSTM) recurrent network cell.
-
-The default non-peephole implementation is based on:
-
- http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
-
-S. Hochreiter and J. Schmidhuber.
-"Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.
-
-The peephole implementation is based on:
-
- https://research.google.com/pubs/archive/43905.pdf
-
-Hasim Sak, Andrew Senior, and Francoise Beaufays.
-"Long short-term memory recurrent neural network architectures for
- large scale acoustic modeling." INTERSPEECH, 2014.
-
-The class uses optional peep-hole connections, optional cell clipping, and
-an optional projection layer.
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.__call__(inputs, state, scope=None)` {#LSTMCell.__call__}
-
-Run one step of LSTM.
-
-##### Args:
-
-
-* <b>`inputs`</b>: input Tensor, 2D, `batch x input_size`.
-* <b>`state`</b>: if `state_is_tuple` is False, this must be a state Tensor,
- `2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
- tuple of state Tensors, both `2-D`, with column sizes `c_state` and
- `m_state`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "lstm_cell".
-
-##### Returns:
-
- A tuple containing:
-
- - A `2-D, [batch x output_dim]`, Tensor representing the output of the
-   LSTM after reading `inputs` when previous state was `state`. Here
-   `output_dim` is `num_proj` if `num_proj` was set, `num_units` otherwise.
- - Tensor(s) representing the new state of LSTM after reading `inputs` when
- the previous state was `state`. Same type and shape(s) as `state`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input size cannot be inferred from inputs via
- static shape inference.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=None, num_proj_shards=None, forget_bias=1.0, state_is_tuple=True, activation=tanh)` {#LSTMCell.__init__}
-
-Initialize the parameters for an LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell
-* <b>`input_size`</b>: Deprecated and unused.
-* <b>`use_peepholes`</b>: bool, set True to enable diagonal/peephole connections.
-* <b>`cell_clip`</b>: (optional) A float value, if provided the cell state is clipped
- by this value prior to the cell output activation.
-* <b>`initializer`</b>: (optional) The initializer to use for the weight and
- projection matrices.
-* <b>`num_proj`</b>: (optional) int, The output dimensionality for the projection
- matrices. If None, no projection is performed.
-* <b>`proj_clip`</b>: (optional) A float value. If `num_proj > 0` and `proj_clip` is
- provided, then the projected values are clipped elementwise to within
- `[-proj_clip, proj_clip]`.
-* <b>`num_unit_shards`</b>: Deprecated, will be removed by Jan. 2017.
- Use a variable_scope partitioner instead.
-* <b>`num_proj_shards`</b>: Deprecated, will be removed by Jan. 2017.
- Use a variable_scope partitioner instead.
-* <b>`forget_bias`</b>: Biases of the forget gate are initialized by default to 1
- in order to reduce the scale of forgetting at the beginning of
- the training.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
- the `c_state` and `m_state`. If False, they are concatenated
- along the column axis. This latter behavior will soon be deprecated.
-* <b>`activation`</b>: Activation function of the inner states.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.output_size` {#LSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.state_size` {#LSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.zero_state(batch_size, dtype)` {#LSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
- the shapes `[batch_size x s]` for each s in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.LayerNormBasicLSTMCell` {#LayerNormBasicLSTMCell}
-
-LSTM unit with layer normalization and recurrent dropout.
-
-This class adds layer normalization and recurrent dropout to a
-basic LSTM unit. Layer normalization implementation is based on:
-
- https://arxiv.org/abs/1607.06450.
-
-"Layer Normalization"
-Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton
-
-and is applied before the internal nonlinearities.
-Recurrent dropout is based on:
-
- https://arxiv.org/abs/1603.05118
-
-"Recurrent Dropout without Memory Loss"
-Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth.
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.__call__(inputs, state, scope=None)` {#LayerNormBasicLSTMCell.__call__}
-
-LSTM cell with layer normalization and recurrent dropout.
-
-
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, activation=tanh, layer_norm=True, norm_gain=1.0, norm_shift=0.0, dropout_keep_prob=1.0, dropout_prob_seed=None)` {#LayerNormBasicLSTMCell.__init__}
-
-Initializes the basic LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
-* <b>`input_size`</b>: Deprecated and unused.
-* <b>`activation`</b>: Activation function of the inner states.
-* <b>`layer_norm`</b>: If `True`, layer normalization will be applied.
-* <b>`norm_gain`</b>: float, The layer normalization gain initial value. If
- `layer_norm` has been set to `False`, this argument will be ignored.
-* <b>`norm_shift`</b>: float, The layer normalization shift initial value. If
- `layer_norm` has been set to `False`, this argument will be ignored.
-* <b>`dropout_keep_prob`</b>: unit Tensor or float between 0 and 1 representing the
- recurrent dropout probability value. If float and 1.0, no dropout will
- be applied.
-* <b>`dropout_prob_seed`</b>: (optional) integer, the randomness seed.
-
-
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.output_size` {#LayerNormBasicLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.state_size` {#LayerNormBasicLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.zero_state(batch_size, dtype)` {#LayerNormBasicLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
- the shapes `[batch_size x s]` for each s in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.LSTMStateTuple` {#LSTMStateTuple}
-
-Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
-
-Stores two elements: `(c, h)`, in that order.
-
-Only used when `state_is_tuple=True`.
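-
-For example:
-
-```python
-import tensorflow as tf
-
-c = tf.zeros([32, 64])  # cell state
-h = tf.zeros([32, 64])  # output (hidden) state
-state = tf.contrib.rnn.LSTMStateTuple(c, h)
-print(state.c is c, state.h is h)  # True True
-```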
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.__getnewargs__()` {#LSTMStateTuple.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.__getstate__()` {#LSTMStateTuple.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.__new__(_cls, c, h)` {#LSTMStateTuple.__new__}
-
-Create new instance of LSTMStateTuple(c, h)
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.__repr__()` {#LSTMStateTuple.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.c` {#LSTMStateTuple.c}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.dtype` {#LSTMStateTuple.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.h` {#LSTMStateTuple.h}
-
-Alias for field number 1
-
-
-
-- - -
-
-### `class tf.contrib.rnn.MultiRNNCell` {#MultiRNNCell}
-
-RNN cell composed sequentially of multiple simple cells.
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.__call__(inputs, state, scope=None)` {#MultiRNNCell.__call__}
-
-Run this multi-layer cell on inputs, starting from state.
-
-
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.__init__(cells, state_is_tuple=True)` {#MultiRNNCell.__init__}
-
-Create an RNN cell composed sequentially of a number of RNNCells.
-
-##### Args:
-
-
-* <b>`cells`</b>: list of RNNCells that will be composed in this order.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are n-tuples, where
- `n = len(cells)`. If False, the states are all
- concatenated along the column axis. This latter behavior will soon be
- deprecated.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if cells is empty (not allowed), or at least one of the cells
- returns a state tuple but the flag `state_is_tuple` is `False`.
-
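-For example, stacking two LSTM layers (each layer needs its own cell
-instance):
-
-```python
-import tensorflow as tf
-
-cells = [tf.contrib.rnn.BasicLSTMCell(64) for _ in range(2)]
-stacked = tf.contrib.rnn.MultiRNNCell(cells)
-# state_size is a 2-tuple of LSTMStateTuples, one per layer.
-inputs = tf.placeholder(tf.float32, [None, 30, 16])
-outputs, state = tf.nn.dynamic_rnn(stacked, inputs, dtype=tf.float32)
-```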
-
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.output_size` {#MultiRNNCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.state_size` {#MultiRNNCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.zero_state(batch_size, dtype)` {#MultiRNNCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
- the shapes `[batch_size x s]` for each s in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.LSTMBlockWrapper` {#LSTMBlockWrapper}
-
-This is a helper class that provides housekeeping for LSTM cells.
-
-This may be useful for alternative LSTM and similar type of cells.
-The subclasses must implement `_call_cell` method and `num_units` property.
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockWrapper.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#LSTMBlockWrapper.__call__}
-
-Run this LSTM on inputs, starting from the given state.
-
-##### Args:
-
-
-* <b>`inputs`</b>: `3-D` tensor with shape `[time_len, batch_size, input_size]`
- or a list of `time_len` tensors of shape `[batch_size, input_size]`.
-* <b>`initial_state`</b>: a tuple `(initial_cell_state, initial_output)` with tensors
- of shape `[batch_size, self._num_units]`. If this is not provided, the
- cell is expected to create a zero initial state of type `dtype`.
-* <b>`dtype`</b>: The data type for the initial state and expected output. Required
- if `initial_state` is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs. An
- `int32` or `int64` vector (tensor) of size `[batch_size]`, with values in
- `[0, time_len)`. Defaults to `time_len` for each element.
-* <b>`scope`</b>: `VariableScope` for the created subgraph; defaults to class name.
-
-##### Returns:
-
- A pair containing:
-
- - Output: A `3-D` tensor of shape `[time_len, batch_size, output_size]`
- or a list of time_len tensors of shape `[batch_size, output_size]`,
- to match the type of the `inputs`.
- - Final state: a tuple `(cell_state, output)` matching `initial_state`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: in case of shape mismatches
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockWrapper.num_units` {#LSTMBlockWrapper.num_units}
-
-Number of units in this cell (output dimension).
-
-
-
-- - -
-
-### `class tf.contrib.rnn.DropoutWrapper` {#DropoutWrapper}
-
-Operator adding dropout to inputs and outputs of the given cell.
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.__call__(inputs, state, scope=None)` {#DropoutWrapper.__call__}
-
-Run the cell with the declared dropouts.
-
-
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.__init__(cell, input_keep_prob=1.0, output_keep_prob=1.0, seed=None)` {#DropoutWrapper.__init__}
-
-Create a cell with added input and/or output dropout.
-
-Dropout is never used on the state.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, dropout is added to its inputs and/or outputs.
-* <b>`input_keep_prob`</b>: unit Tensor or float between 0 and 1, input keep
- probability; if it is float and 1, no input dropout will be added.
-* <b>`output_keep_prob`</b>: unit Tensor or float between 0 and 1, output keep
- probability; if it is float and 1, no output dropout will be added.
-* <b>`seed`</b>: (optional) integer, the randomness seed.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-* <b>`ValueError`</b>: if keep_prob is not between 0 and 1.
-
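-For example (feed `keep_prob` as 1.0 at evaluation time):
-
-```python
-import tensorflow as tf
-
-keep_prob = tf.placeholder(tf.float32)
-cell = tf.contrib.rnn.GRUCell(128)
-cell = tf.contrib.rnn.DropoutWrapper(
-    cell, input_keep_prob=keep_prob, output_keep_prob=keep_prob)
-# The recurrent state itself is never dropped out.
-```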
-
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.output_size` {#DropoutWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.state_size` {#DropoutWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.zero_state(batch_size, dtype)` {#DropoutWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
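-For example (cell and sizes hypothetical):
-
-```python
-cell = tf.contrib.rnn.BasicLSTMCell(64)
-# A zero state whose structure matches cell.state_size, batched to 32 rows.
-state = cell.zero_state(batch_size=32, dtype=tf.float32)
-```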
-
-
-- - -
-
-### `class tf.contrib.rnn.EmbeddingWrapper` {#EmbeddingWrapper}
-
-Operator adding input embedding to the given cell.
-
-Note: in many cases it may be more efficient to not use this wrapper,
-but instead concatenate the whole sequence of your inputs in time,
-do the embedding on this batch-concatenated sequence, then split it and
-feed into your RNN.
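-
-A sketch of that alternative (vocabulary and embedding sizes are hypothetical),
-embedding the whole time-major batch once and splitting afterwards:
-
-```python
-vocab_size, embed_size = 10000, 64  # hypothetical sizes
-# ids: int32 tensor of shape [time_len, batch_size] (hypothetical input).
-embedding = tf.get_variable("embedding", [vocab_size, embed_size])
-embedded = tf.nn.embedding_lookup(embedding, ids)  # [time_len, batch_size, embed_size]
-inputs = tf.unstack(embedded, axis=0)              # list of [batch_size, embed_size]
-```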
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.__call__(inputs, state, scope=None)` {#EmbeddingWrapper.__call__}
-
-Run the cell on embedded inputs.
-
-
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.__init__(cell, embedding_classes, embedding_size, initializer=None)` {#EmbeddingWrapper.__init__}
-
-Create a cell with an added input embedding.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, an embedding will be put before its inputs.
-* <b>`embedding_classes`</b>: integer, how many symbols will be embedded.
-* <b>`embedding_size`</b>: integer, the size of the vectors we embed into.
-* <b>`initializer`</b>: an initializer to use when creating the embedding;
- if None, the initializer from variable scope or a default one is used.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-* <b>`ValueError`</b>: if embedding_classes is not positive.
-
-
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.output_size` {#EmbeddingWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.state_size` {#EmbeddingWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.zero_state(batch_size, dtype)` {#EmbeddingWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.InputProjectionWrapper` {#InputProjectionWrapper}
-
-Operator adding an input projection to the given cell.
-
-Note: in many cases it may be more efficient to not use this wrapper,
-but instead concatenate the whole sequence of your inputs in time,
-do the projection on this batch-concatenated sequence, then split it.
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.__call__(inputs, state, scope=None)` {#InputProjectionWrapper.__call__}
-
-Run the input projection and then the cell.
-
-
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.__init__(cell, num_proj, input_size=None)` {#InputProjectionWrapper.__init__}
-
-Create a cell with input projection.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, a projection of inputs is added before it.
-* <b>`num_proj`</b>: Python integer. The dimension to project to.
-* <b>`input_size`</b>: Deprecated and unused.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-
-
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.output_size` {#InputProjectionWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.state_size` {#InputProjectionWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.zero_state(batch_size, dtype)` {#InputProjectionWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.OutputProjectionWrapper` {#OutputProjectionWrapper}
-
-Operator adding an output projection to the given cell.
-
-Note: in many cases it may be more efficient to not use this wrapper,
-but instead concatenate the whole sequence of your outputs in time,
-do the projection on this batch-concatenated sequence, then split it
-if needed or directly feed into a softmax.
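-
-For instance, a minimal sketch (sizes hypothetical) that projects each
-per-step output down to vocabulary-sized logits:
-
-```python
-num_units, vocab_size = 256, 10000  # hypothetical sizes
-cell = tf.contrib.rnn.GRUBlockCell(num_units)
-# Every output emitted by the cell is linearly mapped to vocab_size units.
-cell = tf.contrib.rnn.OutputProjectionWrapper(cell, output_size=vocab_size)
-```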
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.__call__(inputs, state, scope=None)` {#OutputProjectionWrapper.__call__}
-
-Run the cell and output projection on inputs, starting from state.
-
-
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.__init__(cell, output_size)` {#OutputProjectionWrapper.__init__}
-
-Create a cell with output projection.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, a projection to output_size is added to it.
-* <b>`output_size`</b>: integer, the size of the output after projection.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-* <b>`ValueError`</b>: if output_size is not positive.
-
-
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.output_size` {#OutputProjectionWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.state_size` {#OutputProjectionWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.zero_state(batch_size, dtype)` {#OutputProjectionWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.DeviceWrapper` {#DeviceWrapper}
-
-Operator that ensures an RNNCell runs on a particular device.
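-
-A minimal sketch (the device string is hypothetical):
-
-```python
-cell = tf.contrib.rnn.BasicLSTMCell(64)
-# Every op the wrapped cell creates is pinned to the first GPU.
-cell = tf.contrib.rnn.DeviceWrapper(cell, "/gpu:0")
-```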
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.__call__(inputs, state, scope=None)` {#DeviceWrapper.__call__}
-
-Run the cell on specified device.
-
-
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.__init__(cell, device)` {#DeviceWrapper.__init__}
-
-Construct a `DeviceWrapper` for `cell` with device `device`.
-
-Ensures the wrapped `cell` is called with `tf.device(device)`.
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of `RNNCell`.
-* <b>`device`</b>: A device string or function, for passing to `tf.device`.
-
-
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.output_size` {#DeviceWrapper.output_size}
-
-Integer or TensorShape: size of outputs produced by this cell.
-
-
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.state_size` {#DeviceWrapper.state_size}
-
-size(s) of state(s) used by this cell.
-
-It can be represented by an Integer, a TensorShape or a tuple of Integers
-or TensorShapes.
-
-
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.zero_state(batch_size, dtype)` {#DeviceWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.ResidualWrapper` {#ResidualWrapper}
-
-RNNCell wrapper that ensures cell inputs are added to the outputs.
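-
-A minimal sketch; note the cell's output size must equal its input size so
-the two can be added elementwise:
-
-```python
-num_units = 128  # must match the input depth for the residual addition
-cell = tf.contrib.rnn.BasicLSTMCell(num_units)
-cell = tf.contrib.rnn.ResidualWrapper(cell)  # output = cell_output + inputs
-```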
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.__call__(inputs, state, scope=None)` {#ResidualWrapper.__call__}
-
-Run the cell and add its inputs to its outputs.
-
-##### Args:
-
-
-* <b>`inputs`</b>: cell inputs.
-* <b>`state`</b>: cell state.
-* <b>`scope`</b>: optional cell scope.
-
-##### Returns:
-
- Tuple of cell outputs and new state.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If cell inputs and outputs have different structure (type).
-* <b>`ValueError`</b>: If cell inputs and outputs have different structure (value).
-
-
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.__init__(cell)` {#ResidualWrapper.__init__}
-
-Constructs a `ResidualWrapper` for `cell`.
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of `RNNCell`.
-
-
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.output_size` {#ResidualWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.state_size` {#ResidualWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.zero_state(batch_size, dtype)` {#ResidualWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.LSTMBlockCell` {#LSTMBlockCell}
-
-Basic LSTM recurrent network cell.
-
-The implementation is based on: http://arxiv.org/abs/1409.2329.
-
-We add `forget_bias` (default: 1) to the biases of the forget gate in order to
-reduce the scale of forgetting at the beginning of training.
-
-Unlike `core_rnn_cell.LSTMCell`, this is a monolithic op and should be much
-faster. The weight and bias matrices should be compatible as long as the
-variable scope matches.
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.__call__(x, states_prev, scope=None)` {#LSTMBlockCell.__call__}
-
-Long short-term memory cell (LSTM).
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False)` {#LSTMBlockCell.__init__}
-
-Initialize the basic LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
-* <b>`use_peephole`</b>: Whether to use peephole connections or not.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.output_size` {#LSTMBlockCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.state_size` {#LSTMBlockCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.zero_state(batch_size, dtype)` {#LSTMBlockCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.GRUBlockCell` {#GRUBlockCell}
-
-Block GRU cell implementation.
-
-The implementation is based on: http://arxiv.org/abs/1406.1078
-Computes the GRU cell forward propagation for one time step.
-
-This kernel op implements the following mathematical equations:
-
-Biases are initialized with:
-
-* `b_ru` - constant_initializer(1.0)
-* `b_c` - constant_initializer(0.0)
-
-```
-x_h_prev = [x, h_prev]
-
-[r_bar u_bar] = x_h_prev * w_ru + b_ru
-
-r = sigmoid(r_bar)
-u = sigmoid(u_bar)
-
-h_prevr = h_prev \circ r
-
-x_h_prevr = [x h_prevr]
-
-c_bar = x_h_prevr * w_c + b_c
-c = tanh(c_bar)
-
-h = (1-u) \circ c + u \circ h_prev
-```
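-
-As an illustration only, a NumPy sketch of the step implied by the equations
-above (the weights and inputs are hypothetical, not the op's real interface):
-
-```python
-import numpy as np
-
-def sigmoid(z):
-    return 1.0 / (1.0 + np.exp(-z))
-
-def gru_step(x, h_prev, w_ru, b_ru, w_c, b_c, cell_size):
-    x_h_prev = np.concatenate([x, h_prev], axis=1)
-    ru = sigmoid(x_h_prev @ w_ru + b_ru)          # gates r and u, concatenated
-    r, u = ru[:, :cell_size], ru[:, cell_size:]
-    x_h_prevr = np.concatenate([x, h_prev * r], axis=1)
-    c = np.tanh(x_h_prevr @ w_c + b_c)            # candidate state
-    return (1 - u) * c + u * h_prev               # new hidden state h
-```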
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.__call__(x, h_prev, scope=None)` {#GRUBlockCell.__call__}
-
-GRU cell.
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.__init__(cell_size)` {#GRUBlockCell.__init__}
-
-Initialize the Block GRU cell.
-
-##### Args:
-
-
-* <b>`cell_size`</b>: int, GRU cell size.
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.output_size` {#GRUBlockCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.state_size` {#GRUBlockCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.zero_state(batch_size, dtype)` {#GRUBlockCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.FusedRNNCell` {#FusedRNNCell}
-
-Abstract object representing a fused RNN cell.
-
-A fused RNN cell represents the entire RNN expanded over the time
-dimension. In effect, this represents an entire recurrent network.
-
-Unlike RNN cells which are subclasses of `rnn_cell.RNNCell`, a `FusedRNNCell`
-operates on the entire time sequence at once, by putting the loop over time
-inside the cell. This usually leads to much more efficient, but also more
-complex and less flexible, implementations.
-
-Every `FusedRNNCell` must implement `__call__` with the following signature.
-- - -
-
-#### `tf.contrib.rnn.FusedRNNCell.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#FusedRNNCell.__call__}
-
-Run this fused RNN on inputs, starting from the given state.
-
-##### Args:
-
-
-* <b>`inputs`</b>: `3-D` tensor with shape `[time_len x batch_size x input_size]`
- or a list of `time_len` tensors of shape `[batch_size x input_size]`.
-* <b>`initial_state`</b>: either a tensor with shape `[batch_size x state_size]`
- or a tuple with shapes `[batch_size x s] for s in state_size`, if the
- cell takes tuples. If this is not provided, the cell is expected to
- create a zero initial state of type `dtype`.
-* <b>`dtype`</b>: The data type for the initial state and expected output. Required
- if `initial_state` is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs. An
- `int32` or `int64` vector (tensor) size `[batch_size]`, values in `[0,
- time_len)`.
- Defaults to `time_len` for each element.
-* <b>`scope`</b>: `VariableScope` or `string` for the created subgraph; defaults to
- class name.
-
-##### Returns:
-
- A pair containing:
-
- - Output: A `3-D` tensor of shape `[time_len x batch_size x output_size]`
- or a list of `time_len` tensors of shape `[batch_size x output_size]`,
- to match the type of the `inputs`.
- - Final state: Either a single `2-D` tensor, or a tuple of tensors
- matching the arity and shapes of `initial_state`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.FusedRNNCellAdaptor` {#FusedRNNCellAdaptor}
-
-This is an adaptor for RNNCell classes to be used with `FusedRNNCell`.
-- - -
-
-#### `tf.contrib.rnn.FusedRNNCellAdaptor.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#FusedRNNCellAdaptor.__call__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.FusedRNNCellAdaptor.__init__(cell, use_dynamic_rnn=False)` {#FusedRNNCellAdaptor.__init__}
-
-Initialize the adaptor.
-
-##### Args:
-
-
-* <b>`cell`</b>: an instance of a subclass of a `rnn_cell.RNNCell`.
-* <b>`use_dynamic_rnn`</b>: whether to use dynamic (or static) RNN.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.TimeReversedFusedRNN` {#TimeReversedFusedRNN}
-
-This is an adaptor to time-reverse a FusedRNNCell.
-
-For example,
-
-```python
-cell = tf.contrib.rnn.BasicRNNCell(10)
-fw_lstm = tf.contrib.rnn.FusedRNNCellAdaptor(cell, use_dynamic_rnn=True)
-bw_lstm = tf.contrib.rnn.TimeReversedFusedRNN(fw_lstm)
-fw_out, fw_state = fw_lstm(inputs)
-bw_out, bw_state = bw_lstm(inputs)
-```
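-
-The per-timestep forward and backward outputs can then be combined, for
-example depth-concatenated into a bidirectional representation:
-
-```python
-# Shape [time_len, batch_size, 2 * output_size].
-bidi_out = tf.concat([fw_out, bw_out], axis=2)
-```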
-- - -
-
-#### `tf.contrib.rnn.TimeReversedFusedRNN.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#TimeReversedFusedRNN.__call__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeReversedFusedRNN.__init__(cell)` {#TimeReversedFusedRNN.__init__}
-
-
-
-
-
-- - -
-
-### `class tf.contrib.rnn.LSTMBlockFusedCell` {#LSTMBlockFusedCell}
-
-FusedRNNCell implementation of LSTM.
-
-This is an extremely efficient LSTM implementation that uses a single TF op
-for the entire LSTM. It should be both faster and more memory-efficient than
-`LSTMBlockCell` defined above.
-
-The implementation is based on: http://arxiv.org/abs/1409.2329.
-
-We add `forget_bias` (default: 1) to the biases of the forget gate in order to
-reduce the scale of forgetting at the beginning of training.
-
-The variable naming is consistent with `core_rnn_cell.LSTMCell`.
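-
-A minimal usage sketch (a hypothetical time-major `inputs` tensor of shape
-`[time_len, batch_size, input_size]` is assumed):
-
-```python
-lstm = tf.contrib.rnn.LSTMBlockFusedCell(num_units=128)
-# One op runs the whole sequence; the final state is a (cell_state, output) pair.
-outputs, (final_c, final_h) = lstm(inputs, dtype=tf.float32)
-```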
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockFusedCell.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#LSTMBlockFusedCell.__call__}
-
-Run this LSTM on inputs, starting from the given state.
-
-##### Args:
-
-
-* <b>`inputs`</b>: `3-D` tensor with shape `[time_len, batch_size, input_size]`
- or a list of `time_len` tensors of shape `[batch_size, input_size]`.
-* <b>`initial_state`</b>: a tuple `(initial_cell_state, initial_output)` with tensors
- of shape `[batch_size, self._num_units]`. If this is not provided, the
- cell is expected to create a zero initial state of type `dtype`.
-* <b>`dtype`</b>: The data type for the initial state and expected output. Required
- if `initial_state` is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs. An
-    `int32` or `int64` vector (tensor) of size `[batch_size]`, with values in
-    `[0, time_len)`. Defaults to `time_len` for each element.
-* <b>`scope`</b>: `VariableScope` for the created subgraph; defaults to class name.
-
-##### Returns:
-
- A pair containing:
-
-  - Output: A `3-D` tensor of shape `[time_len, batch_size, output_size]`
-    or a list of `time_len` tensors of shape `[batch_size, output_size]`,
-    to match the type of the `inputs`.
- - Final state: a tuple `(cell_state, output)` matching `initial_state`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: in case of shape mismatches
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockFusedCell.__init__(num_units, forget_bias=1.0, cell_clip=None, use_peephole=False)` {#LSTMBlockFusedCell.__init__}
-
-Initialize the LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
-* <b>`cell_clip`</b>: clip the cell state to this value. Defaults to `3`.
-* <b>`use_peephole`</b>: Whether to use peephole connections or not.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockFusedCell.num_units` {#LSTMBlockFusedCell.num_units}
-
-Number of units in this cell (output dimension).
-
-
-
-- - -
-
-### `class tf.contrib.rnn.CoupledInputForgetGateLSTMCell` {#CoupledInputForgetGateLSTMCell}
-
-Long short-term memory unit (LSTM) recurrent network cell.
-
-The default non-peephole implementation is based on:
-
- http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
-
-S. Hochreiter and J. Schmidhuber.
-"Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.
-
-The peephole implementation is based on:
-
- https://research.google.com/pubs/archive/43905.pdf
-
-Hasim Sak, Andrew Senior, and Francoise Beaufays.
-"Long short-term memory recurrent neural network architectures for
- large scale acoustic modeling." INTERSPEECH, 2014.
-
-The coupling of input and forget gate is based on:
-
- http://arxiv.org/pdf/1503.04069.pdf
-
-Greff et al. "LSTM: A Search Space Odyssey"
-
-The class uses optional peep-hole connections, and an optional projection
-layer.
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__(inputs, state, scope=None)` {#CoupledInputForgetGateLSTMCell.__call__}
-
-Run one step of LSTM.
-
-##### Args:
-
-
-* <b>`inputs`</b>: input Tensor, 2D, batch x num_units.
-* <b>`state`</b>: if `state_is_tuple` is False, this must be a state Tensor,
- `2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
- tuple of state Tensors, both `2-D`, with column sizes `c_state` and
- `m_state`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "LSTMCell".
-
-##### Returns:
-
- A tuple containing:
- - A `2-D, [batch x output_dim]`, Tensor representing the output of the
- LSTM after reading `inputs` when previous state was `state`.
- Here output_dim is:
- num_proj if num_proj was set,
- num_units otherwise.
- - Tensor(s) representing the new state of LSTM after reading `inputs` when
- the previous state was `state`. Same type and shape(s) as `state`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input size cannot be inferred from inputs via
- static shape inference.
-
-
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__(num_units, use_peepholes=False, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=False, activation=tanh)` {#CoupledInputForgetGateLSTMCell.__init__}
-
-Initialize the parameters for an LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell
-* <b>`use_peepholes`</b>: bool, set True to enable diagonal/peephole connections.
-* <b>`initializer`</b>: (optional) The initializer to use for the weight and
- projection matrices.
-* <b>`num_proj`</b>: (optional) int, The output dimensionality for the projection
- matrices. If None, no projection is performed.
-* <b>`proj_clip`</b>: (optional) A float value. If `num_proj > 0` and `proj_clip` is
-  provided, then the projected values are clipped elementwise to within
-  `[-proj_clip, proj_clip]`.
-* <b>`num_unit_shards`</b>: How to split the weight matrix. If >1, the weight
- matrix is stored across num_unit_shards.
-* <b>`num_proj_shards`</b>: How to split the projection matrix. If >1, the
- projection matrix is stored across num_proj_shards.
-* <b>`forget_bias`</b>: Biases of the forget gate are initialized by default to 1
- in order to reduce the scale of forgetting at the beginning of
- the training.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
- the `c_state` and `m_state`. By default (False), they are concatenated
- along the column axis. This default behavior will soon be deprecated.
-* <b>`activation`</b>: Activation function of the inner states.
-
-
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.output_size` {#CoupledInputForgetGateLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size` {#CoupledInputForgetGateLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.zero_state(batch_size, dtype)` {#CoupledInputForgetGateLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.TimeFreqLSTMCell` {#TimeFreqLSTMCell}
-
-Time-Frequency Long short-term memory unit (LSTM) recurrent network cell.
-
-This implementation is based on:
-
- Tara N. Sainath and Bo Li
- "Modeling Time-Frequency Patterns with LSTM vs. Convolutional Architectures
- for LVCSR Tasks." submitted to INTERSPEECH, 2016.
-
-It uses peep-hole connections and optional cell clipping.
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.__call__(inputs, state, scope=None)` {#TimeFreqLSTMCell.__call__}
-
-Run one step of LSTM.
-
-##### Args:
-
-
-* <b>`inputs`</b>: input Tensor, 2D, batch x num_units.
-* <b>`state`</b>: state Tensor, 2D, batch x state_size.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "TimeFreqLSTMCell".
-
-##### Returns:
-
- A tuple containing:
- - A 2D, batch x output_dim, Tensor representing the output of the LSTM
- after reading "inputs" when previous state was "state".
- Here output_dim is num_units.
- - A 2D, batch x state_size, Tensor representing the new state of LSTM
- after reading "inputs" when previous state was "state".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an input_size was specified and the provided inputs have
- a different dimension.
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.__init__(num_units, use_peepholes=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None)` {#TimeFreqLSTMCell.__init__}
-
-Initialize the parameters for an LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell
-* <b>`use_peepholes`</b>: bool, set True to enable diagonal/peephole connections.
-* <b>`cell_clip`</b>: (optional) A float value, if provided the cell state is clipped
- by this value prior to the cell output activation.
-* <b>`initializer`</b>: (optional) The initializer to use for the weight and
- projection matrices.
-* <b>`num_unit_shards`</b>: int, How to split the weight matrix. If >1, the weight
- matrix is stored across num_unit_shards.
-* <b>`forget_bias`</b>: float, Biases of the forget gate are initialized by default
- to 1 in order to reduce the scale of forgetting at the beginning
- of the training.
-* <b>`feature_size`</b>: int, The size of the input feature the LSTM spans over.
-* <b>`frequency_skip`</b>: int, The amount the LSTM filter is shifted by in
- frequency.
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.output_size` {#TimeFreqLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.state_size` {#TimeFreqLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.zero_state(batch_size, dtype)` {#TimeFreqLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.GridLSTMCell` {#GridLSTMCell}
-
-Grid Long short-term memory unit (LSTM) recurrent network cell.
-
-The default is based on:
- Nal Kalchbrenner, Ivo Danihelka and Alex Graves
- "Grid Long Short-Term Memory," Proc. ICLR 2016.
- http://arxiv.org/abs/1507.01526
-
-When peephole connections are used, the implementation is based on:
- Tara N. Sainath and Bo Li
- "Modeling Time-Frequency Patterns with LSTM vs. Convolutional Architectures
- for LVCSR Tasks." submitted to INTERSPEECH, 2016.
-
-The code uses optional peephole connections, shared_weights and cell clipping.
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.__call__(inputs, state, scope=None)` {#GridLSTMCell.__call__}
-
-Run one step of LSTM.
-
-##### Args:
-
-
-* <b>`inputs`</b>: input Tensor, 2D, [batch, feature_size].
-* <b>`state`</b>: Tensor or tuple of Tensors, 2D, [batch, state_size], depends on the
- flag self._state_is_tuple.
-* <b>`scope`</b>: (optional) VariableScope for the created subgraph; if None, it
- defaults to "GridLSTMCell".
-
-##### Returns:
-
- A tuple containing:
- - A 2D, [batch, output_dim], Tensor representing the output of the LSTM
- after reading "inputs" when previous state was "state".
- Here output_dim is num_units.
- - A 2D, [batch, state_size], Tensor representing the new state of LSTM
- after reading "inputs" when previous state was "state".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an input_size was specified and the provided inputs have
- a different dimension.
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.__init__(num_units, use_peepholes=False, share_time_frequency_weights=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None, num_frequency_blocks=None, start_freqindex_list=None, end_freqindex_list=None, couple_input_forget_gates=False, state_is_tuple=False)` {#GridLSTMCell.__init__}
-
-Initialize the parameters for an LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell
-* <b>`use_peepholes`</b>: (optional) bool, default False. Set True to enable
- diagonal/peephole connections.
-* <b>`share_time_frequency_weights`</b>: (optional) bool, default False. Set True to
- enable shared cell weights between time and frequency LSTMs.
-* <b>`cell_clip`</b>: (optional) A float value, default None, if provided the cell
- state is clipped by this value prior to the cell output activation.
-* <b>`initializer`</b>: (optional) The initializer to use for the weight and
- projection matrices, default None.
-* <b>`num_unit_shards`</b>: (optional) int, default 1. How to split the weight
-    matrix. If > 1, the weight matrix is stored across num_unit_shards.
-* <b>`forget_bias`</b>: (optional) float, default 1.0, The initial bias of the
- forget gates, used to reduce the scale of forgetting at the beginning
- of the training.
-* <b>`feature_size`</b>: (optional) int, default None, The size of the input feature
- the LSTM spans over.
-* <b>`frequency_skip`</b>: (optional) int, default None, The amount the LSTM filter
- is shifted by in frequency.
-* <b>`num_frequency_blocks`</b>: [required] A list of frequency blocks needed to
- cover the whole input feature splitting defined by start_freqindex_list
- and end_freqindex_list.
-* <b>`start_freqindex_list`</b>: [optional], list of ints, default None, The
- starting frequency index for each frequency block.
-* <b>`end_freqindex_list`</b>: [optional], list of ints, default None. The ending
- frequency index for each frequency block.
-* <b>`couple_input_forget_gates`</b>: (optional) bool, default False, Whether to
- couple the input and forget gates, i.e. f_gate = 1.0 - i_gate, to reduce
- model parameters and computation cost.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
- the `c_state` and `m_state`. By default (False), they are concatenated
- along the column axis. This default behavior will soon be deprecated.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the num_frequency_blocks list is not specified
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.output_size` {#GridLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.state_size` {#GridLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.state_tuple_type` {#GridLSTMCell.state_tuple_type}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.zero_state(batch_size, dtype)` {#GridLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.AttentionCellWrapper` {#AttentionCellWrapper}
-
-Basic attention cell wrapper.
-
-Implementation based on https://arxiv.org/abs/1409.0473.
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.__call__(inputs, state, scope=None)` {#AttentionCellWrapper.__call__}
-
-Long short-term memory cell with attention (LSTMA).
-
-
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.__init__(cell, attn_length, attn_size=None, attn_vec_size=None, input_size=None, state_is_tuple=False)` {#AttentionCellWrapper.__init__}
-
-Create a cell with attention.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, an attention is added to it.
-* <b>`attn_length`</b>: integer, the size of an attention window.
-* <b>`attn_size`</b>: integer, the size of an attention vector. Equal to
- cell.output_size by default.
-* <b>`attn_vec_size`</b>: integer, the number of convolutional features calculated
-    on attention state and a size of the hidden layer built from
-    base cell state. Equal to attn_size by default.
-* <b>`input_size`</b>: integer, the size of a hidden linear layer,
- built from inputs and attention. Derived from the input tensor
- by default.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are tuples of
-    the cell state, the attention state, and the attention history. By default
-    (False), the states are all concatenated along the column axis.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-* <b>`ValueError`</b>: if cell returns a state tuple but the flag
- `state_is_tuple` is `False` or if attn_length is zero or less.
-
-
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.output_size` {#AttentionCellWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.state_size` {#AttentionCellWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.zero_state(batch_size, dtype)` {#AttentionCellWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `class tf.contrib.rnn.CompiledWrapper` {#CompiledWrapper}
-
-Wraps step execution in an XLA JIT scope.
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.__call__(inputs, state, scope=None)` {#CompiledWrapper.__call__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.__init__(cell, compile_stateful=False)` {#CompiledWrapper.__init__}
-
-Create CompiledWrapper cell.
-
-##### Args:
-
-
-* <b>`cell`</b>: Instance of `RNNCell`.
-* <b>`compile_stateful`</b>: Whether to compile stateful ops like initializers
- and random number generators (default: False).
-
-
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.output_size` {#CompiledWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.state_size` {#CompiledWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.zero_state(batch_size, dtype)` {#CompiledWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
-
-- - -
-
-### `tf.contrib.rnn.static_rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#static_rnn}
-
-Creates a recurrent neural network specified by RNNCell `cell`.
-
-The simplest form of RNN network generated is:
-
-```python
- state = cell.zero_state(...)
- outputs = []
- for input_ in inputs:
- output, state = cell(input_, state)
- outputs.append(output)
- return (outputs, state)
-```
-
-However, a few other options are available:
-
-An initial state can be provided.
-If the sequence_length vector is provided, dynamic calculation is performed.
-This method of calculation does not compute the RNN steps past the maximum
-sequence length of the minibatch (thus saving computational time),
-and properly propagates the state at an example's sequence length
-to the final state output.
-
-The dynamic calculation performed is, at time `t` for batch row `b`,
-
-```python
- (output, state)(b, t) =
- (t >= sequence_length(b))
- ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
- : cell(input(b, t), state(b, t - 1))
-```
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of RNNCell.
-* <b>`inputs`</b>: A length T list of inputs, each a `Tensor` of shape
- `[batch_size, input_size]`, or a nested tuple of such elements.
-* <b>`initial_state`</b>: (optional) An initial state for the RNN.
- If `cell.state_size` is an integer, this must be
- a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
- If `cell.state_size` is a tuple, this should be a tuple of
- tensors having shapes `[batch_size, s] for s in cell.state_size`.
-* <b>`dtype`</b>: (optional) The data type for the initial state and expected output.
- Required if initial_state is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs.
- An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
-
-##### Returns:
-
- A pair (outputs, state) where:
-
- - outputs is a length T list of outputs (one for each input), or a nested
- tuple of such elements.
- - state is the final state
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
-* <b>`ValueError`</b>: If `inputs` is `None` or an empty list, or if the input depth
- (column size) cannot be inferred from inputs via shape inference.
-
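-A minimal usage sketch (the placeholder shapes are hypothetical):
-
-```python
-batch_size, time_len, input_size, num_units = 32, 10, 50, 128  # hypothetical
-x = tf.placeholder(tf.float32, [batch_size, time_len, input_size])
-# static_rnn expects a length-T list of [batch_size, input_size] tensors.
-inputs = tf.unstack(x, num=time_len, axis=1)
-cell = tf.contrib.rnn.BasicLSTMCell(num_units)
-outputs, state = tf.contrib.rnn.static_rnn(cell, inputs, dtype=tf.float32)
-```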
-
-- - -
-
-### `tf.contrib.rnn.static_state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None)` {#static_state_saving_rnn}
-
-RNN that accepts a state saver for time-truncated RNN calculation.
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of `RNNCell`.
-* <b>`inputs`</b>: A length T list of inputs, each a `Tensor` of shape
- `[batch_size, input_size]`.
-* <b>`state_saver`</b>: A state saver object with methods `state` and `save_state`.
-* <b>`state_name`</b>: Python string or tuple of strings. The name to use with the
- state_saver. If the cell returns tuples of states (i.e.,
- `cell.state_size` is a tuple) then `state_name` should be a tuple of
- strings having the same length as `cell.state_size`. Otherwise it should
- be a single string.
-* <b>`sequence_length`</b>: (optional) An int32/int64 vector size [batch_size].
- See the documentation for rnn() for more details about sequence_length.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
-
-##### Returns:
-
-  A pair (outputs, state) where:
-
-  - outputs is a length T list of outputs (one for each input).
-  - state is the final state.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
-* <b>`ValueError`</b>: If `inputs` is `None` or an empty list, or if the arity and
- type of `state_name` does not match that of `cell.state_size`.
-
-
-- - -
-
-### `tf.contrib.rnn.static_bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None)` {#static_bidirectional_rnn}
-
-Creates a bidirectional recurrent neural network.
-
-Similar to the unidirectional case above (rnn) but takes input and builds
-independent forward and backward RNNs with the final forward and backward
-outputs depth-concatenated, such that the output will have the format
-[time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of
-forward and backward cell must match. The initial state for both directions
-is zero by default (but can be set optionally) and no intermediate states are
-ever returned -- the network is fully unrolled for the given (passed in)
-length(s) of the sequence(s) or completely unrolled if length(s) is not given.
-
-##### Args:
-
-
-* <b>`cell_fw`</b>: An instance of RNNCell, to be used for forward direction.
-* <b>`cell_bw`</b>: An instance of RNNCell, to be used for backward direction.
-* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
- [batch_size, input_size], or a nested tuple of such elements.
-* <b>`initial_state_fw`</b>: (optional) An initial state for the forward RNN.
- This must be a tensor of appropriate type and shape
- `[batch_size, cell_fw.state_size]`.
- If `cell_fw.state_size` is a tuple, this should be a tuple of
- tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
-* <b>`initial_state_bw`</b>: (optional) Same as for `initial_state_fw`, but using
- the corresponding properties of `cell_bw`.
-* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
- either of the initial states are not provided.
-* <b>`sequence_length`</b>: (optional) An int32/int64 vector, size `[batch_size]`,
- containing the actual lengths for each of the sequences.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "bidirectional_rnn"
-
-##### Returns:
-
- A tuple (outputs, output_state_fw, output_state_bw) where:
- outputs is a length `T` list of outputs (one for each input), which
- are depth-concatenated forward and backward outputs.
- output_state_fw is the final state of the forward rnn.
- output_state_bw is the final state of the backward rnn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
-* <b>`ValueError`</b>: If inputs is None or an empty list.
-
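-A minimal usage sketch (`inputs` is a hypothetical length-T list of
-`[batch_size, input_size]` tensors):
-
-```python
-cell_fw = tf.contrib.rnn.BasicLSTMCell(64)
-cell_bw = tf.contrib.rnn.BasicLSTMCell(64)
-outputs, state_fw, state_bw = tf.contrib.rnn.static_bidirectional_rnn(
-    cell_fw, cell_bw, inputs, dtype=tf.float32)
-# Each entry of `outputs` has depth cell_fw.output_size + cell_bw.output_size.
-```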
-
-- - -
-
-### `tf.contrib.rnn.stack_bidirectional_dynamic_rnn(cells_fw, cells_bw, inputs, initial_states_fw=None, initial_states_bw=None, dtype=None, sequence_length=None, scope=None)` {#stack_bidirectional_dynamic_rnn}
-
-Creates a dynamic bidirectional recurrent neural network.
-
-Stacks several bidirectional rnn layers. The combined forward and backward
-layer outputs are used as input of the next layer. tf.bidirectional_rnn
-does not allow sharing forward and backward information between layers.
-The input_size of the first forward and backward cells must match.
-The initial state for both directions is zero and no intermediate states
-are returned.
-
-##### Args:
-
-
-* <b>`cells_fw`</b>: List of instances of RNNCell, one per layer,
- to be used for forward direction.
-* <b>`cells_bw`</b>: List of instances of RNNCell, one per layer,
- to be used for backward direction.
-* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
- [batch_size, input_size], or a nested tuple of such elements.
-* <b>`initial_states_fw`</b>: (optional) A list of the initial states (one per layer)
-    for the forward RNN.
-    Each tensor must have an appropriate type and shape
-    `[batch_size, cell_fw.state_size]`.
-* <b>`initial_states_bw`</b>: (optional) Same as for `initial_states_fw`, but using
- the corresponding properties of `cells_bw`.
-* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
- either of the initial states are not provided.
-* <b>`sequence_length`</b>: (optional) An int32/int64 vector, size `[batch_size]`,
- containing the actual lengths for each of the sequences.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to None.
-
-##### Returns:
-
-  A tuple (outputs, output_states_fw, output_states_bw) where:
-
-  outputs: Output `Tensor` shaped
-    `[batch_size, max_time, layers_output]`, where `layers_output`
-    is the depth-concatenated forward and backward outputs.
-  output_states_fw is the final states, one tensor per layer,
-    of the forward rnn.
-  output_states_bw is the final states, one tensor per layer,
-    of the backward rnn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
-* <b>`ValueError`</b>: If inputs is `None`, not a list or an empty list.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.training.md b/tensorflow/g3doc/api_docs/python/contrib.training.md
deleted file mode 100644
index 88ab5e6e23..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.training.md
+++ /dev/null
@@ -1,1057 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Training (contrib)
-[TOC]
-
-Training and input utilities. See @{$python/contrib.training} guide.
-
-- - -
-
-### `tf.contrib.training.batch_sequences_with_states(input_key, input_sequences, input_context, input_length, initial_states, num_unroll, batch_size, num_threads=3, capacity=1000, allow_small_batch=True, pad=True, make_keys_unique=False, make_keys_unique_seed=None, name=None)` {#batch_sequences_with_states}
-
-Creates batches of segments of sequential input.
-
-This method creates a `SequenceQueueingStateSaver` (SQSS) and adds it to
-the queue runners. It returns a `NextQueuedSequenceBatch`.
-
-It accepts one example at a time identified by a unique `input_key`.
-`input_sequences` is a dict with values that are tensors with time as first
-dimension. This time dimension must be the same across those tensors of an
-example. It can vary across examples, although it always has to be a multiple
-of `num_unroll`. Hence, padding may be necessary, and it is turned on by
-default by `pad=True`.
-
-`input_length` is a Tensor scalar or an int recording the time dimension prior
-to padding. It should be between 0 and the time dimension. One reason we want
-to keep track of it is so that we can take it into consideration when
-computing the loss. If `pad=True` then `input_length` can be `None` and will
-be inferred.
-
-This method segments `input_sequences` into segments of length `num_unroll`.
-It batches input sequences from `batch_size` examples. These mini-batches
-are available through the `sequence` property of the output. Moreover, for
-each entry in the batch we can access its original `input_key` in `key` and
-its input length in `total_length`. `length` records within this segment how
-many non-padded time steps there are.
-
-Static features of an example that do not vary across time can be part of the
-`input_context`, a dict with Tensor values. This method copies the context for
-each segment and makes it available in the `context` of the output.
-
-This method can maintain and update a state for each example. It accepts
-`initial_states` as a dict with Tensor values. The first mini-batch that
-contains an example uses the corresponding `initial_states` entry as its
-`state`. If `save_state` is called, the next segment receives the updated
-`state` entry.
-See `NextQueuedSequenceBatch` for a complete list of properties and methods.
-
-Example usage:
-
-```python
-batch_size = 32
-num_unroll = 20
-num_enqueue_threads = 3
-lstm_size = 8
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size, state_is_tuple=False)
-
-key, sequences, context = my_parser(raw_data)
-state_size = 2 * lstm_size  # concatenated c and h state of the LSTM
-initial_state_values = tf.zeros((state_size,), dtype=tf.float32)
-initial_states = {"lstm_state": initial_state_values}
-batch = tf.contrib.training.batch_sequences_with_states(
- input_key=key,
- input_sequences=sequences,
- input_context=context,
- initial_states=initial_states,
- num_unroll=num_unroll,
- batch_size=batch_size,
- num_threads=num_enqueue_threads,
- capacity=batch_size * num_enqueue_threads * 2)
-
-inputs = batch.sequences["input"]
-context_label = batch.context["label"]
-
-inputs_by_time = tf.split(value=inputs, num_or_size_splits=num_unroll, axis=1)
-assert len(inputs_by_time) == num_unroll
-
-lstm_output, _ = tf.contrib.rnn.static_state_saving_rnn(
- cell,
- inputs_by_time,
- state_saver=batch,
- state_name="lstm_state")
-
-# Start a prefetcher in the background
-session = tf.Session()
-
-tf.train.start_queue_runners(sess=session)
-
-while True:
- # Step through batches, perform training or inference...
- session.run([lstm_output])
-```
-
-##### Args:
-
-
-* <b>`input_key`</b>: A string scalar `Tensor`, the **unique** key for the given
- input example. This is used to keep track of the split minibatch elements
- of this input. Batched keys of the current iteration are made
- accessible via the `key` property. The shape of `input_key` (scalar) must
- be fully specified. Consider setting `make_keys_unique` to True when
- iterating over the same input multiple times.
-
- **Note**: if `make_keys_unique=False` then `input_key`s must be unique.
-
-* <b>`input_sequences`</b>: A dict mapping string names to `Tensor` values. The values
- must all have matching first dimension, called `value_length`. They may
- vary from input to input. The remainder of the shape (other than the first
- dimension) must be fully specified.
- The `SequenceQueueingStateSaver` will split these tensors along
-  this first dimension into minibatch elements of dimension `num_unroll`.
- Batched and segmented sequences of the current iteration are made
- accessible via the `sequences` property.
-
- **Note**: if `pad=False`, then `value_length` must always be a multiple
- of `num_unroll`.
-
-* <b>`input_context`</b>: A dict mapping string names to `Tensor` values. The values
- are treated as "global" across all time splits of the given input example,
- and will be copied across for all minibatch elements accordingly.
- Batched and copied context of the current iteration are made
- accessible via the `context` property.
-
- **Note**: All input_context values must have fully defined shapes.
-
-* <b>`input_length`</b>: None or an int32 scalar `Tensor`, the length of the sequence
- prior to padding. If `input_length=None` and `pad=True` then the length
- will be inferred and will be equal to `value_length`. If `pad=False` then
-  `input_length` cannot be `None`: `input_length` must be specified. The
-  shape of `input_length` (scalar) must be fully specified. Its value may be
- at most `value_length` for any given input (see above for the definition
- of `value_length`). Batched and total lengths of the current iteration are
- made accessible via the `length` and `total_length` properties.
-* <b>`initial_states`</b>: A dict mapping string state names to multi-dimensional
- values (e.g. constants or tensors). This input defines the set of
- states that will be kept track of during computing iterations, and
- which can be accessed via the `state` and `save_state` methods.
-
- **Note**: All initial_state values must have fully defined shapes.
-
-* <b>`num_unroll`</b>: Python integer, how many time steps to unroll at a time.
-  Input sequences of length `k` are then split into `k / num_unroll`
-  segments.
-* <b>`batch_size`</b>: int or int32 scalar `Tensor`, how large minibatches should
- be when accessing the `state()` method and `context`, `sequences`, etc,
- properties.
-* <b>`num_threads`</b>: The int number of threads enqueuing input examples into a
- queue.
-* <b>`capacity`</b>: The max capacity of the queue in number of examples. Needs to be
- at least `batch_size`. Defaults to 1000. When iterating over the same
- input example multiple times reusing their keys the `capacity` must be
- smaller than the number of examples.
-* <b>`allow_small_batch`</b>: If true, the queue will return smaller batches when
- there aren't enough input examples to fill a whole batch and the end of
- the input has been reached.
-* <b>`pad`</b>: If `True`, `input_sequences` will be padded to multiple of
- `num_unroll`. In that case `input_length` may be `None` and is assumed to
- be the length of first dimension of values in `input_sequences`
- (i.e. `value_length`).
-* <b>`make_keys_unique`</b>: Whether to append a random integer to the `input_key` in
- an effort to make it unique. The seed can be set via
- `make_keys_unique_seed`.
-* <b>`make_keys_unique_seed`</b>: If `make_keys_unique=True` this fixes the seed with
- which a random postfix is generated.
-* <b>`name`</b>: An op name string (optional).
-
-##### Returns:
-
- A NextQueuedSequenceBatch with segmented and batched inputs and their
- states.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any of the inputs is not an expected type.
-* <b>`ValueError`</b>: if any of the input values is inconsistent, e.g. if
- not enough shape information is available from inputs to build
- the state saver.
-
-
-- - -
-
-### `class tf.contrib.training.NextQueuedSequenceBatch` {#NextQueuedSequenceBatch}
-
-NextQueuedSequenceBatch stores deferred SequenceQueueingStateSaver data.
-
-This class is instantiated by `SequenceQueueingStateSaver` and is accessible
-via its `next_batch` property.
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.__init__(state_saver)` {#NextQueuedSequenceBatch.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.batch_size` {#NextQueuedSequenceBatch.batch_size}
-
-The batch_size of the given batch.
-
-Usually, this is the batch_size requested when initializing the SQSS, but
-if allow_small_batch=True this will become smaller when inputs are
-exhausted.
-
-##### Returns:
-
- A scalar integer tensor, the batch_size
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.context` {#NextQueuedSequenceBatch.context}
-
-A dict mapping keys of `input_context` to batched context.
-
-##### Returns:
-
- A dict mapping keys of `input_context` to tensors.
- If we had at input:
-
- ```python
- context["name"].get_shape() == [d1, d2, ...]
- ```
-
- then for this property:
-
- ```python
- context["name"].get_shape() == [batch_size, d1, d2, ...]
- ```
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.insertion_index` {#NextQueuedSequenceBatch.insertion_index}
-
-The insertion indices of the examples (when they were first added).
-
-These indices start with the value -2**63 and increase with every
-call to the prefetch op. Each whole example gets its own insertion
-index, and this is used to prioritize the example so that its truncated
-segments appear in adjacent iterations, even if new examples are inserted
-by the prefetch op between iterations.
-
-##### Returns:
-
- An int64 vector of length `batch_size`, the insertion indices.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.key` {#NextQueuedSequenceBatch.key}
-
-The key names of the given truncated unrolled examples.
-
-The format of the key is:
-
-```python
-"%05d_of_%05d:%s" % (sequence, sequence_count, original_key)
-```
-
-where `original_key` is the unique key read in by the prefetcher.
-
-##### Returns:
-
- A string vector of length `batch_size`, the keys.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.length` {#NextQueuedSequenceBatch.length}
-
-The lengths of the given truncated unrolled examples.
-
-For initial iterations, for which `sequence * num_unroll < length`,
-this number is `num_unroll`. For the remainder,
-this number is between `0` and `num_unroll`.
-
-##### Returns:
-
- An integer vector of length `batch_size`, the lengths.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.next_key` {#NextQueuedSequenceBatch.next_key}
-
-The key names of the next (in iteration) truncated unrolled examples.
-
-The format of the key is:
-
-```python
-"%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key)
-```
-
-if `sequence + 1 < sequence_count`, otherwise:
-
-```python
-"STOP:%s" % original_key
-```
-
-where `original_key` is the unique key read in by the prefetcher.
-
-##### Returns:
-
- A string vector of length `batch_size`, the keys.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.save_state(state_name, value, name=None)` {#NextQueuedSequenceBatch.save_state}
-
-Returns an op to save the current batch of state `state_name`.
-
-##### Args:
-
-
-* <b>`state_name`</b>: string, matches a key provided in `initial_states`.
-* <b>`value`</b>: A `Tensor`.
- Its type must match that of `initial_states[state_name].dtype`.
- If we had at input:
-
- ```python
- initial_states[state_name].get_shape() == [d1, d2, ...]
- ```
-
- then the shape of `value` must match:
-
- ```python
- tf.shape(value) == [batch_size, d1, d2, ...]
- ```
-
-
-* <b>`name`</b>: string (optional). The name scope for newly created ops.
-
-##### Returns:
-
- A control flow op that stores the new state of each entry into
- the state saver. This op must be run for every iteration that
- accesses data from the state saver (otherwise the state saver
- will never progress through its states and run out of capacity).
-
-##### Raises:
-
-
-* <b>`KeyError`</b>: if `state_name` does not match any of the initial states
- declared in `initial_states`.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.sequence` {#NextQueuedSequenceBatch.sequence}
-
-An int32 vector, length `batch_size`: the sequence index of each entry.
-
-When an input is split up, the sequence values
-```
-0, 1, ..., sequence_count - 1
-```
-are assigned to each split.
-
-##### Returns:
-
- An int32 vector `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.sequence_count` {#NextQueuedSequenceBatch.sequence_count}
-
-An int32 vector, length `batch_size`: the sequence count of each entry.
-
-When an input is split up, the number of splits is equal to:
-`padded_length / num_unroll`. This is the sequence_count.
-
-##### Returns:
-
- An int32 vector `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.sequences` {#NextQueuedSequenceBatch.sequences}
-
-A dict mapping keys of `input_sequences` to split and rebatched data.
-
-##### Returns:
-
- A dict mapping keys of `input_sequences` to tensors.
- If we had at input:
-
- ```python
- sequences["name"].get_shape() == [None, d1, d2, ...]
- ```
-
- where `None` meant the sequence time was dynamic, then for this property:
-
- ```python
- sequences["name"].get_shape() == [batch_size, num_unroll, d1, d2, ...].
- ```
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.state(state_name)` {#NextQueuedSequenceBatch.state}
-
-Returns batched state tensors.
-
-##### Args:
-
-
-* <b>`state_name`</b>: string, matches a key provided in `initial_states`.
-
-##### Returns:
-
- A `Tensor`: a batched set of states, either initial states (if this is
- the first run of the given example), or a value as stored during
- a previous iteration via `save_state` control flow.
- Its type is the same as `initial_states["state_name"].dtype`.
- If we had at input:
-
- ```python
- initial_states[state_name].get_shape() == [d1, d2, ...],
- ```
-
- then
-
- ```python
- state(state_name).get_shape() == [batch_size, d1, d2, ...]
- ```
-
-##### Raises:
-
-
-* <b>`KeyError`</b>: if `state_name` does not match any of the initial states
- declared in `initial_states`.
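-
-A minimal sketch of the read/update/save cycle. Here `stateful_reader` is an
-assumed `SequenceQueueingStateSaver` (see the class example below), and
-`make_update` is a hypothetical placeholder for model-specific state logic:
-
-```python
-batch = stateful_reader.next_batch
-old_state = batch.state("lstm_state")        # [batch_size, d1, d2, ...]
-new_state = make_update(old_state)           # must keep shape and dtype
-save_op = batch.save_state("lstm_state", new_state)
-with tf.control_dependencies([save_op]):
-  # Force save_state to run whenever the outputs are evaluated.
-  outputs = tf.identity(batch.sequences["input"])
-```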
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.total_length` {#NextQueuedSequenceBatch.total_length}
-
-The lengths of the original (non-truncated) unrolled examples.
-
-##### Returns:
-
- An integer vector of length `batch_size`, the total lengths.
-
-
-
-- - -
-
-### `class tf.contrib.training.SequenceQueueingStateSaver` {#SequenceQueueingStateSaver}
-
-SequenceQueueingStateSaver provides access to stateful values from input.
-
-This class is meant to be used instead of, e.g., a `Queue`, for splitting
-variable-length sequence inputs into segments of sequences with fixed length
-and batching them into mini-batches. It maintains contexts and state for a
-sequence across the segments. It can be used in conjunction with a
-`QueueRunner` (see the example below).
-
-The `SequenceQueueingStateSaver` (SQSS) accepts one example at a time via the
-inputs `input_length`, `input_key`, `input_sequences` (a dict),
-`input_context` (a dict), and `initial_states` (a dict).
-The sequences, values in `input_sequences`, may have variable first dimension
-(the `padded_length`), though this dimension must always be a multiple of
-`num_unroll`. All other dimensions must be fixed and accessible via
-`get_shape` calls. The length prior to padding can be recorded in
-`input_length`. The context values in `input_context` must all have fixed and
-well defined dimensions. The initial state values must all have fixed and
-well defined dimensions.
-
-The SQSS splits the sequences of an input example into segments of length
-`num_unroll`. Across examples minibatches of size `batch_size` are formed.
-These minibatches contain a segment of the sequences, copy the context values,
-and maintain state, length, and key information of the original input
-examples. In the first segment of an example the state is still the initial
-state. It can then be updated; and updated state values are accessible in
-subsequent segments of the same example. After each segment,
-`batch.save_state()` must be called (the `state_saving_rnn` does this
-for you). Without this call, the dequeue op associated with the SQSS
-will not run.
-Internally, the SQSS has a queue for the input examples, whose `capacity` is
-configurable. If `capacity` is smaller than `batch_size`, the dequeue op will
-block indefinitely. A small multiple of `batch_size` is a good rule of thumb
-to prevent that queue from becoming a bottleneck and slowing down training.
-If set too large (note that it defaults to unbounded), memory consumption
-goes up. Moreover, when iterating over the same input examples multiple times
-while reusing the same `key`, the `capacity` must be smaller than the number
-of examples.
-
-The prefetcher, which reads one unrolled, variable-length input sequence at
-a time, is accessible via `prefetch_op`. The underlying `Barrier` object
-is accessible via `barrier`. Processed minibatches, as well as
-state read and write capabilities are accessible via `next_batch`.
-Specifically, `next_batch` provides access to all of the minibatched
-data, including the following, see `NextQueuedSequenceBatch` for details:
-
-* `total_length`, `length`, `insertion_index`, `key`, `next_key`,
-* `sequence` (the index of each minibatch entry's time segment),
-* `sequence_count` (the total time segment count for each minibatch entry),
-* `context` (a dict of the copied minibatched context values),
-* `sequences` (a dict of the split minibatched variable-length sequences),
-* `state` (to access the states of the current segments of these entries)
-* `save_state` (to save the states for the next segments of these entries)
-
-Example usage:
-
-```python
-batch_size = 32
-num_unroll = 20
-lstm_size = 8
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size)
-initial_state_values = tf.zeros(cell.state_size, dtype=tf.float32)
-
-raw_data = get_single_input_from_input_reader()
-length, key, sequences, context = my_parser(raw_data)
-assert "input" in sequences.keys()
-assert "label" in context.keys()
-initial_states = {"lstm_state": initial_state_values}
-
-stateful_reader = tf.contrib.training.SequenceQueueingStateSaver(
-    batch_size, num_unroll,
-    input_length=length, input_key=key, input_sequences=sequences,
- input_context=context, initial_states=initial_states,
- capacity=batch_size*100)
-
-batch = stateful_reader.next_batch
-inputs = batch.sequences["input"]
-context_label = batch.context["label"]
-
-inputs_by_time = tf.split(value=inputs, num_or_size_splits=num_unroll, axis=1)
-assert len(inputs_by_time) == num_unroll
-
-lstm_output, _ = tf.contrib.rnn.static_state_saving_rnn(
- cell,
- inputs_by_time,
- state_saver=batch,
- state_name="lstm_state")
-
-# Start a prefetcher in the background
-session = tf.Session()
-num_threads = 3
-queue_runner = tf.train.QueueRunner(
- stateful_reader, [stateful_reader.prefetch_op] * num_threads)
-tf.train.add_queue_runner(queue_runner)
-tf.train.start_queue_runners(sess=session)
-
-while True:
- # Step through batches, perform training or inference...
- session.run([lstm_output])
-```
-
-**Note**: Usually the barrier is given to a QueueRunner as in the
-  example above. The QueueRunner will close the barrier if the prefetch_op
- receives an OutOfRange Error from upstream input queues (i.e., reaches
- the end of the input). If the barrier is closed no further new examples
- are added to the SQSS. The underlying barrier might, however, still
- contain further unroll-steps of examples that have not undergone all
- iterations. To gracefully finish all examples, the flag
- `allow_small_batch` must be set to true, which causes the SQSS to issue
- progressively smaller mini-batches with the remaining examples.
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.__init__(batch_size, num_unroll, input_length, input_key, input_sequences, input_context, initial_states, capacity=None, allow_small_batch=False, name=None)` {#SequenceQueueingStateSaver.__init__}
-
-Creates the SequenceQueueingStateSaver.
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int or int32 scalar `Tensor`, how large minibatches should
-  be when accessing the `state()` method and the `context`, `sequences`,
-  etc., properties.
-* <b>`num_unroll`</b>: Python integer, how many time steps to unroll at a time.
-  The input sequences of length `k` are then split into `k / num_unroll`
-  segments.
-* <b>`input_length`</b>: An int32 scalar `Tensor`, the length of the sequence prior
- to padding. This value may be at most `padded_length` for any given
- input (see below for the definition of `padded_length`).
- Batched and total lengths of the current iteration are made accessible
- via the `length` and `total_length` properties. The shape of
- input_length (scalar) must be fully specified.
-* <b>`input_key`</b>: A string scalar `Tensor`, the **unique** key for the given
- input. This is used to keep track of the split minibatch elements
- of this input. Batched keys of the current iteration are made
- accessible via the `key` property. The shape of `input_key` (scalar)
- must be fully specified.
-* <b>`input_sequences`</b>: A dict mapping string names to `Tensor` values. The
- values must all have matching first dimension, called `padded_length`.
- The `SequenceQueueingStateSaver` will split these tensors along
- this first dimension into minibatch elements of dimension
- `num_unroll`. Batched and segmented sequences of the current iteration
- are made accessible via the `sequences` property.
-
- **Note**: `padded_length` may be dynamic, and may vary from input
- to input, but must always be a multiple of `num_unroll`. The remainder
- of the shape (other than the first dimension) must be fully specified.
-
-* <b>`input_context`</b>: A dict mapping string names to `Tensor` values. The values
- are treated as "global" across all time splits of the given input,
- and will be copied across for all minibatch elements accordingly.
- Batched and copied context of the current iteration are made
- accessible via the `context` property.
-
- **Note**: All input_context values must have fully defined shapes.
-
-* <b>`initial_states`</b>: A dict mapping string state names to multi-dimensional
- values (e.g. constants or tensors). This input defines the set of
- states that will be kept track of during computing iterations, and
- which can be accessed via the `state` and `save_state` methods.
-
- **Note**: All initial_state values must have fully defined shapes.
-
-* <b>`capacity`</b>: The max capacity of the SQSS in number of examples. Needs to be
- at least `batch_size`. Defaults to unbounded.
-* <b>`allow_small_batch`</b>: If true, the SQSS will return smaller batches when
- there aren't enough input examples to fill a whole batch and the end of
- the input has been reached (i.e., the underlying barrier has been
- closed).
-* <b>`name`</b>: An op name string (optional).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any of the inputs is not an expected type.
-* <b>`ValueError`</b>: if any of the input values is inconsistent, e.g. if
- not enough shape information is available from inputs to build
- the state saver.
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.barrier` {#SequenceQueueingStateSaver.barrier}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.batch_size` {#SequenceQueueingStateSaver.batch_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.close(cancel_pending_enqueues=False, name=None)` {#SequenceQueueingStateSaver.close}
-
-Closes the barrier and the FIFOQueue.
-
-This operation signals that no more segments of new sequences will be
-enqueued. New segments of already inserted sequences may still be enqueued
-and dequeued if there is a sufficient number filling a batch or
-allow_small_batch is true. Otherwise dequeue operations will fail
-immediately.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False`. If `True`, all pending enqueues to the underlying queues will
- be cancelled, and completing already started sequences is not possible.
-* <b>`name`</b>: Optional name for the op.
-
-##### Returns:
-
- The operation that closes the barrier and the FIFOQueue.
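-
-A hedged sketch, assuming `stateful_reader` and `session` from the class
-example above and `allow_small_batch=True`:
-
-```python
-close_op = stateful_reader.close(cancel_pending_enqueues=False)
-session.run(close_op)
-# No new examples enter the SQSS; segments of already-inserted sequences
-# may still be dequeued until they run out.
-```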
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.name` {#SequenceQueueingStateSaver.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.next_batch` {#SequenceQueueingStateSaver.next_batch}
-
-The `NextQueuedSequenceBatch` providing access to batched output data.
-
-Also provides access to the `state` and `save_state` methods.
-The first time this gets called, it additionally prepares barrier reads
-and creates `NextQueuedSequenceBatch` / next_batch objects. Subsequent
-calls simply return the previously created `next_batch`.
-
-In order to access data in `next_batch` without blocking, the `prefetch_op`
-must have been run at least `batch_size` times (ideally in a separate
-thread, or launched via a `QueueRunner`). After processing a segment in
-`next_batch()`, `batch.save_state()` must be called which is done by the
-state_saving_rnn. Without this call, the dequeue op associated with the SQSS
-will not run.
-
-##### Returns:
-
- A cached `NextQueuedSequenceBatch` instance.
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.num_unroll` {#SequenceQueueingStateSaver.num_unroll}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.prefetch_op` {#SequenceQueueingStateSaver.prefetch_op}
-
-The op used to prefetch new data into the state saver.
-
-Running it once enqueues one new input example into the state saver.
-The first time this gets called, it additionally creates the prefetch_op.
-Subsequent calls simply return the previously created `prefetch_op`.
-
-It should be run in a separate thread via e.g. a `QueueRunner`.
-
-##### Returns:
-
- An `Operation` that performs prefetching.
-
-
-
-- - -
-
-### `tf.contrib.training.rejection_sample(tensors, accept_prob_fn, batch_size, queue_threads=1, enqueue_many=False, prebatch_capacity=16, prebatch_threads=1, runtime_checks=False, name=None)` {#rejection_sample}
-
-Stochastically creates batches by rejection sampling.
-
-Each list of non-batched tensors is evaluated by `accept_prob_fn`, to produce
-a scalar tensor between 0 and 1. This tensor corresponds to the probability of
-being accepted. When `batch_size` tensor groups have been accepted, the batch
-queue will return a mini-batch.
-
-##### Args:
-
-
-* <b>`tensors`</b>: List of tensors for data. All tensors are either one item or a
- batch, according to enqueue_many.
-* <b>`accept_prob_fn`</b>: A python lambda that takes a non-batch tensor from each
- item in `tensors`, and produces a scalar tensor.
-* <b>`batch_size`</b>: Size of batch to be returned.
-* <b>`queue_threads`</b>: The number of threads for the queue that will hold the final
- batch.
-* <b>`enqueue_many`</b>: Bool. If true, interpret input tensors as having a batch
- dimension.
-* <b>`prebatch_capacity`</b>: Capacity for the large queue that is used to convert
- batched tensors to single examples.
-* <b>`prebatch_threads`</b>: Number of threads for the large queue that is used to
- convert batched tensors to single examples.
-* <b>`runtime_checks`</b>: Bool. If true, insert runtime checks on the output of
- `accept_prob_fn`. Using `True` might have a performance impact.
-* <b>`name`</b>: Optional prefix for ops created by this function.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: enqueue_many is True and labels doesn't have a batch
- dimension, or if enqueue_many is False and labels isn't a scalar.
-* <b>`ValueError`</b>: enqueue_many is True, and batch dimension on data and labels
- don't match.
-* <b>`ValueError`</b>: if a zero initial probability class has a nonzero target
- probability.
-
-##### Returns:
-
- A list of tensors of the same length as `tensors`, with batch dimension
- `batch_size`.
-
-##### Example:
-
- # Get tensor for a single data and label example.
- data, label = data_provider.Get(['data', 'label'])
-
-  # Get a batch accepted by rejection sampling, according to the data tensor.
- accept_prob_fn = lambda x: (tf.tanh(x[0]) + 1) / 2
- data_batch = tf.contrib.training.rejection_sample(
- [data, label], accept_prob_fn, 16)
-
- # Run batch through network.
- ...
-
-
-- - -
-
-### `tf.contrib.training.resample_at_rate(inputs, rates, scope=None, seed=None, back_prop=False)` {#resample_at_rate}
-
-Given `inputs` tensors, stochastically resamples each at a given rate.
-
-For example, if the inputs are `[[a1, a2], [b1, b2]]` and the rates
-tensor contains `[3, 1]`, then the return value may look like `[[a1,
-a2, a1, a1], [b1, b2, b1, b1]]`. However, many other outputs are
-possible, since this is stochastic -- averaged over many repeated
-calls, each set of inputs should appear in the output `rate` times
-the number of invocations.
-
-Uses Knuth's method to generate samples from the poisson
-distribution (but instead of just incrementing a count, actually
-emits the input); this is described at
-https://en.wikipedia.org/wiki/Poisson_distribution in the section on
-generating Poisson-distributed random variables.
-
-Note that this method is not appropriate for large rate values: with
-float16 it will stop performing correctly for rates above 9.17;
-float32, 87; and float64, 708. (These are the base-e versions of the
-minimum representable exponent for each type.)
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of tensors, each of which has a shape of `[batch_size, ...]`
-* <b>`rates`</b>: A tensor of shape `[batch_size]` containing the resampling rates
- for each input.
-* <b>`scope`</b>: Scope for the op.
-* <b>`seed`</b>: Random seed to use.
-* <b>`back_prop`</b>: Whether to allow back-propagation through this op.
-
-##### Returns:
-
- Selections from the input tensors.
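-
-A hedged usage sketch; the tensors here are stand-ins for real inputs:
-
-```python
-features = tf.constant([[1.0], [2.0]])   # shape [batch_size, 1]
-rates = tf.constant([3.0, 1.0])          # row 0 emitted ~3x as often as row 1
-[resampled] = tf.contrib.training.resample_at_rate([features], rates)
-# `resampled` has shape [None, 1]; the batch dimension is stochastic.
-```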
-
-
-- - -
-
-### `tf.contrib.training.stratified_sample(tensors, labels, target_probs, batch_size, init_probs=None, enqueue_many=False, queue_capacity=16, threads_per_queue=1, name=None)` {#stratified_sample}
-
-Stochastically creates batches based on per-class probabilities.
-
-This method discards examples. Internally, it creates one queue to amortize
-the cost of disk reads, and one queue to hold the properly-proportioned
-batch.
-
-##### Args:
-
-
-* <b>`tensors`</b>: List of tensors for data. All tensors are either one item or a
- batch, according to enqueue_many.
-* <b>`labels`</b>: Tensor for label of data. Label is a single integer or a batch,
- depending on enqueue_many. It is not a one-hot vector.
-* <b>`target_probs`</b>: Target class proportions in batch. An object whose type has a
- registered Tensor conversion function.
-* <b>`batch_size`</b>: Size of batch to be returned.
-* <b>`init_probs`</b>: Class proportions in the data. An object whose type has a
- registered Tensor conversion function, or `None` for estimating the
- initial distribution.
-* <b>`enqueue_many`</b>: Bool. If true, interpret input tensors as having a batch
- dimension.
-* <b>`queue_capacity`</b>: Capacity of the large queue that holds input examples.
-* <b>`threads_per_queue`</b>: Number of threads for the large queue that holds input
- examples and for the final queue with the proper class proportions.
-* <b>`name`</b>: Optional prefix for ops created by this function.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: enqueue_many is True and labels doesn't have a batch
- dimension, or if enqueue_many is False and labels isn't a scalar.
-* <b>`ValueError`</b>: enqueue_many is True, and batch dimension on data and labels
- don't match.
-* <b>`ValueError`</b>: if probs don't sum to one.
-* <b>`ValueError`</b>: if a zero initial probability class has a nonzero target
- probability.
-* <b>`TFAssertion`</b>: if labels aren't integers in [0, num classes).
-
-##### Returns:
-
- (data_batch, label_batch), where data_batch is a list of tensors of the same
- length as `tensors`
-
-##### Example:
-
- # Get tensor for a single data and label example.
- data, label = data_provider.Get(['data', 'label'])
-
- # Get stratified batch according to per-class probabilities.
- target_probs = [...distribution you want...]
- [data_batch], labels = tf.contrib.training.stratified_sample(
- [data], label, target_probs)
-
- # Run batch through network.
- ...
-
-
-- - -
-
-### `tf.contrib.training.weighted_resample(inputs, weights, overall_rate, scope=None, mean_decay=0.999, seed=None)` {#weighted_resample}
-
-Performs an approximate weighted resampling of `inputs`.
-
-This method chooses elements from `inputs` where each item's rate of
-selection is proportional to its value in `weights`, and the average
-rate of selection across all inputs (and many invocations!) is
-`overall_rate`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of tensors whose first dimension is `batch_size`.
-* <b>`weights`</b>: A `[batch_size]`-shaped tensor with each batch member's weight.
-* <b>`overall_rate`</b>: Desired overall rate of resampling.
-* <b>`scope`</b>: Scope to use for the op.
-* <b>`mean_decay`</b>: How quickly to decay the running estimate of the mean weight.
-* <b>`seed`</b>: Random seed.
-
-##### Returns:
-
- A list of tensors exactly like `inputs`, but with an unknown (and
- possibly zero) first dimension.
- A tensor containing the effective resampling rate used for each output.
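-
-A hedged usage sketch with stand-in tensors (the two return values follow
-the Returns description above):
-
-```python
-features = tf.random_normal([8, 4])
-weights = tf.constant([1., 1., 2., 2., 1., 1., 4., 4.])
-resampled, rate_used = tf.contrib.training.weighted_resample(
-    [features], weights, overall_rate=0.5)
-kept = resampled[0]   # shape [None, 4]; on average ~4 of the 8 rows survive
-```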
-
-
-- - -
-
-### `tf.contrib.training.bucket(tensors, which_bucket, batch_size, num_buckets, num_threads=1, capacity=32, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=True, shared_name=None, name=None)` {#bucket}
-
-Lazy bucketing of input tensors according to `which_bucket`.
-
-The argument `tensors` can be a list or a dictionary of tensors.
-The value returned by the function will be of the same type
-as `tensors`.
-
-The tensors entering this function are put into the bucket given by
-`which_bucket`. Each bucket has its own queue. When a bucket contains
-`batch_size` elements, this minibatch is pushed onto a top queue. The
-tensors returned from this function are the result of dequeueing the
-next minibatch from this top queue.
-
-This function is implemented using several queues. A `QueueRunner` for the
-queues is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-As the returned tensors are the result of a dequeue operation, evaluating
-them will throw a `tf.errors.OutOfRangeError` when the input queue is
-exhausted. If these tensors are feeding another input queue, its queue runner
-will catch this exception; however, if they are used in your main thread
-you are responsible for catching this yourself.
-
-*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
-(i) the `shapes` argument is passed, or (ii) all of the tensors in
-`tensors` must have fully-defined shapes. `ValueError` will be
-raised if neither of these conditions holds.
-
-If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
-tensors is known, but individual dimensions may have shape `None`.
-In this case, for each enqueue the dimensions with value `None`
-may have a variable length; upon dequeue, the output tensors will be padded
-on the right to the maximum shape of the tensors in the current minibatch.
-For numbers, this padding takes value 0. For strings, this padding is
-the empty string. See `PaddingFIFOQueue` for more info.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queues are closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape()` method will have a 0th `Dimension` value of `None`, and
-operations that depend on fixed batch_size would fail.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors, representing a single element,
- to bucket. Nested lists are not supported.
-* <b>`which_bucket`</b>: An `int32` scalar Tensor taking a value in `[0, num_buckets)`.
-* <b>`batch_size`</b>: The new batch size pulled from the queue (all queues will have
- the same size). If a list is passed in then each bucket will have a
- different batch_size.
- (python int, int32 scalar or iterable of integers of length num_buckets).
-* <b>`num_buckets`</b>: A python integer, the number of buckets.
-* <b>`num_threads`</b>: An integer. The number of threads enqueuing `tensors`.
-* <b>`capacity`</b>: An integer. The maximum number of minibatches in the top queue,
- and also the maximum number of elements within each bucket.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batches to be smaller if there are insufficient items left in the queues.
-* <b>`keep_input`</b>: A `bool` scalar Tensor. If provided, this tensor controls
- whether the input is added to the queue or not. If it evaluates `True`,
- then `tensors` are added to the bucket; otherwise they are dropped. This
- tensor essentially acts as a filtering mechanism.
-* <b>`shared_name`</b>: (Optional). If set, the queues will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A tuple `(bucket, outputs)` where `bucket` is
- a `int32` scalar tensor and `outputs` is a list or
- dictionary of batched outputs corresponding to elements of `tensors`.
- Every step will receive a new bucket of outputs.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified and cannot be
-  inferred from the elements of `tensors`, or if `batch_size` is a sequence
-  but its length != num_buckets.
-
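-A hedged sketch: route examples into 4 buckets keyed on a length-derived id.
-`sequence` and `label` are assumed upstream tensors for a single example:
-
-```python
-length = tf.shape(sequence)[0]
-which = tf.minimum(length // 10, 3)   # int32 scalar in [0, 4)
-bucket_id, outputs = tf.contrib.training.bucket(
-    tensors=[sequence, label],
-    which_bucket=which,
-    batch_size=32,
-    num_buckets=4,
-    dynamic_pad=True)
-```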
-
-- - -
-
-### `tf.contrib.training.bucket_by_sequence_length(input_length, tensors, batch_size, bucket_boundaries, num_threads=1, capacity=32, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=True, shared_name=None, name=None)` {#bucket_by_sequence_length}
-
-Lazy bucketing of inputs according to their length.
-
-This method calls `tf.contrib.training.bucket` under the hood, after first
-subdividing the bucket boundaries into separate buckets and identifying which
-bucket the given `input_length` belongs to. See the documentation for
-`bucket` for details of the other arguments.
-
-##### Args:
-
-
-* <b>`input_length`</b>: `int32` scalar `Tensor`, the sequence length of tensors.
-* <b>`tensors`</b>: The list or dictionary of tensors, representing a single element,
- to bucket. Nested lists are not supported.
-* <b>`batch_size`</b>: The new batch size pulled from the queue (all queues will have
- the same size). If a list is passed in then each bucket will have a
- different batch_size.
- (python int, int32 scalar or iterable of integers of length num_buckets).
-* <b>`bucket_boundaries`</b>: int list, increasing non-negative numbers.
- The edges of the buckets to use when bucketing tensors. Two extra buckets
- are created, one for `input_length < bucket_boundaries[0]` and
- one for `input_length >= bucket_boundaries[-1]`.
-* <b>`num_threads`</b>: An integer. The number of threads enqueuing `tensors`.
-* <b>`capacity`</b>: An integer. The maximum number of minibatches in the top queue,
- and also the maximum number of elements within each bucket.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batches to be smaller if there are insufficient items left in the queues.
-* <b>`keep_input`</b>: A `bool` scalar Tensor. If provided, this tensor controls
- whether the input is added to the queue or not. If it evaluates `True`,
- then `tensors` are added to the bucket; otherwise they are dropped. This
- tensor essentially acts as a filtering mechanism.
-* <b>`shared_name`</b>: (Optional). If set, the queues will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A tuple `(sequence_length, outputs)` where `sequence_length` is
- a 1-D `Tensor` of size `batch_size` and `outputs` is a list or dictionary
- of batched, bucketed, outputs corresponding to elements of `tensors`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `bucket_boundaries` is not a list of python integers.
-* <b>`ValueError`</b>: if `bucket_boundaries` is empty or contains non-increasing
-  values, or if `batch_size` is a list and its length doesn't equal the
-  number of buckets.
-
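-A hedged sketch with boundaries 10 and 20 (three buckets in total, counting
-the two implicit outer buckets); `tokens` and `label` are assumed inputs:
-
-```python
-seq_len, batched = tf.contrib.training.bucket_by_sequence_length(
-    input_length=tf.shape(tokens)[0],
-    tensors=[tokens, label],
-    batch_size=32,
-    bucket_boundaries=[10, 20],
-    dynamic_pad=True)
-```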
-
diff --git a/tensorflow/g3doc/api_docs/python/contrib.util.md b/tensorflow/g3doc/api_docs/python/contrib.util.md
deleted file mode 100644
index a5a22eb27d..0000000000
--- a/tensorflow/g3doc/api_docs/python/contrib.util.md
+++ /dev/null
@@ -1,157 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Utilities (contrib)
-[TOC]
-
-Utilities for dealing with Tensors. See @{$python/contrib.util} guide.
-
-- - -
-
-### `tf.contrib.util.constant_value(tensor)` {#constant_value}
-
-Returns the constant value of the given tensor, if efficiently calculable.
-
-This function attempts to partially evaluate the given tensor, and
-returns its value as a numpy ndarray if this succeeds.
-
-TODO(mrry): Consider whether this function should use a registration
-mechanism like gradients and ShapeFunctions, so that it is easily
-extensible.
-
-NOTE: If `constant_value(tensor)` returns a non-`None` result, it will no
-longer be possible to feed a different value for `tensor`. This allows the
-result of this function to influence the graph that is constructed, and
-permits static shape optimizations.
-
-##### Args:
-
-
-* <b>`tensor`</b>: The Tensor to be evaluated.
-
-##### Returns:
-
- A numpy ndarray containing the constant value of the given `tensor`,
- or None if it cannot be calculated.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if tensor is not an ops.Tensor.
-
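-A hedged example: constants fold to numpy values, while placeholders do not:
-
-```python
-c = tf.constant([1, 2, 3])
-p = tf.placeholder(tf.int32, shape=[3])
-tf.contrib.util.constant_value(c)   # -> array([1, 2, 3], dtype=int32)
-tf.contrib.util.constant_value(p)   # -> None (not calculable at graph time)
-```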
-
-- - -
-
-### `tf.contrib.util.make_tensor_proto(values, dtype=None, shape=None, verify_shape=False)` {#make_tensor_proto}
-
-Create a TensorProto.
-
-##### Args:
-
-
-* <b>`values`</b>: Values to put in the TensorProto.
-* <b>`dtype`</b>: Optional tensor_pb2 DataType value.
-* <b>`shape`</b>: List of integers representing the dimensions of tensor.
-* <b>`verify_shape`</b>: Boolean that enables verification of a shape of values.
-
-##### Returns:
-
- A TensorProto. Depending on the type, it may contain data in the
- "tensor_content" attribute, which is not directly useful to Python programs.
- To access the values you should convert the proto back to a numpy ndarray
- with tensor_util.MakeNdarray(proto).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if unsupported types are provided.
-* <b>`ValueError`</b>: if arguments have inappropriate values, or if `verify_shape`
-  is `True` and the shape of `values` does not match the `shape` argument.
-
-make_tensor_proto accepts "values" of a python scalar, a python list, a
-numpy ndarray, or a numpy scalar.
-
-If "values" is a python scalar or a python list, make_tensor_proto
-first convert it to numpy ndarray. If dtype is None, the
-conversion tries its best to infer the right numpy data
-type. Otherwise, the resulting numpy array has a compatible data
-type with the given dtype.
-
-In either case above, the numpy ndarray (either the caller provided
-or the auto converted) must have the compatible type with dtype.
-
-make_tensor_proto then converts the numpy array to a tensor proto.
-
-If "shape" is None, the resulting tensor proto represents the numpy
-array precisely.
-
-Otherwise, "shape" specifies the tensor's shape and the numpy array
-can not have more elements than what "shape" specifies.
-
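-A hedged round-trip sketch (it assumes python floats are stored as
-float32 by default, as with `tf.constant`):
-
-```python
-import numpy as np
-
-proto = tf.contrib.util.make_tensor_proto([[1.0, 2.0], [3.0, 4.0]])
-array = tf.contrib.util.make_ndarray(proto)   # numpy ndarray, shape (2, 2)
-assert np.array_equal(array, [[1.0, 2.0], [3.0, 4.0]])
-```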
-
-- - -
-
-### `tf.contrib.util.make_ndarray(tensor)` {#make_ndarray}
-
-Create a numpy ndarray from a tensor.
-
-Create a numpy ndarray with the same shape and data as the tensor.
-
-##### Args:
-
-
-* <b>`tensor`</b>: A TensorProto.
-
-##### Returns:
-
- A numpy array with the tensor contents.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if tensor has unsupported type.
-
-
-- - -
-
-### `tf.contrib.util.ops_used_by_graph_def(graph_def)` {#ops_used_by_graph_def}
-
-Collect the list of ops used by a graph.
-
-Does not validate that the ops are all registered.
-
-##### Args:
-
-
-* <b>`graph_def`</b>: A `GraphDef` proto, as from `graph.as_graph_def()`.
-
-##### Returns:
-
- A list of strings, each naming an op used by the graph.
-
-
-- - -
-
-### `tf.contrib.util.stripped_op_list_for_graph(graph_def)` {#stripped_op_list_for_graph}
-
-Collect the stripped OpDefs for ops used by a graph.
-
-This function computes the `stripped_op_list` field of `MetaGraphDef` and
-similar protos. The result can be communicated from the producer to the
-consumer, which can then use the C++ function
-`RemoveNewDefaultAttrsFromGraphDef` to improve forwards compatibility.
-
-##### Args:
-
-
-* <b>`graph_def`</b>: A `GraphDef` proto, as from `graph.as_graph_def()`.
-
-##### Returns:
-
- An `OpList` of ops used by the graph.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If an unregistered op is used.
-
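-A hedged example that inspects a tiny graph with both utilities:
-
-```python
-g = tf.Graph()
-with g.as_default():
-  c = tf.constant(1.0) + tf.constant(2.0)
-graph_def = g.as_graph_def()
-tf.contrib.util.ops_used_by_graph_def(graph_def)   # e.g. ['Add', 'Const']
-op_list = tf.contrib.util.stripped_op_list_for_graph(graph_def)
-```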
-
diff --git a/tensorflow/g3doc/api_docs/python/control_flow_ops.md b/tensorflow/g3doc/api_docs/python/control_flow_ops.md
deleted file mode 100644
index cffc790d60..0000000000
--- a/tensorflow/g3doc/api_docs/python/control_flow_ops.md
+++ /dev/null
@@ -1,808 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Control Flow
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Control Flow Operations. See the @{python/control_flow_ops} guide.
-
-- - -
-
-### `tf.identity(input, name=None)` {#identity}
-
-Return a tensor with the same shape and contents as the input tensor or value.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
-
-- - -
-
-### `tf.tuple(tensors, name=None, control_inputs=None)` {#tuple}
-
-Group tensors together.
-
-This creates a tuple of tensors with the same values as the `tensors`
-argument, except that the value of each tensor is only returned after the
-values of all tensors have been computed.
-
-`control_inputs` contains additional ops that have to finish before this op
-finishes, but whose outputs are not returned.
-
-This can be used as a "join" mechanism for parallel computations: all the
-argument tensors can be computed in parallel, but the values of any tensor
-returned by `tuple` are only available after all the parallel computations
-are done.
-
-See also `group` and `with_dependencies`.
-
-##### Args:
-
-
-* <b>`tensors`</b>: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`.
-* <b>`name`</b>: (optional) A name to use as a `name_scope` for the operation.
-* <b>`control_inputs`</b>: List of additional ops to finish before returning.
-
-##### Returns:
-
- Same as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `tensors` does not contain any `Tensor` or `IndexedSlices`.
-* <b>`TypeError`</b>: If `control_inputs` is not a list of `Operation` or `Tensor`
- objects.
-
-
-- - -
-
-### `tf.group(*inputs, **kwargs)` {#group}
-
-Create an op that groups multiple operations.
-
-When this op finishes, all ops in `inputs` have finished. This op has no
-output.
-
-See also `tuple` and `with_dependencies`.
-
-##### Args:
-
-
-* <b>`*inputs`</b>: Zero or more tensors to group.
-* <b>`**kwargs`</b>: Optional parameters to pass when constructing the NodeDef.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- An Operation that executes all its inputs.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If an unknown keyword argument is provided.
-
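-A hedged sketch contrasting `group` with `tuple`: `group` yields an
-`Operation` run purely for its side effects, while `tuple` returns the
-tensors themselves, gated on all of them being computed:
-
-```python
-a = tf.constant(1.0)
-b = tf.constant(2.0)
-joined = tf.group(a, b)     # an Operation with no outputs
-va, vb = tf.tuple([a, b])   # tensors, available only after both are computed
-```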
-
-- - -
-
-### `tf.no_op(name=None)` {#no_op}
-
-Does nothing. Only useful as a placeholder for control edges.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-### `tf.count_up_to(ref, limit, name=None)` {#count_up_to}
-
-Increments 'ref' until it reaches 'limit'.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `int32`, `int64`.
- Should be from a scalar `Variable` node.
-* <b>`limit`</b>: An `int`.
- If incrementing ref would bring it above limit, instead generates an
- 'OutOfRange' error.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `ref`.
- A copy of the input before increment. If nothing else modifies the
- input, the values produced will all be distinct.
-
-
-- - -
-
-### `tf.cond(pred, fn1, fn2, name=None)` {#cond}
-
-Return either fn1() or fn2() based on the boolean predicate `pred`.
-
-`fn1` and `fn2` both return lists of output tensors. `fn1` and `fn2` must have
-the same non-zero number and type of outputs.
-
-Note that the conditional execution applies only to the operations defined in
-fn1 and fn2. Consider the following simple program:
-
-```python
-z = tf.multiply(a, b)
-result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
-```
-
-If x < y, the `tf.add` operation will be executed and the `tf.square`
-operation will not be executed. Since z is needed for at least one
-branch of the cond, the `tf.multiply` operation is always executed,
-unconditionally. Although this behavior is consistent with the dataflow
-model of TensorFlow, it has occasionally surprised users who expected
-lazier semantics.
-
-##### Args:
-
-
-* <b>`pred`</b>: A scalar determining whether to return the result of `fn1` or `fn2`.
-* <b>`fn1`</b>: The callable to be performed if pred is true.
-* <b>`fn2`</b>: The callable to be performed if `pred` is false.
-* <b>`name`</b>: Optional name prefix for the returned tensors.
-
-##### Returns:
-
- Tensors returned by the call to either `fn1` or `fn2`. If the callables
- return a singleton list, the element is extracted from the list.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn1` or `fn2` is not callable.
-* <b>`ValueError`</b>: if `fn1` and `fn2` do not return the same number of tensors, or
- return tensors of different types.
-
-
-* <b>`Example`</b>:
-
-```python
- x = tf.constant(2)
- y = tf.constant(5)
- def f1(): return tf.multiply(x, 17)
- def f2(): return tf.add(y, 23)
- r = tf.cond(tf.less(x, y), f1, f2)
- # r is set to f1().
- # Operations in f2 (e.g., tf.add) are not executed.
-```
-
-
-- - -
-
-### `tf.case(pred_fn_pairs, default, exclusive=False, name='case')` {#case}
-
-Create a case operation.
-
-The `pred_fn_pairs` parameter is a dict or list of pairs of size N.
-Each pair contains a boolean scalar tensor and a python callable that
-creates the tensors to be returned if the boolean evaluates to True.
-`default` is a callable generating a list of tensors. All the callables
-in `pred_fn_pairs` as well as `default` should return the same number
-and types of tensors.
-
-If `exclusive==True`, all predicates are evaluated, and an exception is
-thrown if more than one of the predicates evaluates to `True`.
-If `exclusive==False`, execution stops at the first predicate which
-evaluates to True, and the tensors generated by the corresponding function
-are returned immediately. If none of the predicates evaluate to True, this
-operation returns the tensors generated by `default`.
-
-Example 1:
- Pseudocode:
- ```
- if (x < y) return 17;
- else return 23;
- ```
-
- Expressions:
- ```
- f1 = lambda: tf.constant(17)
- f2 = lambda: tf.constant(23)
- r = case([(tf.less(x, y), f1)], default=f2)
- ```
-
-Example 2:
- Pseudocode:
- ```
- if (x < y && x > z) raise OpError("Only one predicate may evaluate true");
- if (x < y) return 17;
- else if (x > z) return 23;
- else return -1;
- ```
-
- Expressions:
- ```
- x = tf.constant(0)
- y = tf.constant(1)
- z = tf.constant(2)
- def f1(): return tf.constant(17)
- def f2(): return tf.constant(23)
- def f3(): return tf.constant(-1)
- r = case({tf.less(x, y): f1, tf.greater(x, z): f2},
- default=f3, exclusive=True)
- ```
-
-##### Args:
-
-
-* <b>`pred_fn_pairs`</b>: Dict or list of pairs of a boolean scalar tensor and a
- callable which returns a list of tensors.
-* <b>`default`</b>: A callable that returns a list of tensors.
-* <b>`exclusive`</b>: True iff at most one predicate is allowed to evaluate to `True`.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- The tensors returned by the first pair whose predicate evaluated to True, or
- those returned by `default` if none does.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `pred_fn_pairs` is not a list/dictionary.
-* <b>`TypeError`</b>: If `pred_fn_pairs` is a list but does not contain 2-tuples.
-* <b>`TypeError`</b>: If `fns[i]` is not callable for any i, or `default` is not
- callable.
-
-
-- - -
-
-### `tf.while_loop(cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#while_loop}
-
-Repeat `body` while the condition `cond` is true.
-
-`cond` is a callable returning a boolean scalar tensor. `body` is a callable
-returning a (possibly nested) tuple, namedtuple or list of tensors of the same
-arity (length and structure) and types as `loop_vars`. `loop_vars` is a
-(possibly nested) tuple, namedtuple or list of tensors that is passed to both
-`cond` and `body`. `cond` and `body` both take as many arguments as there are
-`loop_vars`.
-
-While `cond` evaluates to true, `body` is executed.
-
-In addition to regular Tensors or IndexedSlices, the body may accept and
-return TensorArray objects. The flows of the TensorArray objects will
-be appropriately forwarded between loops and during gradient calculations.
-
-For correctness, `tf.while_loop()` strictly enforces shape invariants for
-the loop variables. A shape invariant is a (possibly partial) shape that
-is unchanged across the iterations of the loop. An error will be raised
-if the shape of a loop variable after an iteration is determined to be more
-general than or incompatible with its shape invariant. For example, a shape
-of [11, None] is more general than a shape of [11, 17], and [11, 21] is not
-compatible with [11, 17]. By default (if the argument `shape_invariants` is
-not specified), it is assumed that the initial shape of each tensor in
-`loop_vars` is the same in every iteration. The `shape_invariants` argument
-allows the caller to specify a less specific shape invariant for each loop
-variable, which is needed if the shape varies between iterations. The
-[`Tensor.set_shape()`](../../api_docs/python/framework.md#Tensor.set_shape)
-function may also be used in the `body` function to indicate that
-the output loop variable has a particular shape. The shape invariants for
-SparseTensor and IndexedSlices are treated specially as follows:
-
-a) If a loop variable is a SparseTensor, the shape invariant must be
-TensorShape([r]) where r is the rank of the dense tensor represented
-by the sparse tensor. It means the shapes of the three tensors of the
-SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here
-is the shape of the SparseTensor.dense_shape property. It must be the shape of
-a vector.
-
-b) If a loop variable is an IndexedSlices, the shape invariant must be
-a shape invariant of the values tensor of the IndexedSlices. It means
-the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]],
-[shape.ndims]).
-
-`while_loop` implements non-strict semantics, enabling multiple iterations
-to run in parallel. The maximum number of parallel iterations can be
-controlled by `parallel_iterations`, which gives users some control over
-memory consumption and execution order. For correct programs, `while_loop`
-should return the same result for any parallel_iterations > 0.
-
-For training, TensorFlow remembers the tensors that are produced in the
-forward inference but needed in back propagation. These tensors can be a
-main source of memory consumption and often cause OOM problems when training
-on GPUs. When the flag swap_memory is true, we swap out these tensors from
-GPU to CPU. This for example allows us to train RNN models with very long
-sequences and large batches.
-
-##### Args:
-
-
-* <b>`cond`</b>: A callable that represents the termination condition of the loop.
-* <b>`body`</b>: A callable that represents the loop body.
-* <b>`loop_vars`</b>: A (possibly nested) tuple, namedtuple or list of numpy array,
- `Tensor`, and `TensorArray` objects.
-* <b>`shape_invariants`</b>: The shape invariants for the loop variables.
-* <b>`parallel_iterations`</b>: The number of iterations allowed to run in parallel.
- It must be a positive integer.
-* <b>`back_prop`</b>: Whether backprop is enabled for this while loop.
-* <b>`swap_memory`</b>: Whether GPU-CPU memory swap is enabled for this loop.
-* <b>`name`</b>: Optional name prefix for the returned tensors.
-
-##### Returns:
-
- The output tensors for the loop variables after the loop. When the length
- of `loop_vars` is 1 this is a Tensor, TensorArray or IndexedSlice and when
- the length of `loop_vars` is greater than 1 it returns a list.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `cond` or `body` is not callable.
-* <b>`ValueError`</b>: if `loop_vars` is empty.
-
-
-* <b>`Example`</b>:
-
- ```python
- i = tf.constant(0)
- c = lambda i: tf.less(i, 10)
- b = lambda i: tf.add(i, 1)
- r = tf.while_loop(c, b, [i])
- ```
-
-Example with nesting and a namedtuple:
-
- ```python
- import collections
- Pair = collections.namedtuple('Pair', 'j, k')
- ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2)))
- c = lambda i, p: i < 10
- b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k)))
- ijk_final = tf.while_loop(c, b, ijk_0)
- ```
-
-Example using shape_invariants:
-
- ```python
- i0 = tf.constant(0)
- m0 = tf.ones([2, 2])
- c = lambda i, m: i < 10
- b = lambda i, m: [i+1, tf.concat([m, m], axis=0)]
- tf.while_loop(
- c, b, loop_vars=[i0, m0],
- shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])
- ```
-
-
-- - -
-
-### `tf.logical_and(x, y, name=None)` {#logical_and}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.logical_not(x, name=None)` {#logical_not}
-
-Returns the truth value of NOT x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.logical_or(x, y, name=None)` {#logical_or}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.logical_xor(x, y, name='LogicalXor')` {#logical_xor}
-
-x ^ y = (x | y) & ~(x & y).
-
-
-- - -
-
-### `tf.equal(x, y, name=None)` {#equal}
-
-Returns the truth value of (x == y) element-wise.
-
-*NOTE*: `Equal` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.not_equal(x, y, name=None)` {#not_equal}
-
-Returns the truth value of (x != y) element-wise.
-
-*NOTE*: `NotEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.less(x, y, name=None)` {#less}
-
-Returns the truth value of (x < y) element-wise.
-
-*NOTE*: `Less` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.less_equal(x, y, name=None)` {#less_equal}
-
-Returns the truth value of (x <= y) element-wise.
-
-*NOTE*: `LessEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.greater(x, y, name=None)` {#greater}
-
-Returns the truth value of (x > y) element-wise.
-
-*NOTE*: `Greater` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.greater_equal(x, y, name=None)` {#greater_equal}
-
-Returns the truth value of (x >= y) element-wise.
-
-*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.where(condition, x=None, y=None, name=None)` {#where}
-
-Return the elements, either from `x` or `y`, depending on the `condition`.
-
-If both `x` and `y` are None, then this operation returns the coordinates of
-true elements of `condition`. The coordinates are returned in a 2-D tensor
-where the first dimension (rows) represents the number of true elements, and
-the second dimension (columns) represents the coordinates of the true
-elements. Keep in mind that the shape of the output tensor can vary depending
-on how many true values there are in the input. Indices are output in
-row-major order.
-
-If both `x` and `y` are non-None, they must have the same shape.
-The `condition` tensor must be a scalar if `x` and `y` are scalar.
-If `x` and `y` are vectors or higher rank, then `condition` must be either a
-vector with size matching the first dimension of `x`, or must have the same
-shape as `x`.
-
-The `condition` tensor acts as a mask that chooses, based on the value at each
-element, whether the corresponding element / row in the output should be taken
-from `x` (if true) or `y` (if false).
-
-If `condition` is a vector and `x` and `y` are higher rank matrices, then it
-chooses which row (outer dimension) to copy from `x` and `y`. If `condition`
-has the same shape as `x` and `y`, then it chooses which element to copy from
-`x` and `y`.
-
-##### Args:
-
-
-* <b>`condition`</b>: A `Tensor` of type `bool`
-* <b>`x`</b>: A Tensor which may have the same shape as `condition`. If `condition` is
- rank 1, `x` may have higher rank, but its first dimension must match the
- size of `condition`.
-* <b>`y`</b>: A `tensor` with the same shape and type as `x`.
-* <b>`name`</b>: A name of the operation (optional)
-
-##### Returns:
-
- A `Tensor` with the same type and shape as `x`, `y` if they are non-None.
- A `Tensor` with shape `(num_true, dim_size(condition))`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When exactly one of `x` or `y` is non-None.
-
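-A hedged example of the two modes:
-
-```python
-cond = tf.constant([True, False, True])
-# Coordinate mode: x and y omitted -> indices of true elements.
-coords = tf.where(cond)          # [[0], [2]], shape [num_true, 1]
-# Selection mode: elementwise choice between x and y.
-x = tf.constant([1, 2, 3])
-y = tf.constant([10, 20, 30])
-picked = tf.where(cond, x, y)    # [1, 20, 3]
-```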
-
-- - -
-
-### `tf.is_finite(x, name=None)` {#is_finite}
-
-Returns which elements of x are finite.
-
-@compatibility(numpy)
-Equivalent to np.isfinite
-@end_compatibility
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.is_inf(x, name=None)` {#is_inf}
-
-Returns which elements of x are Inf.
-
-@compatibility(numpy)
-Equivalent to np.isinf
-@end_compatibility
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.is_nan(x, name=None)` {#is_nan}
-
-Returns which elements of x are NaN.
-
-@compatibility(numpy)
-Equivalent to np.isnan
-@end_compatibility
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-### `tf.verify_tensor_all_finite(t, msg, name=None)` {#verify_tensor_all_finite}
-
-Assert that the tensor does not contain any NaN's or Inf's.
-
-##### Args:
-
-
-* <b>`t`</b>: Tensor to check.
-* <b>`msg`</b>: Message to log on failure.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- Same tensor as `t`.
-
-
-- - -
-
-### `tf.check_numerics(tensor, message, name=None)` {#check_numerics}
-
-Checks a tensor for NaN and Inf values.
-
-When run, reports an `InvalidArgument` error if `tensor` has any values
-that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is.
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`message`</b>: A `string`. Prefix of the error message.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`.
-
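-A hedged sketch; `activations` is an assumed tensor from your model:
-
-```python
-checked = tf.check_numerics(activations, message="activations blew up")
-# Downstream ops should consume `checked` so the check cannot be pruned.
-```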
-
-- - -
-
-### `tf.add_check_numerics_ops()` {#add_check_numerics_ops}
-
-Connect a `check_numerics` to every floating point tensor.
-
-`check_numerics` operations themselves are added for each `half`, `float`,
-or `double` tensor in the graph. For all ops in the graph, the
-`check_numerics` op for all of its (`half`, `float`, or `double`) inputs
-is guaranteed to run before the `check_numerics` op on any of its outputs.
-
-##### Returns:
-
- A `group` op depending on all `check_numerics` ops added.
-
-
-- - -
-
-### `tf.Assert(condition, data, summarize=None, name=None)` {#Assert}
-
-Asserts that the given condition is true.
-
-If `condition` evaluates to false, print the list of tensors in `data`.
-`summarize` determines how many entries of the tensors to print.
-
-NOTE: To ensure that Assert executes, one usually attaches a dependency:
-
-```python
-# Ensure the maximum element of x is less than or equal to 1
-assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
-with tf.control_dependencies([assert_op]):
- ... code using x ...
-```
-
-##### Args:
-
-
-* <b>`condition`</b>: The condition to evaluate.
-* <b>`data`</b>: The tensors to print out when condition is false.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`assert_op`</b>: An `Operation` that, when executed, raises a
- `tf.errors.InvalidArgumentError` if `condition` is not true.
-
-
-- - -
-
-### `tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None)` {#Print}
-
-Prints a list of tensors.
-
-This is an identity op with the side effect of printing `data` when
-evaluating.
-
-##### Args:
-
-
-* <b>`input_`</b>: A tensor passed through this op.
-* <b>`data`</b>: A list of tensors to print out when op is evaluated.
-* <b>`message`</b>: A string, prefix of the printed message.
-* <b>`first_n`</b>: Only log `first_n` number of times. Negative numbers log always;
- this is the default.
-* <b>`summarize`</b>: Only print this many entries of each tensor. If None, then a
- maximum of 3 elements are printed per input tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same tensor as `input_`.
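-
-For example, a minimal sketch; the message is written to standard error each
-time the op is evaluated (the exact formatting shown is indicative only):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1.0, 2.0, 3.0])
-x = tf.Print(x, [x, tf.reduce_sum(x)], message="x and sum: ", first_n=5)
-with tf.Session() as sess:
-  sess.run(x)  # logs something like "x and sum: [1 2 3][6]"
-```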
-
-
diff --git a/tensorflow/g3doc/api_docs/python/framework.md b/tensorflow/g3doc/api_docs/python/framework.md
deleted file mode 100644
index c3362bd254..0000000000
--- a/tensorflow/g3doc/api_docs/python/framework.md
+++ /dev/null
@@ -1,3969 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Building Graphs
-[TOC]
-
-Classes and functions for building TensorFlow graphs.
-
-## Core graph data structures
-
-- - -
-
-### `class tf.Graph` {#Graph}
-
-A TensorFlow computation, represented as a dataflow graph.
-
-A `Graph` contains a set of
-[`Operation`](../../api_docs/python/framework.md#Operation) objects,
-which represent units of computation; and
-[`Tensor`](../../api_docs/python/framework.md#Tensor) objects, which represent
-the units of data that flow between operations.
-
-A default `Graph` is always registered, and accessible by calling
-[`tf.get_default_graph()`](../../api_docs/python/framework.md#get_default_graph).
-To add an operation to the default graph, simply call one of the functions
-that defines a new `Operation`:
-
-```python
-c = tf.constant(4.0)
-assert c.graph is tf.get_default_graph()
-```
-
-Another typical usage involves the
-[`Graph.as_default()`](../../api_docs/python/framework.md#Graph.as_default)
-context manager, which overrides the current default graph for the
-lifetime of the context:
-
-```python
-g = tf.Graph()
-with g.as_default():
- # Define operations and tensors in `g`.
- c = tf.constant(30.0)
- assert c.graph is g
-```
-
-Important note: This class *is not* thread-safe for graph construction. All
-operations should be created from a single thread, or external
-synchronization must be provided. Unless otherwise specified, all methods
-are not thread-safe.
-
-- - -
-
-#### `tf.Graph.__init__()` {#Graph.__init__}
-
-Creates a new, empty Graph.
-
-
-- - -
-
-#### `tf.Graph.as_default()` {#Graph.as_default}
-
-Returns a context manager that makes this `Graph` the default graph.
-
-This method should be used if you want to create multiple graphs
-in the same process. For convenience, a global default graph is
-provided, and all ops will be added to this graph if you do not
-create a new graph explicitly. Use this method with the `with` keyword
-to specify that ops created within the scope of a block should be
-added to this graph.
-
-The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default graph in that
-thread, you must explicitly add a `with g.as_default():` in that
-thread's function.
-
-The following code examples are equivalent:
-
-```python
-# 1. Using Graph.as_default():
-g = tf.Graph()
-with g.as_default():
- c = tf.constant(5.0)
- assert c.graph is g
-
-# 2. Constructing and making default:
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0)
- assert c.graph is g
-```
-
-##### Returns:
-
- A context manager for using this graph as the default graph.
-
-
-- - -
-
-#### `tf.Graph.as_graph_def(from_version=None, add_shapes=False)` {#Graph.as_graph_def}
-
-Returns a serialized `GraphDef` representation of this graph.
-
-The serialized `GraphDef` can be imported into another `Graph`
-(using [`import_graph_def()`](#import_graph_def)) or used with the
-[C++ Session API](../../api_docs/cc/index.md).
-
-This method is thread-safe.
-
-##### Args:
-
-
-* <b>`from_version`</b>: Optional. If this is set, returns a `GraphDef`
- containing only the nodes that were added to this graph since
- its `version` property had the given value.
-* <b>`add_shapes`</b>: If true, adds an "_output_shapes" list attr to each
- node with the inferred shapes of each of its outputs.
-
-##### Returns:
-
- A [`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)
- protocol buffer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `graph_def` would be too large.
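-
-For example, a minimal sketch that serializes a one-node graph:
-
-```python
-import tensorflow as tf
-
-g = tf.Graph()
-with g.as_default():
-  tf.constant(1.0, name="c")
-print(g.as_graph_def())  # a GraphDef proto containing a single "Const" node
-```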
-
-
-- - -
-
-#### `tf.Graph.finalize()` {#Graph.finalize}
-
-Finalizes this graph, making it read-only.
-
-After calling `g.finalize()`, no new operations can be added to
-`g`. This method is used to ensure that no operations are added
-to a graph when it is shared between multiple threads, for example
-when using a [`QueueRunner`](../../api_docs/python/train.md#QueueRunner).
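-
-For example, a minimal sketch; once finalized, attempts to add ops raise:
-
-```python
-import tensorflow as tf
-
-g = tf.Graph()
-with g.as_default():
-  tf.constant(1.0)
-g.finalize()
-assert g.finalized
-with g.as_default():
-  tf.constant(2.0)  # raises RuntimeError: the graph is finalized
-```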
-
-
-- - -
-
-#### `tf.Graph.finalized` {#Graph.finalized}
-
-True if this graph has been finalized.
-
-
-
-- - -
-
-#### `tf.Graph.control_dependencies(control_inputs)` {#Graph.control_dependencies}
-
-Returns a context manager that specifies control dependencies.
-
-Use with the `with` keyword to specify that all operations constructed
-within the context should have control dependencies on
-`control_inputs`. For example:
-
-```python
-with g.control_dependencies([a, b, c]):
- # `d` and `e` will only run after `a`, `b`, and `c` have executed.
- d = ...
- e = ...
-```
-
-Multiple calls to `control_dependencies()` can be nested, and in
-that case a new `Operation` will have control dependencies on the union
-of `control_inputs` from all active contexts.
-
-```python
-with g.control_dependencies([a, b]):
- # Ops constructed here run after `a` and `b`.
- with g.control_dependencies([c, d]):
- # Ops constructed here run after `a`, `b`, `c`, and `d`.
-```
-
-You can pass None to clear the control dependencies:
-
-```python
-with g.control_dependencies([a, b]):
- # Ops constructed here run after `a` and `b`.
- with g.control_dependencies(None):
- # Ops constructed here run normally, not waiting for either `a` or `b`.
- with g.control_dependencies([c, d]):
- # Ops constructed here run after `c` and `d`, also not waiting
- # for either `a` or `b`.
-```
-
-*N.B.* The control dependencies context applies *only* to ops that
-are constructed within the context. Merely using an op or tensor
-in the context does not add a control dependency. The following
-example illustrates this point:
-
-```python
-# WRONG
-def my_func(pred, tensor):
- t = tf.matmul(tensor, tensor)
- with tf.control_dependencies([pred]):
- # The matmul op is created outside the context, so no control
- # dependency will be added.
- return t
-
-# RIGHT
-def my_func(pred, tensor):
- with tf.control_dependencies([pred]):
- # The matmul op is created in the context, so a control dependency
- # will be added.
- return tf.matmul(tensor, tensor)
-```
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: A list of `Operation` or `Tensor` objects which
- must be executed or computed before running the operations
- defined in the context. Can also be `None` to clear the control
- dependencies.
-
-##### Returns:
-
- A context manager that specifies control dependencies for all
- operations constructed within the context.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `control_inputs` is not a list of `Operation` or
- `Tensor` objects.
-
-
-- - -
-
-#### `tf.Graph.device(device_name_or_function)` {#Graph.device}
-
-Returns a context manager that specifies the default device to use.
-
-The `device_name_or_function` argument may either be a device name
-string, a device function, or None:
-
-* If it is a device name string, all operations constructed in
- this context will be assigned to the device with that name, unless
- overridden by a nested `device()` context.
-* If it is a function, it will be treated as a function from
- Operation objects to device name strings, and invoked each time
- a new Operation is created. The Operation will be assigned to
- the device with the returned name.
-* If it is None, all `device()` invocations from the enclosing context
- will be ignored.
-
-For information about the valid syntax of device name strings, see
-the documentation in
-[`DeviceNameUtils`](https://www.tensorflow.org/code/tensorflow/core/util/device_name_utils.h).
-
-For example:
-
-```python
-with g.device('/gpu:0'):
- # All operations constructed in this context will be placed
- # on GPU 0.
- with g.device(None):
- # All operations constructed in this context will have no
- # assigned device.
-
-# Defines a function from `Operation` to device string.
-def matmul_on_gpu(n):
- if n.type == "MatMul":
- return "/gpu:0"
- else:
- return "/cpu:0"
-
-with g.device(matmul_on_gpu):
- # All operations of type "MatMul" constructed in this context
- # will be placed on GPU 0; all other operations will be placed
- # on CPU 0.
-```
-
-**N.B.** The device scope may be overridden by op wrappers or
-other library code. For example, a variable assignment op
-`v.assign()` must be colocated with the `tf.Variable` `v`, and
-incompatible device scopes will be ignored.
-
-##### Args:
-
-
-* <b>`device_name_or_function`</b>: The device name or function to use in
- the context.
-
-##### Returns:
-
- A context manager that specifies the default device to use for newly
- created ops.
-
-
-- - -
-
-#### `tf.Graph.name_scope(name)` {#Graph.name_scope}
-
-Returns a context manager that creates hierarchical names for operations.
-
-A graph maintains a stack of name scopes. A `with name_scope(...):`
-statement pushes a new name onto the stack for the lifetime of the context.
-
-The `name` argument will be interpreted as follows:
-
-* A string (not ending with '/') will create a new name scope, in which
- `name` is appended to the prefix of all operations created in the
- context. If `name` has been used before, it will be made unique by
- calling `self.unique_name(name)`.
-* A scope previously captured from a `with g.name_scope(...) as
- scope:` statement will be treated as an "absolute" name scope, which
- makes it possible to re-enter existing scopes.
-* A value of `None` or the empty string will reset the current name scope
- to the top-level (empty) name scope.
-
-For example:
-
-```python
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0, name="c")
- assert c.op.name == "c"
- c_1 = tf.constant(6.0, name="c")
- assert c_1.op.name == "c_1"
-
- # Creates a scope called "nested"
- with g.name_scope("nested") as scope:
- nested_c = tf.constant(10.0, name="c")
- assert nested_c.op.name == "nested/c"
-
- # Creates a nested scope called "inner".
- with g.name_scope("inner"):
- nested_inner_c = tf.constant(20.0, name="c")
- assert nested_inner_c.op.name == "nested/inner/c"
-
- # Create a nested scope called "inner_1".
- with g.name_scope("inner"):
- nested_inner_1_c = tf.constant(30.0, name="c")
- assert nested_inner_1_c.op.name == "nested/inner_1/c"
-
- # Treats `scope` as an absolute name scope, and
- # switches to the "nested/" scope.
- with g.name_scope(scope):
- nested_d = tf.constant(40.0, name="d")
- assert nested_d.op.name == "nested/d"
-
- with g.name_scope(""):
- e = tf.constant(50.0, name="e")
- assert e.op.name == "e"
-```
-
-The name of the scope itself can be captured by `with
-g.name_scope(...) as scope:`, which stores the name of the scope
-in the variable `scope`. This value can be used to name an
-operation that represents the overall result of executing the ops
-in a scope. For example:
-
-```python
-inputs = tf.constant(...)
-with g.name_scope('my_layer') as scope:
- weights = tf.Variable(..., name="weights")
- biases = tf.Variable(..., name="biases")
- affine = tf.matmul(inputs, weights) + biases
- output = tf.nn.relu(affine, name=scope)
-```
-
-NOTE: This constructor validates the given `name`. Valid scope
-names match one of the following regular expressions:
-
- [A-Za-z0-9.][A-Za-z0-9_.\\-/]* (for scopes at the root)
- [A-Za-z0-9_.\\-/]* (for other scopes)
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the scope.
-
-##### Returns:
-
- A context manager that installs `name` as a new name scope.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `name` is not a valid scope name, according to the rules
- above.
-
-
-
-A `Graph` instance supports an arbitrary number of "collections"
-that are identified by name. For convenience when building a large
-graph, collections can store groups of related objects: for
-example, the `tf.Variable` uses a collection (named
-[`tf.GraphKeys.GLOBAL_VARIABLES`](../../api_docs/python/framework.md#GraphKeys)) for
-all variables that are created during the construction of a graph. The caller
-may define additional collections by specifying a new name.
-
-- - -
-
-#### `tf.Graph.add_to_collection(name, value)` {#Graph.add_to_collection}
-
-Stores `value` in the collection with the given `name`.
-
-Note that collections are not sets, so it is possible to add a value to
-a collection several times.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. The `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collection.
-
-
-- - -
-
-#### `tf.Graph.add_to_collections(names, value)` {#Graph.add_to_collections}
-
-Stores `value` in the collections given by `names`.
-
-Note that collections are not sets, so it is possible to add a value to
-a collection several times. This function makes sure that duplicates in
-`names` are ignored, but it will not check for pre-existing membership of
-`value` in any of the collections in `names`.
-
-`names` can be any iterable, but if `names` is a string, it is treated as a
-single collection name.
-
-##### Args:
-
-
-* <b>`names`</b>: The keys for the collections to add to. The `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collections.
-
-
-- - -
-
-#### `tf.Graph.get_collection(name, scope=None)` {#Graph.get_collection}
-
-Returns a list of values in the collection with the given `name`.
-
-This is different from `get_collection_ref()` which always returns the
-actual collection list if it exists in that it returns a new list each time
-it is called.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-* <b>`scope`</b>: (Optional.) If supplied, the resulting list is filtered to include
- only items whose `name` attribute matches using `re.match`. Items
- without a `name` attribute are never returned if a scope is supplied and
-  the choice of `re.match` means that a `scope` without special tokens
- filters by prefix.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or
- an empty list if no value has been added to that collection. The
- list contains the values in the order under which they were
- collected.
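-
-For example, a minimal sketch contrasting the copy semantics with
-`get_collection_ref()` below:
-
-```python
-import tensorflow as tf
-
-g = tf.Graph()
-with g.as_default():
-  c = tf.constant(1.0)
-g.add_to_collection("my_things", c)
-g.add_to_collection("my_things", c)     # collections are not sets
-values = g.get_collection("my_things")  # a new list: [c, c]
-values.pop()                            # mutating the copy ...
-assert len(g.get_collection("my_things")) == 2  # ... leaves the graph as-is
-```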
-
-
-- - -
-
-#### `tf.Graph.get_collection_ref(name)` {#Graph.get_collection_ref}
-
-Returns a list of values in the collection with the given `name`.
-
-If the collection exists, this returns the list itself, which can
-be modified in place to change the collection. If the collection does
-not exist, it is created as an empty list and the list is returned.
-
-This is different from `get_collection()` which always returns a copy of
-the collection list if it exists and never creates an empty collection.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or an empty
- list if no value has been added to that collection.
-
-
-
-- - -
-
-#### `tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True)` {#Graph.as_graph_element}
-
-Returns the object referred to by `obj`, as an `Operation` or `Tensor`.
-
-This function validates that `obj` represents an element of this
-graph, and gives an informative error message if it is not.
-
-This function is the canonical way to get/validate an object of
-one of the allowed types from an external argument reference in the
-Session API.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`obj`</b>: A `Tensor`, an `Operation`, or the name of a tensor or operation.
- Can also be any object with an `_as_graph_element()` method that returns
- a value of one of these types.
-* <b>`allow_tensor`</b>: If true, `obj` may refer to a `Tensor`.
-* <b>`allow_operation`</b>: If true, `obj` may refer to an `Operation`.
-
-##### Returns:
-
- The `Tensor` or `Operation` in the Graph corresponding to `obj`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `obj` is not one of the types that can be
-  converted to a graph element.
-* <b>`ValueError`</b>: If `obj` is of an appropriate type but invalid. For
- example, an invalid string.
-* <b>`KeyError`</b>: If `obj` is not an object in the graph.
-
-
-- - -
-
-#### `tf.Graph.get_operation_by_name(name)` {#Graph.get_operation_by_name}
-
-Returns the `Operation` with the given `name`.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the `Operation` to return.
-
-##### Returns:
-
- The `Operation` with the given `name`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `name` is not a string.
-* <b>`KeyError`</b>: If `name` does not correspond to an operation in this graph.
-
-
-- - -
-
-#### `tf.Graph.get_tensor_by_name(name)` {#Graph.get_tensor_by_name}
-
-Returns the `Tensor` with the given `name`.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the `Tensor` to return.
-
-##### Returns:
-
- The `Tensor` with the given `name`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `name` is not a string.
-* <b>`KeyError`</b>: If `name` does not correspond to a tensor in this graph.
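-
-For example, a minimal sketch; tensor names take the form
-`"<op_name>:<output_index>"`:
-
-```python
-import tensorflow as tf
-
-g = tf.Graph()
-with g.as_default():
-  c = tf.constant(1.0, name="c")
-assert g.get_tensor_by_name("c:0") is c
-assert g.get_operation_by_name("c") is c.op
-```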
-
-
-- - -
-
-#### `tf.Graph.get_operations()` {#Graph.get_operations}
-
-Return the list of operations in the graph.
-
-You can modify the operations in place, but modifications
-to the list, such as inserts and deletes, have no effect on the
-list of operations known to the graph.
-
-This method may be called concurrently from multiple threads.
-
-##### Returns:
-
- A list of Operations.
-
-
-
-- - -
-
-#### `tf.Graph.seed` {#Graph.seed}
-
-The graph-level random seed of this graph.
-
-
-- - -
-
-#### `tf.Graph.unique_name(name, mark_as_used=True)` {#Graph.unique_name}
-
-Return a unique operation name for `name`.
-
-Note: You rarely need to call `unique_name()` directly. Most of
-the time you just need to create `with g.name_scope()` blocks to
-generate structured names.
-
-`unique_name` is used to generate structured names, separated by
-`"/"`, to help identify operations when debugging a graph.
-Operation names are displayed in error messages reported by the
-TensorFlow runtime, and in various visualization tools such as
-TensorBoard.
-
-If `mark_as_used` is set to `True`, which is the default, a new
-unique name is created and marked as in use. If it's set to `False`,
-the unique name is returned without actually being marked as used.
-This is useful when the caller simply wants to know what the name
-to be created will be.
-
-##### Args:
-
-
-* <b>`name`</b>: The name for an operation.
-* <b>`mark_as_used`</b>: Whether to mark this name as being used.
-
-##### Returns:
-
- A string to be passed to `create_op()` that will be used
- to name the operation being created.
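-
-For example, a minimal sketch of the `mark_as_used` behavior:
-
-```python
-import tensorflow as tf
-
-g = tf.Graph()
-with g.as_default():
-  tf.constant(1.0, name="c")                   # takes the name "c"
-print(g.unique_name("c", mark_as_used=False))  # "c_1" (peek only)
-print(g.unique_name("c"))                      # "c_1" (now reserved)
-print(g.unique_name("c"))                      # "c_2"
-```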
-
-
-- - -
-
-#### `tf.Graph.version` {#Graph.version}
-
-Returns a version number that increases as ops are added to the graph.
-
-Note that this is unrelated to the
-[GraphDef version](#Graph.graph_def_version).
-
-
-- - -
-
-#### `tf.Graph.graph_def_versions` {#Graph.graph_def_versions}
-
-The GraphDef version information of this graph.
-
-For details on the meaning of each version, see
-[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto).
-
-##### Returns:
-
- A `VersionDef`.
-
-
-
-- - -
-
-#### `tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True, compute_device=True)` {#Graph.create_op}
-
-Creates an `Operation` in this graph.
-
-This is a low-level interface for creating an `Operation`. Most
-programs will not call this method directly, and instead use the
-Python op constructors, such as `tf.constant()`, which add ops to
-the default graph.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The `Operation` type to create. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-* <b>`inputs`</b>: A list of `Tensor` objects that will be inputs to the `Operation`.
-* <b>`dtypes`</b>: A list of `DType` objects that will be the types of the tensors
- that the operation produces.
-* <b>`input_types`</b>: (Optional.) A list of `DType`s that will be the types of
- the tensors that the operation consumes. By default, uses the base
- `DType` of each input in `inputs`. Operations that expect
- reference-typed inputs must specify `input_types` explicitly.
-* <b>`name`</b>: (Optional.) A string name for the operation. If not specified, a
- name is generated based on `op_type`.
-* <b>`attrs`</b>: (Optional.) A dictionary where the key is the attribute name (a
- string) and the value is the respective `attr` attribute of the
- `NodeDef` proto that will represent the operation (an `AttrValue`
- proto).
-* <b>`op_def`</b>: (Optional.) The `OpDef` proto that describes the `op_type` that
- the operation will have.
-* <b>`compute_shapes`</b>: (Optional.) If True, shape inference will be performed
- to compute the shapes of the outputs.
-* <b>`compute_device`</b>: (Optional.) If True, device functions will be executed
- to compute the device property of the Operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any of the inputs is not a `Tensor`.
-* <b>`ValueError`</b>: if colocation conflicts with existing device assignment.
-
-##### Returns:
-
- An `Operation` object.
-
-
-- - -
-
-#### `tf.Graph.gradient_override_map(op_type_map)` {#Graph.gradient_override_map}
-
-EXPERIMENTAL: A context manager for overriding gradient functions.
-
-This context manager can be used to override the gradient function
-that will be used for ops within the scope of the context.
-
-For example:
-
-```python
-@tf.RegisterGradient("CustomSquare")
-def _custom_square_grad(op, grad):
- # ...
-
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0)
- s_1 = tf.square(c) # Uses the default gradient for tf.square.
- with g.gradient_override_map({"Square": "CustomSquare"}):
-    s_2 = tf.square(c)  # Uses _custom_square_grad to compute the
-                        # gradient of s_2.
-```
-
-##### Args:
-
-
-* <b>`op_type_map`</b>: A dictionary mapping op type strings to alternative op
- type strings.
-
-##### Returns:
-
- A context manager that sets the alternative op type to be used for one
- or more ops created in that context.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_type_map` is not a dictionary mapping strings to
- strings.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.Graph.building_function` {#Graph.building_function}
-
-Returns True iff this graph represents a function.
-
-
-- - -
-
-#### `tf.Graph.clear_collection(name)` {#Graph.clear_collection}
-
-Clears all values in a collection.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. The `GraphKeys` class contains many
- standard names for collections.
-
-
-- - -
-
-#### `tf.Graph.colocate_with(op, ignore_existing=False)` {#Graph.colocate_with}
-
-Returns a context manager that specifies an op to colocate with.
-
-Note: this function is not for public use, only for internal libraries.
-
-For example:
-
-```python
-a = tf.Variable([1.0])
-with g.colocate_with(a):
- b = tf.constant(1.0)
- c = tf.add(a, b)
-```
-
-`b` and `c` will always be colocated with `a`, no matter where `a`
-is eventually placed.
-
-**NOTE** Using a colocation scope resets any existing device constraints.
-
-If `op` is `None` then `ignore_existing` must be `True` and the new
-scope resets all colocation and device constraints.
-
-##### Args:
-
-
-* <b>`op`</b>: The op to colocate all created ops with, or `None`.
-* <b>`ignore_existing`</b>: If true, only applies colocation of this op within
- the context, rather than applying all colocation properties
- on the stack. If `op` is `None`, this value must be `True`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if op is None but ignore_existing is False.
-
-##### Yields:
-
- A context manager that specifies the op with which to colocate
- newly created ops.
-
-
-- - -
-
-#### `tf.Graph.container(container_name)` {#Graph.container}
-
-Returns a context manager that specifies the resource container to use.
-
-Stateful operations, such as variables and queues, can maintain their
-states on devices so that they can be shared by multiple processes.
-A resource container is a string name under which these stateful
-operations are tracked. These resources can be released or cleared
-with `tf.Session.reset()`.
-
-For example:
-
-```python
-with g.container('experiment0'):
- # All stateful Operations constructed in this context will be placed
- # in resource container "experiment0".
- v1 = tf.Variable([1.0])
- v2 = tf.Variable([2.0])
- with g.container("experiment1"):
- # All stateful Operations constructed in this context will be
- # placed in resource container "experiment1".
- v3 = tf.Variable([3.0])
- q1 = tf.FIFOQueue(10, tf.float32)
-  # All stateful Operations constructed in this context will be
-  # created in resource container "experiment0".
-  v4 = tf.Variable([4.0])
-  q2 = tf.FIFOQueue(20, tf.float32)
-  with g.container(""):
-    # All stateful Operations constructed in this context will be
-    # placed in the default resource container.
-    v5 = tf.Variable([5.0])
-    q3 = tf.FIFOQueue(30, tf.float32)
-
-# Resets container "experiment0", after which the state of v1, v2, v4, q2
-# will become undefined (such as uninitialized).
-tf.Session.reset(target, ["experiment0"])
-```
-
-##### Args:
-
-
-* <b>`container_name`</b>: container name string.
-
-##### Returns:
-
- A context manager for defining resource containers for stateful ops,
- yields the container name.
-
-
-- - -
-
-#### `tf.Graph.get_all_collection_keys()` {#Graph.get_all_collection_keys}
-
-Returns a list of collections used in this graph.
-
-
-- - -
-
-#### `tf.Graph.is_feedable(tensor)` {#Graph.is_feedable}
-
-Returns `True` if and only if `tensor` is feedable.
-
-
-- - -
-
-#### `tf.Graph.is_fetchable(tensor_or_op)` {#Graph.is_fetchable}
-
-Returns `True` if and only if `tensor_or_op` is fetchable.
-
-
-- - -
-
-#### `tf.Graph.prevent_feeding(tensor)` {#Graph.prevent_feeding}
-
-Marks the given `tensor` as unfeedable in this graph.
-
-
-- - -
-
-#### `tf.Graph.prevent_fetching(op)` {#Graph.prevent_fetching}
-
-Marks the given `op` as unfetchable in this graph.
-
-
-
-- - -
-
-### `class tf.Operation` {#Operation}
-
-Represents a graph node that performs computation on tensors.
-
-An `Operation` is a node in a TensorFlow `Graph` that takes zero or
-more `Tensor` objects as input, and produces zero or more `Tensor`
-objects as output. Objects of type `Operation` are created by
-calling a Python op constructor (such as
-[`tf.matmul()`](../../api_docs/python/math_ops.md#matmul))
-or [`Graph.create_op()`](../../api_docs/python/framework.md#Graph.create_op).
-
-For example `c = tf.matmul(a, b)` creates an `Operation` of type
-"MatMul" that takes tensors `a` and `b` as input, and produces `c`
-as output.
-
-After the graph has been launched in a session, an `Operation` can
-be executed by passing it to
-[`Session.run()`](../../api_docs/python/client.md#Session.run).
-`op.run()` is a shortcut for calling `tf.get_default_session().run(op)`.
-- - -
-
-#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` {#Operation.__init__}
-
-Creates an `Operation`.
-
-NOTE: This constructor validates the name of the `Operation` (passed
-as `node_def.name`). Valid `Operation` names match the following
-regular expression:
-
- [A-Za-z0-9.][A-Za-z0-9_.\\-/]*
-
-##### Args:
-
-
-* <b>`node_def`</b>: `node_def_pb2.NodeDef`. `NodeDef` for the `Operation`.
- Used for attributes of `node_def_pb2.NodeDef`, typically `name`,
- `op`, and `device`. The `input` attribute is irrelevant here
- as it will be computed when generating the model.
-* <b>`g`</b>: `Graph`. The parent graph.
-* <b>`inputs`</b>: list of `Tensor` objects. The inputs to this `Operation`.
-* <b>`output_types`</b>: list of `DType` objects. List of the types of the
- `Tensors` computed by this operation. The length of this list indicates
- the number of output endpoints of the `Operation`.
-* <b>`control_inputs`</b>: list of operations or tensors from which to have a
- control dependency.
-* <b>`input_types`</b>: List of `DType` objects representing the
- types of the tensors accepted by the `Operation`. By default
- uses `[x.dtype.base_dtype for x in inputs]`. Operations that expect
- reference-typed inputs must specify these explicitly.
-* <b>`original_op`</b>: Optional. Used to associate the new `Operation` with an
- existing `Operation` (for example, a replica with the op that was
- replicated).
-* <b>`op_def`</b>: Optional. The `op_def_pb2.OpDef` proto that describes the
- op type that this `Operation` represents.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if control inputs are not Operations or Tensors,
- or if `node_def` is not a `NodeDef`,
- or if `g` is not a `Graph`,
- or if `inputs` are not tensors,
- or if `inputs` and `input_types` are incompatible.
-* <b>`ValueError`</b>: if the `node_def` name is not valid.
-
-
-- - -
-
-#### `tf.Operation.__repr__()` {#Operation.__repr__}
-
-
-
-
-- - -
-
-#### `tf.Operation.__str__()` {#Operation.__str__}
-
-
-
-
-- - -
-
-#### `tf.Operation.colocation_groups()` {#Operation.colocation_groups}
-
-Returns the list of colocation groups of the op.
-
-
-- - -
-
-#### `tf.Operation.control_inputs` {#Operation.control_inputs}
-
-The `Operation` objects on which this op has a control dependency.
-
-Before this op is executed, TensorFlow will ensure that the
-operations in `self.control_inputs` have finished executing. This
-mechanism can be used to run ops sequentially for performance
-reasons, or to ensure that the side effects of an op are observed
-in the correct order.
-
-##### Returns:
-
- A list of `Operation` objects.
-
-
-- - -
-
-#### `tf.Operation.device` {#Operation.device}
-
-The name of the device to which this op has been assigned, if any.
-
-##### Returns:
-
- The string name of the device to which this op has been
- assigned, or an empty string if it has not been assigned to a
- device.
-
-
-- - -
-
-#### `tf.Operation.get_attr(name)` {#Operation.get_attr}
-
-Returns the value of the attr of this op with the given `name`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the attr to fetch.
-
-##### Returns:
-
- The value of the attr, as a Python object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If this op does not have an attr with the given `name`.
-
-
-- - -
-
-#### `tf.Operation.graph` {#Operation.graph}
-
-The `Graph` that contains this operation.
-
-
-- - -
-
-#### `tf.Operation.inputs` {#Operation.inputs}
-
-The list of `Tensor` objects representing the data inputs of this op.
-
-
-- - -
-
-#### `tf.Operation.name` {#Operation.name}
-
-The full name of this operation.
-
-
-- - -
-
-#### `tf.Operation.node_def` {#Operation.node_def}
-
-Returns a serialized `NodeDef` representation of this operation.
-
-##### Returns:
-
- A
- [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/node_def.proto)
- protocol buffer.
-
-
-- - -
-
-#### `tf.Operation.op_def` {#Operation.op_def}
-
-Returns the `OpDef` proto that represents the type of this op.
-
-##### Returns:
-
- An
- [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto)
- protocol buffer.
-
-
-- - -
-
-#### `tf.Operation.outputs` {#Operation.outputs}
-
-The list of `Tensor` objects representing the outputs of this op.
-
-
-- - -
-
-#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run}
-
-Runs this operation in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for this operation.
-
-*N.B.* Before invoking `Operation.run()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run)
- for a description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to run this operation. If
-  none, the default session will be used.
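-
-For example, a minimal sketch; the `with` block installs a default session,
-so no explicit `session` argument is needed:
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(0.0)
-init = tf.global_variables_initializer()
-with tf.Session():
-  init.run()       # same as tf.get_default_session().run(init)
-  print(v.eval())  # 0.0
-```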
-
-
-- - -
-
-#### `tf.Operation.traceback` {#Operation.traceback}
-
-Returns the call stack from when this operation was constructed.
-
-
-- - -
-
-#### `tf.Operation.type` {#Operation.type}
-
-The type of the op (e.g. `"MatMul"`).
-
-
-- - -
-
-#### `tf.Operation.values()` {#Operation.values}
-
-DEPRECATED: Use outputs.
-
-
-
-- - -
-
-### `class tf.Tensor` {#Tensor}
-
-Represents one of the outputs of an `Operation`.
-
-A `Tensor` is a symbolic handle to one of the outputs of an
-`Operation`. It does not hold the values of that operation's output,
-but instead provides a means of computing those values in a
-TensorFlow [`Session`](../../api_docs/python/client.md#Session).
-
-This class has two primary purposes:
-
-1. A `Tensor` can be passed as an input to another `Operation`.
- This builds a dataflow connection between operations, which
- enables TensorFlow to execute an entire `Graph` that represents a
- large, multi-step computation.
-
-2. After the graph has been launched in a session, the value of the
- `Tensor` can be computed by passing it to
- [`Session.run()`](../../api_docs/python/client.md#Session.run).
- `t.eval()` is a shortcut for calling
- `tf.get_default_session().run(t)`.
-
-In the following example, `c`, `d`, and `e` are symbolic `Tensor`
-objects, whereas `result` is a numpy array that stores a concrete
-value:
-
-```python
-# Build a dataflow graph.
-c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
-d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
-e = tf.matmul(c, d)
-
-# Construct a `Session` to execute the graph.
-sess = tf.Session()
-
-# Execute the graph and store the value that `e` represents in `result`.
-result = sess.run(e)
-```
-- - -
-
-#### `tf.Tensor.__abs__(x, name=None)` {#Tensor.__abs__}
-
-Computes the absolute value of a tensor.
-
-Given a tensor of real numbers `x`, this operation returns a tensor
-containing the absolute value of each element in `x`. For example, if x is
-an input element and y is an output element, this operation computes
-\\(y = |x|\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor` of type `float32`, `float64`, `int32`, or
- `int64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` the same size and type as `x` with absolute
- values.
-
-
-- - -
-
-#### `tf.Tensor.__add__(x, y)` {#Tensor.__add__}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__and__(x, y)` {#Tensor.__and__}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__bool__()` {#Tensor.__bool__}
-
-Dummy method to prevent a tensor from being used as a Python `bool`.
-
-This overload raises a `TypeError` when the user inadvertently
-treats a `Tensor` as a boolean (e.g. in an `if` statement). For
-example:
-
-```python
-if tf.constant(True): # Will raise.
- # ...
-
-if tf.constant(5) < tf.constant(7): # Will raise.
- # ...
-```
-
-This disallows ambiguities between testing the Python value vs testing the
-dynamic condition of the `Tensor`.
-
-##### Raises:
-
- `TypeError`.
-
-
-- - -
-
-#### `tf.Tensor.__div__(x, y)` {#Tensor.__div__}
-
-Divide two values using Python 2 semantics. Used for Tensor.__div__.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-#### `tf.Tensor.__eq__(other)` {#Tensor.__eq__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__floordiv__(x, y)` {#Tensor.__floordiv__}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
-
-- - -
-
-#### `tf.Tensor.__ge__(x, y, name=None)` {#Tensor.__ge__}
-
-Returns the truth value of (x >= y) element-wise.
-
-*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__getitem__(tensor, slice_spec, var=None)` {#Tensor.__getitem__}
-
-Overload for Tensor.__getitem__.
-
-This operation extracts the specified region from the tensor.
-The notation is similar to NumPy, with the restriction that it
-currently supports only basic indexing. That means that using a
-tensor as input is not currently allowed.
-
-Some useful examples:
-
-```python
-# strip leading and trailing 2 elements
-foo = tf.constant([1,2,3,4,5,6])
-print(foo[2:-2].eval()) # => [3,4]
-
-# take every other row and reverse every column
-foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
-print(foo[::2,::-1].eval()) # => [[3,2,1], [9,8,7]]
-
-# Insert another dimension
-foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
-print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
-print(foo[:, tf.newaxis, :].eval()) # => [[[1,2,3]], [[4,5,6]], [[7,8,9]]]
-print(foo[:, :, tf.newaxis].eval()) # => [[[1],[2],[3]], [[4],[5],[6]], [[7],[8],[9]]]
-
-# Ellipses (3 equivalent operations)
-print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
-print(foo[tf.newaxis, ...].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
-print(foo[tf.newaxis].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
-```
-
-##### Notes:
-
- - `tf.newaxis` is `None` as in NumPy.
- - An implicit ellipsis is placed at the end of the `slice_spec`
- - NumPy advanced indexing is currently not supported.
-
-##### Args:
-
-
-* <b>`tensor`</b>: An ops.Tensor object.
-* <b>`slice_spec`</b>: The arguments to Tensor.__getitem__.
-* <b>`var`</b>: In the case of variable slice assignment, the Variable
- object to slice (i.e. tensor is the read-only view of this
- variable).
-
-##### Returns:
-
- The appropriate slice of "tensor", based on "slice_spec".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If a slice range is negative size.
-* <b>`TypeError`</b>: If the slice indices aren't int, slice, or Ellipsis.
-
-
-- - -
-
-#### `tf.Tensor.__gt__(x, y, name=None)` {#Tensor.__gt__}
-
-Returns the truth value of (x > y) element-wise.
-
-*NOTE*: `Greater` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__hash__()` {#Tensor.__hash__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__init__(op, value_index, dtype)` {#Tensor.__init__}
-
-Creates a new `Tensor`.
-
-##### Args:
-
-
-* <b>`op`</b>: An `Operation`. `Operation` that computes this tensor.
-* <b>`value_index`</b>: An `int`. Index of the operation's endpoint that produces
- this tensor.
-* <b>`dtype`</b>: A `DType`. Type of elements stored in this tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the op is not an `Operation`.
-
-
-- - -
-
-#### `tf.Tensor.__invert__(x, name=None)` {#Tensor.__invert__}
-
-Returns the truth value of NOT x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__iter__()` {#Tensor.__iter__}
-
-Dummy method to prevent iteration. Do not call.
-
-NOTE(mrry): If we register __getitem__ as an overloaded operator,
-Python will valiantly attempt to iterate over the Tensor from 0 to
-infinity. Declaring this method prevents this unintended
-behavior.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: when invoked.
-
-
-- - -
-
-#### `tf.Tensor.__le__(x, y, name=None)` {#Tensor.__le__}
-
-Returns the truth value of (x <= y) element-wise.
-
-*NOTE*: `LessEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__lt__(x, y, name=None)` {#Tensor.__lt__}
-
-Returns the truth value of (x < y) element-wise.
-
-*NOTE*: `Less` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__mod__(x, y)` {#Tensor.__mod__}
-
-Returns element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__mul__(x, y)` {#Tensor.__mul__}
-
-Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
-
-
-- - -
-
-#### `tf.Tensor.__neg__(x, name=None)` {#Tensor.__neg__}
-
-Computes numerical negative value element-wise.
-
-I.e., \\(y = -x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__nonzero__()` {#Tensor.__nonzero__}
-
-Dummy method to prevent a tensor from being used as a Python `bool`.
-
-This is the Python 2.x counterpart to `__bool__()` above.
-
-##### Raises:
-
- `TypeError`.
-
-
-- - -
-
-#### `tf.Tensor.__or__(x, y)` {#Tensor.__or__}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__pow__(x, y)` {#Tensor.__pow__}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Tensor.__radd__(y, x)` {#Tensor.__radd__}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__rand__(y, x)` {#Tensor.__rand__}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__rdiv__(y, x)` {#Tensor.__rdiv__}
-
-Divide two values using Python 2 semantics. Used for Tensor.__div__.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-#### `tf.Tensor.__repr__()` {#Tensor.__repr__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__rfloordiv__(y, x)` {#Tensor.__rfloordiv__}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
-
-- - -
-
-#### `tf.Tensor.__rmod__(y, x)` {#Tensor.__rmod__}
-
-Returns element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__rmul__(y, x)` {#Tensor.__rmul__}
-
-Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
-
-
-- - -
-
-#### `tf.Tensor.__ror__(y, x)` {#Tensor.__ror__}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__rpow__(y, x)` {#Tensor.__rpow__}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Tensor.__rsub__(y, x)` {#Tensor.__rsub__}
-
-Returns x - y element-wise.
-
-*NOTE*: `Sub` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__rtruediv__(y, x)` {#Tensor.__rtruediv__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__rxor__(y, x)` {#Tensor.__rxor__}
-
-x ^ y = (x | y) & ~(x & y).
-
-
-- - -
-
-#### `tf.Tensor.__str__()` {#Tensor.__str__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__sub__(x, y)` {#Tensor.__sub__}
-
-Returns x - y element-wise.
-
-*NOTE*: `Sub` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__truediv__(x, y)` {#Tensor.__truediv__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__xor__(x, y)` {#Tensor.__xor__}
-
-x ^ y = (x | y) & ~(x & y).
-
-
-- - -
-
-#### `tf.Tensor.consumers()` {#Tensor.consumers}
-
-Returns a list of `Operation`s that consume this tensor.
-
-##### Returns:
-
- A list of `Operation`s.
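-
-For example, a minimal sketch:
-
-```python
-import tensorflow as tf
-
-a = tf.constant(1.0)
-b = tf.constant(2.0)
-c = tf.add(a, b)
-assert a.consumers() == [c.op]  # the Add op consumes `a`
-```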
-
-
-- - -
-
-#### `tf.Tensor.device` {#Tensor.device}
-
-The name of the device on which this tensor will be produced, or None.
-
-
-- - -
-
-#### `tf.Tensor.dtype` {#Tensor.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval}
-
-Evaluates this tensor in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for the operation that produces this
-tensor.
-
-*N.B.* Before invoking `Tensor.eval()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
- description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
- none, the default session will be used.
-
-##### Returns:
-
- A numpy array corresponding to the value of this tensor.
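-
-For example, a minimal sketch using the default session installed by the
-`with` block:
-
-```python
-import tensorflow as tf
-
-c = tf.constant([[1.0, 2.0]])
-with tf.Session():
-  print(c.eval())  # [[ 1.  2.]], same as tf.get_default_session().run(c)
-```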
-
-
-- - -
-
-#### `tf.Tensor.get_shape()` {#Tensor.get_shape}
-
-Alias of Tensor.shape.
-
-
-- - -
-
-#### `tf.Tensor.graph` {#Tensor.graph}
-
-The `Graph` that contains this tensor.
-
-
-- - -
-
-#### `tf.Tensor.name` {#Tensor.name}
-
-The string name of this tensor.
-
-
-- - -
-
-#### `tf.Tensor.op` {#Tensor.op}
-
-The `Operation` that produces this tensor as an output.
-
-
-- - -
-
-#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape}
-
-Updates the shape of this tensor.
-
-This method can be called multiple times, and will merge the given
-`shape` with the current shape of this tensor. It can be used to
-provide additional information about the shape of this tensor that
-cannot be inferred from the graph alone. For example, this can be used
-to provide additional information about the shapes of images:
-
-```python
-_, image_data = tf.TFRecordReader(...).read(...)
-image = tf.image.decode_png(image_data, channels=3)
-
-# The height and width dimensions of `image` are data dependent, and
-# cannot be computed without executing the op.
-print(image.shape)
-==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])
-
-# We know that each image in this dataset is 28 x 28 pixels.
-image.set_shape([28, 28, 3])
-print(image.shape)
-==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
-```
-
-##### Args:
-
-
-* <b>`shape`</b>: A `TensorShape` representing the shape of this tensor.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `shape` is not compatible with the current shape of
- this tensor.
-
-
-- - -
-
-#### `tf.Tensor.shape` {#Tensor.shape}
-
-Returns the `TensorShape` that represents the shape of this tensor.
-
-The shape is computed using shape inference functions that are
-registered in the Op for each `Operation`. See
-[`TensorShape`](../../api_docs/python/framework.md#TensorShape)
-for more details of what a shape represents.
-
-The inferred shape of a tensor is used to provide shape
-information without having to launch the graph in a session. This
-can be used for debugging, and providing early error messages. For
-example:
-
-```python
-c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
-
-print(c.shape)
-==> TensorShape([Dimension(2), Dimension(3)])
-
-d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
-
-print(d.shape)
-==> TensorShape([Dimension(4), Dimension(2)])
-
-# Raises a ValueError, because `c` and `d` do not have compatible
-# inner dimensions.
-e = tf.matmul(c, d)
-
-f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
-
-print(f.shape)
-==> TensorShape([Dimension(3), Dimension(4)])
-```
-
-In some cases, the inferred shape may have unknown dimensions. If
-the caller has additional information about the values of these
-dimensions, `Tensor.set_shape()` can be used to augment the
-inferred shape.
-
-##### Returns:
-
- A `TensorShape` representing the shape of this tensor.
-
-
-- - -
-
-#### `tf.Tensor.value_index` {#Tensor.value_index}
-
-The index of this tensor in the outputs of its `Operation`.
-
-
-
-
-## Tensor types
-
-- - -
-
-### `class tf.DType` {#DType}
-
-Represents the type of the elements in a `Tensor`.
-
-The following `DType` objects are defined:
-
-* `tf.float16`: 16-bit half-precision floating-point.
-* `tf.float32`: 32-bit single-precision floating-point.
-* `tf.float64`: 64-bit double-precision floating-point.
-* `tf.bfloat16`: 16-bit truncated floating-point.
-* `tf.complex64`: 64-bit single-precision complex.
-* `tf.complex128`: 128-bit double-precision complex.
-* `tf.int8`: 8-bit signed integer.
-* `tf.uint8`: 8-bit unsigned integer.
-* `tf.uint16`: 16-bit unsigned integer.
-* `tf.int16`: 16-bit signed integer.
-* `tf.int32`: 32-bit signed integer.
-* `tf.int64`: 64-bit signed integer.
-* `tf.bool`: Boolean.
-* `tf.string`: String.
-* `tf.qint8`: Quantized 8-bit signed integer.
-* `tf.quint8`: Quantized 8-bit unsigned integer.
-* `tf.qint16`: Quantized 16-bit signed integer.
-* `tf.quint16`: Quantized 16-bit unsigned integer.
-* `tf.qint32`: Quantized 32-bit signed integer.
-* `tf.resource`: Handle to a mutable resource.
-
-In addition, variants of these types with the `_ref` suffix are
-defined for reference-typed tensors.
-
-The `tf.as_dtype()` function converts numpy types and string type
-names to a `DType` object.
-- - -
-
-#### `tf.DType.__eq__(other)` {#DType.__eq__}
-
-Returns True iff this DType refers to the same type as `other`.
-
-
-- - -
-
-#### `tf.DType.__hash__()` {#DType.__hash__}
-
-
-
-
-- - -
-
-#### `tf.DType.__init__(type_enum)` {#DType.__init__}
-
-Creates a new `DataType`.
-
-NOTE: In normal circumstances, you should not need to
-construct a `DataType` object directly. Instead, use the
-`tf.as_dtype()` function.
-
-##### Args:
-
-
-* <b>`type_enum`</b>: A `types_pb2.DataType` enum value.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `type_enum` is not a valid `types_pb2.DataType` value.
-
-
-- - -
-
-#### `tf.DType.__ne__(other)` {#DType.__ne__}
-
-Returns True iff self != other.
-
-
-- - -
-
-#### `tf.DType.__repr__()` {#DType.__repr__}
-
-
-
-
-- - -
-
-#### `tf.DType.__str__()` {#DType.__str__}
-
-
-
-
-- - -
-
-#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum}
-
-Returns a `types_pb2.DataType` enum value based on this `DType`.
-
-
-- - -
-
-#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype}
-
-Returns a `numpy.dtype` based on this `DType`.
-
-
-- - -
-
-#### `tf.DType.base_dtype` {#DType.base_dtype}
-
-Returns a non-reference `DType` based on this `DType`.
-
-
-- - -
-
-#### `tf.DType.is_bool` {#DType.is_bool}
-
-Returns whether this is a boolean data type.
-
-
-- - -
-
-#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with}
-
-Returns True if the `other` DType will be converted to this DType.
-
-The conversion rules are as follows:
-
-```python
-DType(T) .is_compatible_with(DType(T)) == True
-DType(T) .is_compatible_with(DType(T).as_ref) == True
-DType(T).as_ref.is_compatible_with(DType(T)) == False
-DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
-```
-
-##### Args:
-
-
-* <b>`other`</b>: A `DType` (or object that may be converted to a `DType`).
-
-##### Returns:
-
- True if a Tensor of the `other` `DType` will be implicitly converted to
- this `DType`.
-
-
-- - -
-
-#### `tf.DType.is_complex` {#DType.is_complex}
-
-Returns whether this is a complex floating point type.
-
-
-- - -
-
-#### `tf.DType.is_floating` {#DType.is_floating}
-
-Returns whether this is a (non-quantized, real) floating point type.
-
-
-- - -
-
-#### `tf.DType.is_integer` {#DType.is_integer}
-
-Returns whether this is a (non-quantized) integer type.
-
-
-- - -
-
-#### `tf.DType.is_numpy_compatible` {#DType.is_numpy_compatible}
-
-
-
-
-- - -
-
-#### `tf.DType.is_quantized` {#DType.is_quantized}
-
-Returns whether this is a quantized data type.
-
-
-- - -
-
-#### `tf.DType.is_unsigned` {#DType.is_unsigned}
-
-Returns whether this type is unsigned.
-
-Non-numeric, unordered, and quantized types are not considered unsigned, and
-this function returns `False`.
-
-##### Returns:
-
- Whether a `DType` is unsigned.
-
-
-- - -
-
-#### `tf.DType.limits` {#DType.limits}
-
-Return intensity limits, i.e. (min, max) tuple, of the dtype.
-
-##### Args:
-
-
-* <b>`clip_negative`</b>: bool, optional. If True, clip the negative range
-  (i.e. return 0 for min intensity) even if the image dtype allows
-  negative values.
-
-##### Returns:
-
-  min, max: tuple of lower and upper intensity limits.
-
-
-- - -
-
-#### `tf.DType.max` {#DType.max}
-
-Returns the maximum representable value in this data type.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if this is a non-numeric, unordered, or quantized type.
-
-
-- - -
-
-#### `tf.DType.min` {#DType.min}
-
-Returns the minimum representable value in this data type.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if this is a non-numeric, unordered, or quantized type.
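-
-For instance, a quick sketch of the ranges exposed by `min` and `max`:
-
-```python
-print(tf.float32.max)  # ==> 3.4028235e+38
-print(tf.int8.min)     # ==> -128
-# tf.string.min would raise TypeError: string is a non-numeric type.
-```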
-
-
-- - -
-
-#### `tf.DType.name` {#DType.name}
-
-Returns the string name for this `DType`.
-
-
-- - -
-
-#### `tf.DType.real_dtype` {#DType.real_dtype}
-
-Returns the dtype corresponding to this dtype's real part.
-
-
-- - -
-
-#### `tf.DType.size` {#DType.size}
-
-
-
-
-
-- - -
-
-### `tf.as_dtype(type_value)` {#as_dtype}
-
-Converts the given `type_value` to a `DType`.
-
-##### Args:
-
-
-* <b>`type_value`</b>: A value that can be converted to a `tf.DType`
- object. This may currently be a `tf.DType` object, a
- [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto),
- a string type name, or a `numpy.dtype`.
-
-##### Returns:
-
- A `DType` corresponding to `type_value`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `type_value` cannot be converted to a `DType`.
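-
-For example, all of the following refer to 32-bit floats (a small
-illustrative sketch):
-
-```python
-import numpy as np
-
-assert tf.as_dtype("float32") == tf.float32
-assert tf.as_dtype(np.float32) == tf.float32
-assert tf.as_dtype(tf.float32) == tf.float32
-```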
-
-
-
-## Utility functions
-
-- - -
-
-### `tf.device(device_name_or_function)` {#device}
-
-Wrapper for `Graph.device()` using the default graph.
-
-See
-[`Graph.device()`](../../api_docs/python/framework.md#Graph.device)
-for more details.
-
-##### Args:
-
-
-* <b>`device_name_or_function`</b>: The device name or function to use in
- the context.
-
-##### Returns:
-
- A context manager that specifies the default device to use for newly
- created ops.
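-
-For example, a minimal sketch pinning ops to devices (device names are
-illustrative and depend on the available hardware):
-
-```python
-with tf.device("/cpu:0"):
-  # Ops created here are placed on the first CPU.
-  a = tf.constant([1.0, 2.0])
-with tf.device("/gpu:0"):
-  # Ops created here are placed on the first GPU, if present.
-  b = tf.constant([3.0, 4.0])
-```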
-
-
-- - -
-
-### `tf.container(container_name)` {#container}
-
-Wrapper for `Graph.container()` using the default graph.
-
-##### Args:
-
-
-* <b>`container_name`</b>: The container string to use in the context.
-
-##### Returns:
-
- A context manager that specifies the default container to use for newly
- created stateful ops.
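-
-For example, a sketch that groups variable state into a named container
-(the container name is arbitrary):
-
-```python
-with tf.container("experiment0"):
-  # The state backing `v` lives in the "experiment0" container, which
-  # can later be cleared independently with `tf.Session.reset`.
-  v = tf.Variable(1.0, name="v")
-```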
-
-
-- - -
-
-### `tf.name_scope(name, default_name=None, values=None)` {#name_scope}
-
-Returns a context manager for use when defining a Python op.
-
-This context manager validates that the given `values` are from the
-same graph, makes that graph the default graph, and pushes a
-name scope in that graph (see
-[`Graph.name_scope()`](../../api_docs/python/framework.md#Graph.name_scope)
-for more details on that).
-
-For example, to define a new Python op called `my_op`:
-
-```python
-def my_op(a, b, c, name=None):
- with tf.name_scope(name, "MyOp", [a, b, c]) as scope:
- a = tf.convert_to_tensor(a, name="a")
- b = tf.convert_to_tensor(b, name="b")
- c = tf.convert_to_tensor(c, name="c")
- # Define some computation that uses `a`, `b`, and `c`.
- return foo_op(..., name=scope)
-```
-
-##### Args:
-
-
-* <b>`name`</b>: The name argument that is passed to the op function.
-* <b>`default_name`</b>: The default name to use if the `name` argument is `None`.
-* <b>`values`</b>: The list of `Tensor` arguments that are passed to the op function.
-
-##### Returns:
-
- A context manager for use in defining Python ops. Yields the name scope.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if neither `name` nor `default_name` is provided
- but `values` are.
-
-
-- - -
-
-### `tf.control_dependencies(control_inputs)` {#control_dependencies}
-
-Wrapper for `Graph.control_dependencies()` using the default graph.
-
-See [`Graph.control_dependencies()`](../../api_docs/python/framework.md#Graph.control_dependencies)
-for more details.
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: A list of `Operation` or `Tensor` objects which
- must be executed or computed before running the operations
- defined in the context. Can also be `None` to clear the control
- dependencies.
-
-##### Returns:
-
- A context manager that specifies control dependencies for all
- operations constructed within the context.
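-
-For example, a minimal sketch that forces an update to run before another
-op (note that only ops created inside the context pick up the dependency):
-
-```python
-counter = tf.Variable(0, name="counter")
-increment = counter.assign_add(1)
-x = tf.constant(42)
-with tf.control_dependencies([increment]):
-  # `out` is only computed after `increment` has executed.
-  out = tf.identity(x)
-```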
-
-
-- - -
-
-### `tf.convert_to_tensor(value, dtype=None, name=None, preferred_dtype=None)` {#convert_to_tensor}
-
-Converts the given `value` to a `Tensor`.
-
-This function converts Python objects of various types to `Tensor`
-objects. It accepts `Tensor` objects, numpy arrays, Python lists,
-and Python scalars. For example:
-
-```python
-import numpy as np
-
-def my_func(arg):
- arg = tf.convert_to_tensor(arg, dtype=tf.float32)
- return tf.matmul(arg, arg) + arg
-
-# The following calls are equivalent.
-value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
-value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
-value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
-```
-
-This function can be useful when composing a new operation in Python
-(such as `my_func` in the example above). All standard Python op
-constructors apply this function to each of their Tensor-valued
-inputs, which allows those ops to accept numpy arrays, Python lists,
-and scalars in addition to `Tensor` objects.
-
-##### Args:
-
-
-* <b>`value`</b>: An object whose type has a registered `Tensor` conversion function.
-* <b>`dtype`</b>: Optional element type for the returned tensor. If missing, the
- type is inferred from the type of `value`.
-* <b>`name`</b>: Optional name to use if a new `Tensor` is created.
-* <b>`preferred_dtype`</b>: Optional element type for the returned tensor,
- used when dtype is None. In some cases, a caller may not have a
- dtype in mind when converting to a tensor, so preferred_dtype
- can be used as a soft preference. If the conversion to
- `preferred_dtype` is not possible, this argument has no effect.
-
-##### Returns:
-
-  A `Tensor` based on `value`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If no conversion function is registered for `value`.
-* <b>`RuntimeError`</b>: If a registered conversion function returns an invalid value.
-
-
-- - -
-
-### `tf.convert_to_tensor_or_indexed_slices(value, dtype=None, name=None)` {#convert_to_tensor_or_indexed_slices}
-
-Converts the given object to a `Tensor` or an `IndexedSlices`.
-
-If `value` is an `IndexedSlices` or `SparseTensor` it is returned
-unmodified. Otherwise, it is converted to a `Tensor` using
-`convert_to_tensor()`.
-
-##### Args:
-
-
-* <b>`value`</b>: An `IndexedSlices`, `SparseTensor`, or an object that can be consumed
- by `convert_to_tensor()`.
-* <b>`dtype`</b>: (Optional.) The required `DType` of the returned `Tensor` or
- `IndexedSlices`.
-* <b>`name`</b>: (Optional.) A name to use if a new `Tensor` is created.
-
-##### Returns:
-
-  A `Tensor`, `IndexedSlices`, or `SparseTensor` based on `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `dtype` does not match the element type of `value`.
-
-
-- - -
-
-### `tf.convert_to_tensor_or_sparse_tensor(value, dtype=None, name=None)` {#convert_to_tensor_or_sparse_tensor}
-
-Converts value to a `SparseTensor` or `Tensor`.
-
-##### Args:
-
-
-* <b>`value`</b>: A `SparseTensor`, `SparseTensorValue`, or an object whose type has a
- registered `Tensor` conversion function.
-* <b>`dtype`</b>: Optional element type for the returned tensor. If missing, the
- type is inferred from the type of `value`.
-* <b>`name`</b>: Optional name to use if a new `Tensor` is created.
-
-##### Returns:
-
- A `SparseTensor` or `Tensor` based on `value`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If result type is incompatible with `dtype`.
-
-
-- - -
-
-### `tf.get_default_graph()` {#get_default_graph}
-
-Returns the default graph for the current thread.
-
-The returned graph will be the innermost graph on which a
-`Graph.as_default()` context has been entered, or a global default
-graph if none has been explicitly created.
-
-NOTE: The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default graph in that
-thread, you must explicitly add a `with g.as_default():` in that
-thread's function.
-
-##### Returns:
-
- The default `Graph` being used in the current thread.
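-
-For example:
-
-```python
-c = tf.constant(4.0)
-assert c.graph is tf.get_default_graph()
-```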
-
-
-- - -
-
-### `tf.reset_default_graph()` {#reset_default_graph}
-
-Clears the default graph stack and resets the global default graph.
-
-NOTE: The default graph is a property of the current thread. This
-function applies only to the current thread. Calling this function while
-a `tf.Session` or `tf.InteractiveSession` is active will result in undefined
-behavior. Using any previously created `tf.Operation` or `tf.Tensor` objects
-after calling this function will result in undefined behavior.
-
-
-- - -
-
-### `tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None, producer_op_list=None)` {#import_graph_def}
-
-Imports the graph from `graph_def` into the current default `Graph`.
-
-This function provides a way to import a serialized TensorFlow
-[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)
-protocol buffer, and extract individual objects in the `GraphDef` as
-[`Tensor`](#Tensor) and [`Operation`](#Operation) objects. Once extracted,
-these objects are placed into the current default `Graph`. See
-[`Graph.as_graph_def()`](#Graph.as_graph_def) for a way to create a `GraphDef`
-proto.
-
-##### Args:
-
-
-* <b>`graph_def`</b>: A `GraphDef` proto containing operations to be imported into
- the default graph.
-* <b>`input_map`</b>: A dictionary mapping input names (as strings) in `graph_def`
- to `Tensor` objects. The values of the named input tensors in the
- imported graph will be re-mapped to the respective `Tensor` values.
-* <b>`return_elements`</b>: A list of strings containing operation names in
- `graph_def` that will be returned as `Operation` objects; and/or
- tensor names in `graph_def` that will be returned as `Tensor` objects.
-* <b>`name`</b>: (Optional.) A prefix that will be prepended to the names in
- `graph_def`. Defaults to `"import"`.
-* <b>`op_dict`</b>: (Optional.) A dictionary mapping op type names to `OpDef` protos.
- Must contain an `OpDef` proto for each op type named in `graph_def`.
- If omitted, uses the `OpDef` protos registered in the global registry.
-* <b>`producer_op_list`</b>: (Optional.) An `OpList` proto with the (possibly stripped)
- list of `OpDef`s used by the producer of the graph. If provided, attrs
- for ops in `graph_def` that are not in `op_dict` that have their default
- value according to `producer_op_list` will be removed. This will allow
- some more `GraphDef`s produced by later binaries to be accepted by
- earlier binaries.
-
-##### Returns:
-
- A list of `Operation` and/or `Tensor` objects from the imported graph,
- corresponding to the names in `return_elements`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `graph_def` is not a `GraphDef` proto,
- `input_map` is not a dictionary mapping strings to `Tensor` objects,
- or `return_elements` is not a list of strings.
-* <b>`ValueError`</b>: If `input_map`, or `return_elements` contains names that
- do not appear in `graph_def`, or `graph_def` is not well-formed (e.g.
- it refers to an unknown tensor).
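-
-For example, a minimal sketch that round-trips a graph through a
-`GraphDef` (the tensor name `"c:0"` follows from the `name="c"` below):
-
-```python
-with tf.Graph().as_default() as g:
-  c = tf.constant(10.0, name="c")
-graph_def = g.as_graph_def()
-
-with tf.Graph().as_default():
-  # Pass name="" to avoid the default "import/" prefix on imported names.
-  c_imported, = tf.import_graph_def(
-      graph_def, return_elements=["c:0"], name="")
-```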
-
-
-- - -
-
-### `tf.load_file_system_library(library_filename)` {#load_file_system_library}
-
-Loads a TensorFlow plugin containing a file system implementation.
-
-Pass `library_filename` to a platform-specific mechanism for dynamically
-loading a library. The rules for determining the exact location of the
-library are platform-specific and are not documented here.
-
-##### Args:
-
-
-* <b>`library_filename`</b>: Path to the plugin.
- Relative or absolute filesystem path to a dynamic library file.
-
-##### Returns:
-
- None.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: when unable to load the library.
-
-
-- - -
-
-### `tf.load_op_library(library_filename)` {#load_op_library}
-
-Loads a TensorFlow plugin, containing custom ops and kernels.
-
-Pass `library_filename` to a platform-specific mechanism for dynamically
-loading a library. The rules for determining the exact location of the
-library are platform-specific and are not documented here. When the
-library is loaded, ops and kernels registered in the library via the
-`REGISTER_*` macros are made available in the TensorFlow process. Note
-that ops with the same name as an existing op are rejected and not
-registered with the process.
-
-##### Args:
-
-
-* <b>`library_filename`</b>: Path to the plugin.
- Relative or absolute filesystem path to a dynamic library file.
-
-##### Returns:
-
- A python module containing the Python wrappers for Ops defined in
- the plugin.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: when unable to load the library or get the python wrappers.
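-
-For example, a sketch of loading a compiled plugin (the path and the
-`zero_out` op are hypothetical; the library must be built against this
-TensorFlow installation):
-
-```python
-zero_out_module = tf.load_op_library("/path/to/zero_out.so")
-result = zero_out_module.zero_out([1, 2, 3])
-```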
-
-
-
-## Graph collections
-
-- - -
-
-### `tf.add_to_collection(name, value)` {#add_to_collection}
-
-Wrapper for `Graph.add_to_collection()` using the default graph.
-
-See [`Graph.add_to_collection()`](../../api_docs/python/framework.md#Graph.add_to_collection)
-for more details.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collection.
-
-
-- - -
-
-### `tf.get_collection(key, scope=None)` {#get_collection}
-
-Wrapper for `Graph.get_collection()` using the default graph.
-
-See [`Graph.get_collection()`](../../api_docs/python/framework.md#Graph.get_collection)
-for more details.
-
-##### Args:
-
-
-* <b>`key`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-* <b>`scope`</b>: (Optional.) If supplied, the resulting list is filtered to include
- only items whose `name` attribute matches using `re.match`. Items
- without a `name` attribute are never returned if a scope is supplied and
-  the choice of `re.match` means that a `scope` without special tokens
- filters by prefix.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or
- an empty list if no value has been added to that collection. The
- list contains the values in the order under which they were
- collected.
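-
-For example, a minimal sketch pairing `tf.add_to_collection` with
-`tf.get_collection` (the collection name is arbitrary):
-
-```python
-w = tf.Variable(tf.zeros([10]), name="w")
-tf.add_to_collection("my_weights", w)
-weights = tf.get_collection("my_weights")  # ==> [w]
-```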
-
-
-- - -
-
-### `tf.get_collection_ref(key)` {#get_collection_ref}
-
-Wrapper for `Graph.get_collection_ref()` using the default graph.
-
-See [`Graph.get_collection_ref()`](../../api_docs/python/framework.md#Graph.get_collection_ref)
-for more details.
-
-##### Args:
-
-
-* <b>`key`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or an empty
- list if no value has been added to that collection. Note that this returns
- the collection list itself, which can be modified in place to change the
- collection.
-
-
-- - -
-
-### `class tf.GraphKeys` {#GraphKeys}
-
-Standard names to use for graph collections.
-
-The standard library uses various well-known names to collect and
-retrieve values associated with a graph. For example, the
-`tf.Optimizer` subclasses default to optimizing the variables
-collected under `tf.GraphKeys.TRAINABLE_VARIABLES` if none is
-specified, but it is also possible to pass an explicit list of
-variables.
-
-The following standard keys are defined:
-
-* `GLOBAL_VARIABLES`: the default collection of `Variable` objects, shared
-  across a distributed environment (model variables are a subset of these). See
- [`tf.global_variables()`](../../api_docs/python/state_ops.md#global_variables)
- for more details.
- Commonly, all `TRAINABLE_VARIABLES` variables will be in `MODEL_VARIABLES`,
- and all `MODEL_VARIABLES` variables will be in `GLOBAL_VARIABLES`.
-* `LOCAL_VARIABLES`: the subset of `Variable` objects that are local to each
-  machine. Usually used for temporary variables, like counters.
- Note: use `tf.contrib.framework.local_variable` to add to this collection.
-* `MODEL_VARIABLES`: the subset of `Variable` objects that are used in the
- model for inference (feed forward). Note: use
- `tf.contrib.framework.model_variable` to add to this collection.
-* `TRAINABLE_VARIABLES`: the subset of `Variable` objects that will
- be trained by an optimizer. See
- [`tf.trainable_variables()`](../../api_docs/python/state_ops.md#trainable_variables)
- for more details.
-* `SUMMARIES`: the summary `Tensor` objects that have been created in the
- graph. See
- [`tf.summary.merge_all()`](../../api_docs/python/summary.md#merge_all)
- for more details.
-* `QUEUE_RUNNERS`: the `QueueRunner` objects that are used to
- produce input for a computation. See
- [`tf.start_queue_runners()`](../../api_docs/python/train.md#start_queue_runners)
- for more details.
-* `MOVING_AVERAGE_VARIABLES`: the subset of `Variable` objects that will also
- keep moving averages. See
- [`tf.moving_average_variables()`](../../api_docs/python/state_ops.md#moving_average_variables)
- for more details.
-* `REGULARIZATION_LOSSES`: regularization losses collected during graph
- construction.
-* `WEIGHTS`: weights inside neural network layers
-* `BIASES`: biases inside neural network layers
-* `ACTIVATIONS`: activations of neural network layers
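-
-For example, a minimal sketch of retrieving variables via these keys:
-
-```python
-train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
-global_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
-```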
-
-
-## Defining new operations
-
-- - -
-
-### `class tf.RegisterGradient` {#RegisterGradient}
-
-A decorator for registering the gradient function for an op type.
-
-This decorator is only used when defining a new op type. For an op
-with `m` inputs and `n` outputs, the gradient function is a function
-that takes the original `Operation` and `n` `Tensor` objects
-(representing the gradients with respect to each output of the op),
-and returns `m` `Tensor` objects (representing the partial gradients
-with respect to each input of the op).
-
-For example, assuming that operations of type `"Sub"` take two
-inputs `x` and `y`, and return a single output `x - y`, the
-following gradient function would be registered:
-
-```python
-@tf.RegisterGradient("Sub")
-def _sub_grad(unused_op, grad):
- return grad, tf.negative(grad)
-```
-
-The decorator argument `op_type` is the string type of an
-operation. This corresponds to the `OpDef.name` field for the proto
-that defines the operation.
-
-- - -
-
-#### `tf.RegisterGradient.__init__(op_type)` {#RegisterGradient.__init__}
-
-Creates a new decorator with `op_type` as the Operation type.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The string type of an operation. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.RegisterGradient.__call__(f)` {#RegisterGradient.__call__}
-
-Registers the function `f` as gradient function for `op_type`.
-
-
-
-- - -
-
-### `tf.NotDifferentiable(op_type)` {#NotDifferentiable}
-
-Specifies that ops of type `op_type` are not differentiable.
-
-This function should *not* be used for operations that have a
-well-defined gradient that is not yet implemented.
-
-This function is only used when defining a new op type. It may be
-used for ops such as `tf.size()` that are not differentiable. For
-example:
-
-```python
-tf.NotDifferentiable("Size")
-```
-
-The gradient computed for 'op_type' will then propagate zeros.
-
-For ops that have a well-defined gradient but are not yet implemented,
-no declaration should be made, and an error *must* be thrown if
-an attempt to request its gradient is made.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The string type of an operation. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_type` is not a string.
-
-
-- - -
-
-### `tf.NoGradient(op_type)` {#NoGradient}
-
-Specifies that ops of type `op_type` are not differentiable. This is an
-alias for `tf.NotDifferentiable` above.
-
-This function should *not* be used for operations that have a
-well-defined gradient that is not yet implemented.
-
-This function is only used when defining a new op type. It may be
-used for ops such as `tf.size()` that are not differentiable. For
-example:
-
-```python
-tf.NotDifferentiable("Size")
-```
-
-The gradient computed for 'op_type' will then propagate zeros.
-
-For ops that have a well-defined gradient but are not yet implemented,
-no declaration should be made, and an error *must* be thrown if
-an attempt to request its gradient is made.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The string type of an operation. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_type` is not a string.
-
-
-- - -
-
-### `class tf.TensorShape` {#TensorShape}
-
-Represents the shape of a `Tensor`.
-
-A `TensorShape` represents a possibly-partial shape specification for a
-`Tensor`. It may be one of the following:
-
-* *Fully-known shape:* has a known number of dimensions and a known size
- for each dimension.
-* *Partially-known shape:* has a known number of dimensions, and an unknown
-  size for one or more dimensions.
-* *Unknown shape:* has an unknown number of dimensions, and an unknown
- size in all dimensions.
-
-If a tensor is produced by an operation of type `"Foo"`, its shape
-may be inferred if there is a registered shape function for
-`"Foo"`. See [`Shape functions in
-C++`](../../how_tos/adding_an_op/index.md#shape-functions-in-c) for
-details of shape functions and how to register them. Alternatively,
-the shape may be set explicitly using
-[`Tensor.set_shape()`](../../api_docs/python/framework.md#Tensor.set_shape).
-- - -
-
-#### `tf.TensorShape.__bool__()` {#TensorShape.__bool__}
-
-Returns True if this shape contains non-zero information.
-
-
-- - -
-
-#### `tf.TensorShape.__eq__(other)` {#TensorShape.__eq__}
-
-Returns True if `self` is equivalent to `other`.
-
-
-- - -
-
-#### `tf.TensorShape.__getitem__(key)` {#TensorShape.__getitem__}
-
-Returns the value of a dimension or a shape, depending on the key.
-
-##### Args:
-
-
-* <b>`key`</b>: If `key` is an integer, returns the dimension at that index;
- otherwise if `key` is a slice, returns a TensorShape whose
- dimensions are those selected by the slice from `self`.
-
-##### Returns:
-
- A dimension if `key` is an integer, or a `TensorShape` if `key` is a
- slice.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `key` is a slice, and any of its elements are negative, or
- if `self` is completely unknown and the step is set.
-
-
-- - -
-
-#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__}
-
-Creates a new TensorShape with the given dimensions.
-
-##### Args:
-
-
-* <b>`dims`</b>: A list of Dimensions, or None if the shape is unspecified.
-* <b>`DEPRECATED`</b>: A single integer is treated as a singleton list.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If dims cannot be converted to a list of dimensions.
-
-
-- - -
-
-#### `tf.TensorShape.__iter__()` {#TensorShape.__iter__}
-
-Returns `self.dims` if the rank is known, otherwise raises ValueError.
-
-
-- - -
-
-#### `tf.TensorShape.__len__()` {#TensorShape.__len__}
-
-Returns the rank of this shape, or raises ValueError if unspecified.
-
-
-- - -
-
-#### `tf.TensorShape.__ne__(other)` {#TensorShape.__ne__}
-
-Returns True if `self` is known to be different from `other`.
-
-
-- - -
-
-#### `tf.TensorShape.__nonzero__()` {#TensorShape.__nonzero__}
-
-Returns True if this shape contains non-zero information.
-
-
-- - -
-
-#### `tf.TensorShape.__repr__()` {#TensorShape.__repr__}
-
-
-
-
-- - -
-
-#### `tf.TensorShape.__str__()` {#TensorShape.__str__}
-
-
-
-
-- - -
-
-#### `tf.TensorShape.as_list()` {#TensorShape.as_list}
-
-Returns a list of integers or `None` for each dimension.
-
-##### Returns:
-
- A list of integers or `None` for each dimension.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` is an unknown shape with an unknown rank.
-
-
-- - -
-
-#### `tf.TensorShape.as_proto()` {#TensorShape.as_proto}
-
-Returns this shape as a `TensorShapeProto`.
-
-
-- - -
-
-#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank}
-
-Raises an exception if `self` is not compatible with the given `rank`.
-
-##### Args:
-
-
-* <b>`rank`</b>: An integer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
-
-
-- - -
-
-#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with}
-
-Raises exception if `self` and `other` do not represent the same shape.
-
-This method can be used to assert that there exists a shape that both
-`self` and `other` represent.
-
-##### Args:
-
-
-* <b>`other`</b>: Another TensorShape.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` do not represent the same shape.
-
-
-- - -
-
-#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined}
-
-Raises an exception if `self` is not fully defined in every dimension.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not have a known value for every dimension.
-
-
-- - -
-
-#### `tf.TensorShape.assert_same_rank(other)` {#TensorShape.assert_same_rank}
-
-Raises an exception if `self` and `other` do not have compatible ranks.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` do not represent shapes with the
- same rank.
-
-
-- - -
-
-#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate}
-
-Returns the concatenation of the dimensions in `self` and `other`.
-
-*N.B.* If either `self` or `other` is completely unknown,
-concatenation will discard information about the other shape. In
-future, we might support concatenation that preserves this
-information for use with slicing.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `TensorShape`.
-
-##### Returns:
-
- A `TensorShape` whose dimensions are the concatenation of the
- dimensions in `self` and `other`.
-
-
-- - -
-
-#### `tf.TensorShape.dims` {#TensorShape.dims}
-
-Returns a list of Dimensions, or None if the shape is unspecified.
-
-
-- - -
-
-#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with}
-
-Returns True iff `self` is compatible with `other`.
-
-Two possibly-partially-defined shapes are compatible if there
-exists a fully-defined shape that both shapes can represent. Thus,
-compatibility allows the shape inference code to reason about
-partially-defined shapes. For example:
-
-* TensorShape(None) is compatible with all shapes.
-
-* TensorShape([None, None]) is compatible with all two-dimensional
- shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is
- not compatible with, for example, TensorShape([None]) or
- TensorShape([None, None, None]).
-
-* TensorShape([32, None]) is compatible with all two-dimensional shapes
- with size 32 in the 0th dimension, and also TensorShape([None, None])
- and TensorShape(None). It is not compatible with, for example,
- TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]).
-
-* TensorShape([32, 784]) is compatible with itself, and also
- TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None,
- None]) and TensorShape(None). It is not compatible with, for example,
- TensorShape([32, 1, 784]) or TensorShape([None]).
-
-The compatibility relation is reflexive and symmetric, but not
-transitive. For example, TensorShape([32, 784]) is compatible with
-TensorShape(None), and TensorShape(None) is compatible with
-TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
-TensorShape([4, 4]).
-
-##### Args:
-
-
-* <b>`other`</b>: Another TensorShape.
-
-##### Returns:
-
- True iff `self` is compatible with `other`.
-
-
-- - -
-
-#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined}
-
-Returns True iff `self` is fully defined in every dimension.
-
-
-- - -
-
-#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with}
-
-Returns a `TensorShape` combining the information in `self` and `other`.
-
-The dimensions in `self` and `other` are merged elementwise,
-according to the rules defined for `Dimension.merge_with()`.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `TensorShape`.
-
-##### Returns:
-
- A `TensorShape` containing the combined information of `self` and
- `other`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` are not compatible.
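-
-For example:
-
-```python
-s = tf.TensorShape([None, 784]).merge_with(tf.TensorShape([32, None]))
-print(s)  # ==> (32, 784)
-```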
-
-
-- - -
-
-#### `tf.TensorShape.ndims` {#TensorShape.ndims}
-
-Returns the rank of this shape, or None if it is unspecified.
-
-
-- - -
-
-#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements}
-
-Returns the total number of elements, or None for incomplete shapes.
-
-
-- - -
-
-#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank}
-
-Returns a shape based on `self` with the given rank.
-
-This method promotes a completely unknown shape to one with a
-known rank.
-
-##### Args:
-
-
-* <b>`rank`</b>: An integer.
-
-##### Returns:
-
- A shape that is at least as specific as `self` with the given rank.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
-
-
-- - -
-
-#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least}
-
-Returns a shape based on `self` with at least the given rank.
-
-##### Args:
-
-
-* <b>`rank`</b>: An integer.
-
-##### Returns:
-
- A shape that is at least as specific as `self` with at least the given
- rank.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not represent a shape with at least the given
- `rank`.
-
-
-- - -
-
-#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most}
-
-Returns a shape based on `self` with at most the given rank.
-
-##### Args:
-
-
-* <b>`rank`</b>: An integer.
-
-##### Returns:
-
- A shape that is at least as specific as `self` with at most the given
- rank.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not represent a shape with at most the given
- `rank`.
-
-
-
-- - -
-
-### `class tf.Dimension` {#Dimension}
-
-Represents the value of one dimension in a TensorShape.
-- - -
-
-#### `tf.Dimension.__add__(other)` {#Dimension.__add__}
-
-Returns the sum of `self` and `other`.
-
-Dimensions are summed as follows:
-
- Dimension(m) + Dimension(n) == Dimension(m + n)
- Dimension(m) + Dimension(None) == Dimension(None)
- Dimension(None) + Dimension(n) == Dimension(None)
- Dimension(None) + Dimension(None) == Dimension(None)
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension whose value is the sum of `self` and `other`.
-
-
-- - -
-
-#### `tf.Dimension.__div__(other)` {#Dimension.__div__}
-
-DEPRECATED: Use `__floordiv__` via `x // y` instead.
-
-This function exists only for backwards compatibility purposes; new code
-should use `__floordiv__` via the syntax `x // y`. Using `x // y`
-communicates clearly that the result rounds down, and is forward compatible
-to Python 3.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `Dimension`.
-
-##### Returns:
-
- A `Dimension` whose value is the integer quotient of `self` and `other`.
-
-
-- - -
-
-#### `tf.Dimension.__eq__(other)` {#Dimension.__eq__}
-
-Returns true if `other` has the same known value as this Dimension.
-
-
-- - -
-
-#### `tf.Dimension.__floordiv__(other)` {#Dimension.__floordiv__}
-
-Returns the quotient of `self` and `other` rounded down.
-
-Dimensions are divided as follows:
-
- Dimension(m) // Dimension(n) == Dimension(m // n)
- Dimension(m) // Dimension(None) == Dimension(None)
- Dimension(None) // Dimension(n) == Dimension(None)
- Dimension(None) // Dimension(None) == Dimension(None)
-
-##### Args:
-
-
-* <b>`other`</b>: Another `Dimension`.
-
-##### Returns:
-
- A `Dimension` whose value is the integer quotient of `self` and `other`.
-
-
-- - -
-
-#### `tf.Dimension.__ge__(other)` {#Dimension.__ge__}
-
-Returns True if `self` is known to be greater than or equal to `other`.
-
-Dimensions are compared as follows:
-
- Dimension(m) >= Dimension(n) == m >= n
- Dimension(m) >= Dimension(None) == None
- Dimension(None) >= Dimension(n) == None
- Dimension(None) >= Dimension(None) == None
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- The value of `self.value >= other.value` if both are known, otherwise
- None.
-
-
-- - -
-
-#### `tf.Dimension.__gt__(other)` {#Dimension.__gt__}
-
-Returns True if `self` is known to be greater than `other`.
-
-Dimensions are compared as follows:
-
- Dimension(m) > Dimension(n) == m > n
- Dimension(m) > Dimension(None) == None
- Dimension(None) > Dimension(n) == None
- Dimension(None) > Dimension(None) == None
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- The value of `self.value > other.value` if both are known, otherwise
- None.
-
-
-- - -
-
-#### `tf.Dimension.__index__()` {#Dimension.__index__}
-
-
-
-
-- - -
-
-#### `tf.Dimension.__init__(value)` {#Dimension.__init__}
-
-Creates a new Dimension with the given value.
-
-
-- - -
-
-#### `tf.Dimension.__int__()` {#Dimension.__int__}
-
-
-
-
-- - -
-
-#### `tf.Dimension.__le__(other)` {#Dimension.__le__}
-
-Returns True if `self` is known to be less than or equal to `other`.
-
-Dimensions are compared as follows:
-
- Dimension(m) <= Dimension(n) == m <= n
- Dimension(m) <= Dimension(None) == None
- Dimension(None) <= Dimension(n) == None
- Dimension(None) <= Dimension(None) == None
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- The value of `self.value <= other.value` if both are known, otherwise
- None.
-
-
-- - -
-
-#### `tf.Dimension.__lt__(other)` {#Dimension.__lt__}
-
-Returns True if `self` is known to be less than `other`.
-
-Dimensions are compared as follows:
-
- Dimension(m) < Dimension(n) == m < n
- Dimension(m) < Dimension(None) == None
- Dimension(None) < Dimension(n) == None
- Dimension(None) < Dimension(None) == None
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- The value of `self.value < other.value` if both are known, otherwise
- None.
-
-
-- - -
-
-#### `tf.Dimension.__mod__(other)` {#Dimension.__mod__}
-
-Returns `self` modulo `other`.
-
-Dimension moduli are computed as follows:
-
- Dimension(m) % Dimension(n) == Dimension(m % n)
- Dimension(m) % Dimension(None) == Dimension(None)
- Dimension(None) % Dimension(n) == Dimension(None)
- Dimension(None) % Dimension(None) == Dimension(None)
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension whose value is `self` modulo `other`.
-
-
-- - -
-
-#### `tf.Dimension.__mul__(other)` {#Dimension.__mul__}
-
-Returns the product of `self` and `other`.
-
-Dimensions are multiplied as follows:
-
-```
- Dimension(m) * Dimension(n) == Dimension(m * n)
- Dimension(m) * Dimension(None) == Dimension(None)
- Dimension(None) * Dimension(n) == Dimension(None)
- Dimension(None) * Dimension(None) == Dimension(None)
-```
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension whose value is the product of `self` and `other`.
-
-
-- - -
-
-#### `tf.Dimension.__ne__(other)` {#Dimension.__ne__}
-
-Returns true if `other` has a different known value from `self`.
-
-
-- - -
-
-#### `tf.Dimension.__repr__()` {#Dimension.__repr__}
-
-
-
-
-- - -
-
-#### `tf.Dimension.__str__()` {#Dimension.__str__}
-
-
-
-
-- - -
-
-#### `tf.Dimension.__sub__(other)` {#Dimension.__sub__}
-
-Returns the subtraction of `other` from `self`.
-
-Dimensions are subtracted as follows:
-
- Dimension(m) - Dimension(n) == Dimension(m - n)
- Dimension(m) - Dimension(None) == Dimension(None)
- Dimension(None) - Dimension(n) == Dimension(None)
- Dimension(None) - Dimension(None) == Dimension(None)
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
-  A Dimension whose value is the subtraction of `other` from `self`.
-
-
-- - -
-
-#### `tf.Dimension.assert_is_compatible_with(other)` {#Dimension.assert_is_compatible_with}
-
-Raises an exception if `other` is not compatible with this Dimension.
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` are not compatible (see
- is_compatible_with).
-
-
-- - -
-
-#### `tf.Dimension.is_compatible_with(other)` {#Dimension.is_compatible_with}
-
-Returns true if `other` is compatible with this Dimension.
-
-Two known Dimensions are compatible if they have the same value.
-An unknown Dimension is compatible with all other Dimensions.
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- True if this Dimension and `other` are compatible.
-
-
-- - -
-
-#### `tf.Dimension.merge_with(other)` {#Dimension.merge_with}
-
-Returns a Dimension that combines the information in `self` and `other`.
-
-Dimensions are combined as follows:
-
-```python
- Dimension(n) .merge_with(Dimension(n)) == Dimension(n)
- Dimension(n) .merge_with(Dimension(None)) == Dimension(n)
- Dimension(None).merge_with(Dimension(n)) == Dimension(n)
- Dimension(None).merge_with(Dimension(None)) == Dimension(None)
- Dimension(n) .merge_with(Dimension(m)) raises ValueError for n != m
-```
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension containing the combined information of `self` and
- `other`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` are not compatible (see
- is_compatible_with).
-
-
-- - -
-
-#### `tf.Dimension.value` {#Dimension.value}
-
-The value of this dimension, or None if it is unknown.
-
-
-
-- - -
-
-### `tf.op_scope(values, name, default_name=None)` {#op_scope}
-
-DEPRECATED. Same as name_scope above, just different argument order.
-
-
-- - -
-
-### `tf.get_seed(op_seed)` {#get_seed}
-
-Returns the local seeds an operation should use given an op-specific seed.
-
-Given an operation-specific seed, `op_seed`, this helper function returns two
-seeds derived from graph-level and op-level seeds. Many random operations
-internally use the two seeds to allow the user to change the seed globally for
-a graph, or for only specific operations.
-
-For details on how the graph-level seed interacts with op seeds, see
-@{tf.set_random_seed}.
-
-##### Args:
-
-
-* <b>`op_seed`</b>: integer.
-
-##### Returns:
-
- A tuple of two integers that should be used for the local seed of this
- operation.
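-
-For example, a small sketch (the exact derivation of the returned pair is
-an implementation detail):
-
-```python
-tf.set_random_seed(1234)   # graph-level seed
-seed1, seed2 = tf.get_seed(42)
-# `seed1` is derived from the graph-level seed, `seed2` from `op_seed`.
-```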
-
-
-
-## For libraries building on TensorFlow
-
-- - -
-
-### `tf.register_tensor_conversion_function(base_type, conversion_func, priority=100)` {#register_tensor_conversion_function}
-
-Registers a function for converting objects of `base_type` to `Tensor`.
-
-The conversion function must have the following signature:
-
-```python
- def conversion_func(value, dtype=None, name=None, as_ref=False):
- # ...
-```
-
-It must return a `Tensor` with the given `dtype` if specified. If the
-conversion function creates a new `Tensor`, it should use the given
-`name` if specified. All exceptions will be propagated to the caller.
-
-The conversion function may return `NotImplemented` for some
-inputs. In this case, the conversion process will continue to try
-subsequent conversion functions.
-
-If `as_ref` is true, the function must return a `Tensor` reference,
-such as a `Variable`.
-
-NOTE: The conversion functions will execute in order of priority,
-followed by order of registration. To ensure that a conversion function
-`F` runs before another conversion function `G`, ensure that `F` is
-registered with a smaller priority than `G`.
-
-##### Args:
-
-
-* <b>`base_type`</b>: The base type or tuple of base types for all objects that
- `conversion_func` accepts.
-* <b>`conversion_func`</b>: A function that converts instances of `base_type` to
- `Tensor`.
-* <b>`priority`</b>: Optional integer that indicates the priority for applying this
- conversion function. Conversion functions with smaller priority values
- run earlier than conversion functions with larger priority values.
- Defaults to 100.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the arguments do not have the appropriate type.
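-
-For example, a sketch that registers a conversion for a hypothetical
-wrapper class (`MyWrapper` and `_my_wrapper_to_tensor` are illustrative
-names, not part of the API):
-
-```python
-class MyWrapper(object):
-  def __init__(self, value):
-    self.value = value
-
-def _my_wrapper_to_tensor(value, dtype=None, name=None, as_ref=False):
-  # Delegate to the standard conversion of the wrapped value.
-  return tf.convert_to_tensor(value.value, dtype=dtype, name=name)
-
-tf.register_tensor_conversion_function(MyWrapper, _my_wrapper_to_tensor)
-t = tf.convert_to_tensor(MyWrapper([1.0, 2.0]))  # now succeeds
-```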
-
-
-
-## Other Functions and Classes
-- - -
-
-### `class tf.DeviceSpec` {#DeviceSpec}
-
-Represents a (possibly partial) specification for a TensorFlow device.
-
-`DeviceSpec`s are used throughout TensorFlow to describe where state is stored
-and computations occur. Using `DeviceSpec` allows you to parse device spec
-strings to verify their validity, merge them or compose them programmatically.
-
-Example:
-
-```python
-# Place the operations on device "GPU:0" in the "ps" job.
-device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
-with tf.device(device_spec):
- # Both my_var and squared_var will be placed on /job:ps/device:GPU:0.
- my_var = tf.Variable(..., name="my_variable")
- squared_var = tf.square(my_var)
-```
-
-If a `DeviceSpec` is partially specified, it will be merged with other
-`DeviceSpec`s according to the scope in which it is defined. `DeviceSpec`
-components defined in inner scopes take precedence over those defined in
-outer scopes.
-
-```python
-with tf.device(DeviceSpec(job="train")):
-  with tf.device(DeviceSpec(job="ps", device_type="GPU", device_index=0)):
-    # Nodes created here will be assigned to /job:ps/device:GPU:0.
-    a = tf.constant(1.0)
-  with tf.device(DeviceSpec(device_type="GPU", device_index=1)):
-    # Nodes created here will be assigned to /job:train/device:GPU:1.
-    b = tf.constant(2.0)
-```
-
-A `DeviceSpec` consists of 5 components -- each of
-which is optionally specified:
-
-* Job: The job name.
-* Replica: The replica index.
-* Task: The task index.
-* Device type: The device type string (e.g. "CPU" or "GPU").
-* Device index: The device index.
-- - -
-
-#### `tf.DeviceSpec.__init__(job=None, replica=None, task=None, device_type=None, device_index=None)` {#DeviceSpec.__init__}
-
-Create a new `DeviceSpec` object.
-
-##### Args:
-
-
-* <b>`job`</b>: string. Optional job name.
-* <b>`replica`</b>: int. Optional replica index.
-* <b>`task`</b>: int. Optional task index.
-* <b>`device_type`</b>: Optional device type string (e.g. "CPU" or "GPU")
-* <b>`device_index`</b>: int. Optional device index. If left
- unspecified, device represents 'any' device_index.
-
-
-- - -
-
-#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string}
-
-Construct a `DeviceSpec` from a string.
-
-##### Args:
-
-
-* <b>`spec`</b>: a string of the form
- /job:<name>/replica:<id>/task:<id>/device:CPU:<id>
- or
- /job:<name>/replica:<id>/task:<id>/device:GPU:<id>
- as cpu and gpu are mutually exclusive.
- All entries are optional.
-
-##### Returns:
-
- A DeviceSpec.
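-
-For example, a minimal sketch of parsing a device string:
-
-```python
-spec = tf.DeviceSpec.from_string("/job:ps/replica:0/task:0/device:GPU:0")
-print(spec.job)           # ==> "ps"
-print(spec.device_index)  # ==> 0
-```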
-
-
-- - -
-
-#### `tf.DeviceSpec.job` {#DeviceSpec.job}
-
-
-
-
-- - -
-
-#### `tf.DeviceSpec.merge_from(dev)` {#DeviceSpec.merge_from}
-
-Merge the properties of "dev" into this `DeviceSpec`.
-
-##### Args:
-
-
-* <b>`dev`</b>: a `DeviceSpec`.
-
-
-- - -
-
-#### `tf.DeviceSpec.parse_from_string(spec)` {#DeviceSpec.parse_from_string}
-
-Parse a `DeviceSpec` name into its components.
-
-##### Args:
-
-
-* <b>`spec`</b>: a string of the form
- /job:<name>/replica:<id>/task:<id>/device:CPU:<id>
- or
- /job:<name>/replica:<id>/task:<id>/device:GPU:<id>
- as cpu and gpu are mutually exclusive.
- All entries are optional.
-
-##### Returns:
-
- The `DeviceSpec`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the spec was not valid.
-
-
-- - -
-
-#### `tf.DeviceSpec.replica` {#DeviceSpec.replica}
-
-
-
-
-- - -
-
-#### `tf.DeviceSpec.task` {#DeviceSpec.task}
-
-
-
-
-- - -
-
-#### `tf.DeviceSpec.to_string()` {#DeviceSpec.to_string}
-
-Return a string representation of this `DeviceSpec`.
-
-##### Returns:
-
- a string of the form
- /job:<name>/replica:<id>/task:<id>/device:<device_type>:<id>.
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functional_ops.md b/tensorflow/g3doc/api_docs/python/functional_ops.md
deleted file mode 100644
index f81af171b1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functional_ops.md
+++ /dev/null
@@ -1,299 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Higher Order Functions
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Functional operations. See the @{$python/functional_ops} guide.
-
-- - -
-
-### `tf.map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None)` {#map_fn}
-
-map on the list of tensors unpacked from `elems` on dimension 0.
-
-The simplest version of `map` repeatedly applies the callable `fn` to a
-sequence of elements from first to last. The elements are made of the
-tensors unpacked from `elems`. `dtype` is the data type of the return
-value of `fn`. Users must provide `dtype` if it is different from
-the data type of `elems`.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `[values.shape[0]] + fn(values[0]).shape`.
-
-This method also allows multi-arity `elems` and output of `fn`. If `elems`
-is a (possibly nested) list or tuple of tensors, then each of these tensors
-must have a matching first (unpack) dimension. The signature of `fn` may
-match the structure of `elems`. That is, if `elems` is
-`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is:
-`fn = lambda (t1, [t2, t3, [t4, t5]]):`.
-
-Furthermore, `fn` may emit a different structure than its input. For example,
-`fn` may look like: `fn = lambda t1: (t1 + 1, t1 - 1)`. In this case,
-the `dtype` parameter is not optional: `dtype` must be a type or (possibly
-nested) tuple of types matching the output of `fn`.
-
-To apply a functional operation to the nonzero elements of a SparseTensor
-one of the following methods is recommended. First, if the function is
-expressible as TensorFlow ops, use
-
-```python
- result = SparseTensor(input.indices, fn(input.values), input.dense_shape)
-```
-
-If, however, the function is not expressible as a TensorFlow op, then use
-
-```python
-result = SparseTensor(
- input.indices, map_fn(fn, input.values), input.dense_shape)
-```
-
-instead.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed. It accepts one argument, which will
- have the same (possibly nested) structure as `elems`. Its output
- must have the same structure as `dtype` if one is provided, otherwise
- it must have the same structure as `elems`.
-* <b>`elems`</b>: A tensor or (possibly nested) sequence of tensors, each of which
- will be unpacked along their first dimension. The nested sequence
- of the resulting slices will be applied to `fn`.
-* <b>`dtype`</b>: (optional) The output type(s) of `fn`. If `fn` returns a structure
- of Tensors differing from the structure of `elems`, then `dtype` is not
- optional and must have the same structure as the output of `fn`.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables support for back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`infer_shape`</b>: (optional) False disables tests for consistent output shapes.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor or (possibly nested) sequence of tensors. Each tensor packs the
- results of applying `fn` to tensors unpacked from `elems` along the first
- dimension, from first to last.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable or the structure of the output of
- `fn` and `dtype` do not match, or if elems is a SparseTensor.
-* <b>`ValueError`</b>: if the lengths of the output of `fn` and `dtype` do not match.
-
-##### Examples:
-
- ```python
- elems = np.array([1, 2, 3, 4, 5, 6])
- squares = map_fn(lambda x: x * x, elems)
- # squares == [1, 4, 9, 16, 25, 36]
- ```
-
- ```python
- elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
- alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
- # alternate == [-1, 2, -3]
- ```
-
- ```python
- elems = np.array([1, 2, 3])
- alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
- # alternates[0] == [1, 2, 3]
- # alternates[1] == [-1, -2, -3]
- ```
-
-
-- - -
-
-### `tf.foldl(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#foldl}
-
-foldl on the list of tensors unpacked from `elems` on dimension 0.
-
-This foldl operator repeatedly applies the callable `fn` to a sequence
-of elements from first to last. The elements are made of the tensors
-unpacked from `elems` on dimension 0. The callable fn takes two tensors as
-arguments. The first argument is the accumulated value computed from the
-preceding invocation of fn. If `initializer` is None, `elems` must contain
-at least one element, and its first element is used as the initializer.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `fn(initializer, values[0]).shape`.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed.
-* <b>`elems`</b>: A tensor to be unpacked on dimension 0.
-* <b>`initializer`</b>: (optional) The initial value for the accumulator.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables support for back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor resulting from applying `fn` consecutively to the list of tensors
- unpacked from `elems`, from first to last.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable.
-
-##### Example:
-
- ```python
- elems = [1, 2, 3, 4, 5, 6]
- sum = foldl(lambda a, x: a + x, elems)
- # sum == 21
- ```
-
-
-- - -
-
-### `tf.foldr(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#foldr}
-
-foldr on the list of tensors unpacked from `elems` on dimension 0.
-
-This foldr operator repeatedly applies the callable `fn` to a sequence
-of elements from last to first. The elements are made of the tensors
-unpacked from `elems`. The callable fn takes two tensors as arguments.
-The first argument is the accumulated value computed from the preceding
-invocation of fn. If `initializer` is None, `elems` must contain at least
-one element, and its first element is used as the initializer.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `fn(initializer, values[0]).shape`.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed.
-* <b>`elems`</b>: A tensor that is unpacked into a sequence of tensors to apply `fn`.
-* <b>`initializer`</b>: (optional) The initial value for the accumulator.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables support for back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor resulting from applying `fn` consecutively to the list of tensors
- unpacked from `elems`, from last to first.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable.
-
-##### Example:
-
- ```python
- elems = [1, 2, 3, 4, 5, 6]
- sum = foldr(lambda a, x: a + x, elems)
- # sum == 21
- ```
-
-
-- - -
-
-### `tf.scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None)` {#scan}
-
-scan on the list of tensors unpacked from `elems` on dimension 0.
-
-The simplest version of `scan` repeatedly applies the callable `fn` to a
-sequence of elements from first to last. The elements are made of the tensors
-unpacked from `elems` on dimension 0. The callable fn takes two tensors as
-arguments. The first argument is the accumulated value computed from the
-preceding invocation of fn. If `initializer` is None, `elems` must contain
-at least one element, and its first element is used as the initializer.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`.
-
-This method also allows multi-arity `elems` and accumulator. If `elems`
-is a (possibly nested) list or tuple of tensors, then each of these tensors
-must have a matching first (unpack) dimension. The second argument of
-`fn` must match the structure of `elems`.
-
-If no `initializer` is provided, the output structure and dtypes of `fn`
-are assumed to be the same as its input; and in this case, the first
-argument of `fn` must match the structure of `elems`.
-
-If an `initializer` is provided, then the output of `fn` must have the same
-structure as `initializer`; and the first argument of `fn` must match
-this structure.
-
-For example, if `elems` is `(t1, [t2, t3])` and `initializer` is
-`[i1, i2]` then an appropriate signature for `fn` in `python2` is:
-`fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):` and `fn` must return a list,
-`[acc_n1, acc_n2]`. An alternative correct signature for `fn`, and the
-one that works in `python3`, is:
-`fn = lambda a, t:`, where `a` and `t` correspond to the input tuples.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed. It accepts two arguments. The first
- will have the same structure as `initializer` if one is provided,
- otherwise it will have the same structure as `elems`. The second
- will have the same (possibly nested) structure as `elems`. Its output
- must have the same structure as `initializer` if one is provided,
- otherwise it must have the same structure as `elems`.
-* <b>`elems`</b>: A tensor or (possibly nested) sequence of tensors, each of which
- will be unpacked along their first dimension. The nested sequence
- of the resulting slices will be the first argument to `fn`.
-* <b>`initializer`</b>: (optional) A tensor or (possibly nested) sequence of tensors,
- initial value for the accumulator, and the expected output type of `fn`.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables support for back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`infer_shape`</b>: (optional) False disables tests for consistent output shapes.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor or (possibly nested) sequence of tensors. Each tensor packs the
- results of applying `fn` to tensors unpacked from `elems` along the first
- dimension, and the previous accumulator value(s), from first to last.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable or the structure of the output of
- `fn` and `initializer` do not match.
-* <b>`ValueError`</b>: if the lengths of the output of `fn` and `initializer`
- do not match.
-
-##### Examples:
-
- ```python
- elems = np.array([1, 2, 3, 4, 5, 6])
- sum = scan(lambda a, x: a + x, elems)
- # sum == [1, 3, 6, 10, 15, 21]
- ```
-
- ```python
- elems = np.array([1, 2, 3, 4, 5, 6])
- initializer = np.array(0)
- sum_one = scan(
- lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)
- # sum_one == [1, 2, 3, 4, 5, 6]
- ```
-
- ```python
- elems = np.array([1, 0, 0, 0, 0, 0])
- initializer = (np.array(0), np.array(1))
- fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)
- # fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])
- ```
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.PriorityQueue.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.PriorityQueue.from_list.md
deleted file mode 100644
index 1fbd1a6f03..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.PriorityQueue.from_list.md
+++ /dev/null
@@ -1,21 +0,0 @@
-#### `tf.PriorityQueue.from_list(index, queues)` {#PriorityQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
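-A minimal sketch (`from_list` is inherited from `QueueBase`, so the same
-pattern works for any queue type; the queues and placeholder below are
-illustrative):
-
-```python
-q0 = tf.FIFOQueue(capacity=10, dtypes=[tf.float32])
-q1 = tf.FIFOQueue(capacity=10, dtypes=[tf.float32])
-index = tf.placeholder(tf.int32, shape=[])
-# Dequeue from q0 or q1 depending on the run-time value of `index`.
-q = tf.QueueBase.from_list(index, [q0, q1])
-value = q.dequeue()
-```
-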
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.ReaderBase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.ReaderBase.md
deleted file mode 100644
index 68a60bc33a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.ReaderBase.md
+++ /dev/null
@@ -1,183 +0,0 @@
-Base class for different Reader types that produce a record every step.
-
-Conceptually, Readers convert string 'work units' into records (key,
-value pairs). Typically the 'work units' are filenames and the
-records are extracted from the contents of those files. We want a
-single record produced per step, but a work unit can correspond to
-many records.
-
-Therefore we introduce some decoupling using a queue. The queue
-contains the work units, and the Reader dequeues from the queue when
-it is asked to produce a record (via `Read()`) but has already
-finished the last work unit.
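-
-For example, a minimal sketch using one concrete Reader type
-(`tf.TextLineReader`; the filenames are illustrative):
-
-```python
-# Work units are filenames; records are the lines of those files.
-filename_queue = tf.train.string_input_producer(["a.txt", "b.txt"])
-reader = tf.TextLineReader()
-key, value = reader.read(filename_queue)
-
-with tf.Session() as sess:
-    coord = tf.train.Coordinator()
-    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
-    print(sess.run([key, value]))  # e.g. [b'a.txt:1', b'<first line>']
-    coord.request_stop()
-    coord.join(threads)
-```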
-- - -
-
-#### `tf.ReaderBase.__init__(reader_ref, supports_serialize=False)` {#ReaderBase.__init__}
-
-Creates a new ReaderBase.
-
-##### Args:
-
-
-* <b>`reader_ref`</b>: The operation that implements the reader.
-* <b>`supports_serialize`</b>: True if the reader implementation can
- serialize its state.
-
-
-- - -
-
-#### `tf.ReaderBase.num_records_produced(name=None)` {#ReaderBase.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.num_work_units_completed(name=None)` {#ReaderBase.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.read(queue, name=None)` {#ReaderBase.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.read_up_to(queue, num_records, name=None)` {#ReaderBase.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than `num_records` even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.reader_ref` {#ReaderBase.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.ReaderBase.reset(name=None)` {#ReaderBase.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.ReaderBase.restore_state(state, name=None)` {#ReaderBase.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.ReaderBase.serialize_state(name=None)` {#ReaderBase.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.supports_serialize` {#ReaderBase.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseFeature.md
deleted file mode 100644
index fd0950c328..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseFeature.md
+++ /dev/null
@@ -1,78 +0,0 @@
-Configuration for parsing a sparse input feature.
-
-Fields:
- index_key: Name of index feature. The underlying feature's type must
- be `int64` and its length must always match that of the `value_key`
- feature.
- value_key: Name of value feature. The underlying feature's type must
- be `dtype` and its length must always match that of the `index_key`
- feature.
- dtype: Data type of the `value_key` feature.
- size: A Python int to specify a dimension of the dense shape. Each value in
- the `index_key` feature must be in `[0, size)`.
- already_sorted: A Python boolean to specify whether the values in
- `index_key` are already sorted. If so, skip sorting.
- False by default (optional).
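-
-A minimal sketch of how such a configuration might be used with
-`tf.parse_example` (the key names and `size` are illustrative):
-
-```python
-serialized_batch = tf.placeholder(tf.string, shape=[None])
-features = {
-    "scores": tf.SparseFeature(index_key="index", value_key="value",
-                               dtype=tf.float32, size=100)
-}
-parsed = tf.parse_example(serialized_batch, features)
-# parsed["scores"] is a SparseTensor with dense_shape [batch_size, 100].
-```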
-- - -
-
-#### `tf.SparseFeature.__getnewargs__()` {#SparseFeature.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.SparseFeature.__getstate__()` {#SparseFeature.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.SparseFeature.__new__(_cls, index_key, value_key, dtype, size, already_sorted=False)` {#SparseFeature.__new__}
-
-Create new instance of SparseFeature(index_key, value_key, dtype, size, already_sorted)
-
-
-- - -
-
-#### `tf.SparseFeature.__repr__()` {#SparseFeature.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.SparseFeature.already_sorted` {#SparseFeature.already_sorted}
-
-Alias for field number 4
-
-
-- - -
-
-#### `tf.SparseFeature.dtype` {#SparseFeature.dtype}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.SparseFeature.index_key` {#SparseFeature.index_key}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.SparseFeature.size` {#SparseFeature.size}
-
-Alias for field number 3
-
-
-- - -
-
-#### `tf.SparseFeature.value_key` {#SparseFeature.value_key}
-
-Alias for field number 1
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseTensorValue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseTensorValue.md
deleted file mode 100644
index 7454442559..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.SparseTensorValue.md
+++ /dev/null
@@ -1,50 +0,0 @@
-SparseTensorValue(indices, values, dense_shape)
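-
-`SparseTensorValue` is the plain-Python counterpart of a `SparseTensor`,
-e.g. for feeding a `tf.sparse_placeholder`. A minimal sketch (the indices
-and values are illustrative):
-
-```python
-sp = tf.sparse_placeholder(tf.float32)
-value = tf.SparseTensorValue(indices=[[0, 0], [1, 2]],
-                             values=[1.0, 2.0],
-                             dense_shape=[2, 3])
-with tf.Session() as sess:
-    dense = sess.run(tf.sparse_tensor_to_dense(sp), feed_dict={sp: value})
-    # dense == [[1., 0., 0.], [0., 0., 2.]]
-```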
-- - -
-
-#### `tf.SparseTensorValue.__getnewargs__()` {#SparseTensorValue.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.SparseTensorValue.__getstate__()` {#SparseTensorValue.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.SparseTensorValue.__new__(_cls, indices, values, dense_shape)` {#SparseTensorValue.__new__}
-
-Create new instance of SparseTensorValue(indices, values, dense_shape)
-
-
-- - -
-
-#### `tf.SparseTensorValue.__repr__()` {#SparseTensorValue.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.SparseTensorValue.dense_shape` {#SparseTensorValue.dense_shape}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.SparseTensorValue.indices` {#SparseTensorValue.indices}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.SparseTensorValue.values` {#SparseTensorValue.values}
-
-Alias for field number 1
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md
deleted file mode 100644
index 240628114e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorArray.md
+++ /dev/null
@@ -1,281 +0,0 @@
-Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.
-
-This class is meant to be used with dynamic iteration primitives such as
-`while_loop` and `map_fn`. It supports gradient back-propagation via special
-"flow" control flow dependencies.
-- - -
-
-#### `tf.TensorArray.__init__(dtype, size=None, dynamic_size=None, clear_after_read=None, tensor_array_name=None, handle=None, flow=None, infer_shape=True, element_shape=None, name=None)` {#TensorArray.__init__}
-
-Construct a new TensorArray or wrap an existing TensorArray handle.
-
-A note about the parameter `name`:
-
-The name of the `TensorArray` (even if passed in) is uniquified: each time
-a new `TensorArray` is created at runtime it is assigned its own name for
-the duration of the run. This avoids name collisions if a `TensorArray`
-is created within a `while_loop`.
-
-##### Args:
-
-
-* <b>`dtype`</b>: (required) data type of the TensorArray.
-* <b>`size`</b>: (optional) int32 scalar `Tensor`: the size of the TensorArray.
- Required if handle is not provided.
-* <b>`dynamic_size`</b>: (optional) Python bool: If true, writes to the TensorArray
- can grow the TensorArray past its initial size. Default: False.
-* <b>`clear_after_read`</b>: Boolean (optional, default: True). If True, clear
- TensorArray values after reading them. This disables read-many
- semantics, but allows early release of memory.
-* <b>`tensor_array_name`</b>: (optional) Python string: the name of the TensorArray.
- This is used when creating the TensorArray handle. If this value is
- set, handle should be None.
-* <b>`handle`</b>: (optional) A `Tensor` handle to an existing TensorArray. If this
- is set, tensor_array_name should be None.
-* <b>`flow`</b>: (optional) A float `Tensor` scalar coming from an existing
- `TensorArray.flow`.
-* <b>`infer_shape`</b>: (optional, default: True) If True, shape inference
- is enabled. In this case, all elements must have the same shape.
-* <b>`element_shape`</b>: (optional, default: None) A `TensorShape` object specifying
- the shape constraints of each of the elements of the TensorArray.
- Need not be fully defined.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if both handle and tensor_array_name are provided.
-* <b>`TypeError`</b>: if handle is provided but is not a Tensor.
-
-
-- - -
-
-#### `tf.TensorArray.close(name=None)` {#TensorArray.close}
-
-Close the current TensorArray.
-
-
-- - -
-
-#### `tf.TensorArray.concat(name=None)` {#TensorArray.concat}
-
-Return the values in the TensorArray as a concatenated `Tensor`.
-
-All of the values must have been written, their ranks must match, and
-their shapes must all match for all dimensions except the first.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- All the tensors in the TensorArray concatenated into one tensor.
-
-
-- - -
-
-#### `tf.TensorArray.dtype` {#TensorArray.dtype}
-
-The data type of this TensorArray.
-
-
-- - -
-
-#### `tf.TensorArray.flow` {#TensorArray.flow}
-
-The flow `Tensor` forcing ops leading to this TensorArray state.
-
-
-- - -
-
-#### `tf.TensorArray.gather(indices, name=None)` {#TensorArray.gather}
-
-Return selected values in the TensorArray as a packed `Tensor`.
-
-All of the selected values must have been written and their shapes
-must all match.
-
-##### Args:
-
-
-* <b>`indices`</b>: A `1-D` `Tensor` taking values in `[0, max_value)`. If
- the `TensorArray` is not dynamic, `max_value=size()`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The values in the `TensorArray` selected by `indices`, packed into one tensor.
-
-
-- - -
-
-#### `tf.TensorArray.grad(source, flow=None, name=None)` {#TensorArray.grad}
-
-
-
-
-- - -
-
-#### `tf.TensorArray.handle` {#TensorArray.handle}
-
-The reference to the TensorArray.
-
-
-- - -
-
-#### `tf.TensorArray.identity()` {#TensorArray.identity}
-
-Returns a TensorArray with the same content and properties.
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the control dependencies
- from the contexts will become control dependencies for writes, reads, etc.
- Use this object for all subsequent operations.
-
-
-- - -
-
-#### `tf.TensorArray.read(index, name=None)` {#TensorArray.read}
-
-Read the value at location `index` in the TensorArray.
-
-##### Args:
-
-
-* <b>`index`</b>: 0-D. int32 tensor with the index to read from.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tensor at index `index`.
-
-
-- - -
-
-#### `tf.TensorArray.scatter(indices, value, name=None)` {#TensorArray.scatter}
-
-Scatter the values of a `Tensor` in specific indices of a `TensorArray`.
-
-##### Args:
-
-
-* <b>`indices`</b>: A `1-D` `Tensor` taking values in `[0, max_value)`. If
- the `TensorArray` is not dynamic, `max_value=size()`.
-* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to unpack.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the scatter occurs.
- Use this object for all subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape inference fails.
-
-
-- - -
-
-#### `tf.TensorArray.size(name=None)` {#TensorArray.size}
-
-Return the size of the TensorArray.
-
-
-- - -
-
-#### `tf.TensorArray.split(value, lengths, name=None)` {#TensorArray.split}
-
-Split the values of a `Tensor` into the TensorArray.
-
-##### Args:
-
-
-* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to split.
-* <b>`lengths`</b>: 1-D. int32 vector with the lengths to use when splitting
- `value` along its first dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the split occurs.
- Use this object for all subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape inference fails.
-
-
-- - -
-
-#### `tf.TensorArray.stack(name=None)` {#TensorArray.stack}
-
-Return the values in the TensorArray as a stacked `Tensor`.
-
-All of the values must have been written and their shapes must all match.
-If input shapes have rank-`R`, then output shape will have rank-`(R+1)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- All the tensors in the TensorArray stacked into one tensor.
-
-
-- - -
-
-#### `tf.TensorArray.unstack(value, name=None)` {#TensorArray.unstack}
-
-Unstack the values of a `Tensor` in the TensorArray.
-
-If input value shapes have rank-`R`, then the output TensorArray will
-contain elements whose shapes are rank-`(R-1)`.
-
-##### Args:
-
-
-* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to unstack.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the unstack occurs.
- Use this object for all subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape inference fails.
-
-
-- - -
-
-#### `tf.TensorArray.write(index, value, name=None)` {#TensorArray.write}
-
-Write `value` into index `index` of the TensorArray.
-
-##### Args:
-
-
-* <b>`index`</b>: 0-D. int32 scalar with the index to write to.
-* <b>`value`</b>: N-D. Tensor of type `dtype`. The Tensor to write to this index.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the write occurs.
- Use this object for all subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if there are more writers than specified.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md
deleted file mode 100644
index 0ff6c80e23..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.TensorShape.md
+++ /dev/null
@@ -1,397 +0,0 @@
-Represents the shape of a `Tensor`.
-
-A `TensorShape` represents a possibly-partial shape specification for a
-`Tensor`. It may be one of the following:
-
-* *Fully-known shape:* has a known number of dimensions and a known size
- for each dimension.
-* *Partially-known shape:* has a known number of dimensions, and an unknown
- size for one or more dimension.
-* *Unknown shape:* has an unknown number of dimensions, and an unknown
- size in all dimensions.
-
-If a tensor is produced by an operation of type `"Foo"`, its shape
-may be inferred if there is a registered shape function for
-`"Foo"`. See [`Shape functions in
-C++`](../../how_tos/adding_an_op/index.md#shape-functions-in-c) for
-details of shape functions and how to register them. Alternatively,
-the shape may be set explicitly using
-[`Tensor.set_shape()`](../../api_docs/python/framework.md#Tensor.set_shape).
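-
-A minimal sketch of how partial shapes combine (the shapes are
-illustrative):
-
-```python
-s = tf.TensorShape([None, 784])
-s.is_compatible_with([32, 784])      # => True
-s.merge_with([32, None]).as_list()   # => [32, 784]
-s.ndims                              # => 2
-s.num_elements()                     # => None (not fully defined)
-```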
-- - -
-
-#### `tf.TensorShape.__bool__()` {#TensorShape.__bool__}
-
-Returns True if this shape contains non-zero information.
-
-
-- - -
-
-#### `tf.TensorShape.__eq__(other)` {#TensorShape.__eq__}
-
-Returns True if `self` is equivalent to `other`.
-
-
-- - -
-
-#### `tf.TensorShape.__getitem__(key)` {#TensorShape.__getitem__}
-
-Returns the value of a dimension or a shape, depending on the key.
-
-##### Args:
-
-
-* <b>`key`</b>: If `key` is an integer, returns the dimension at that index;
- otherwise if `key` is a slice, returns a TensorShape whose
- dimensions are those selected by the slice from `self`.
-
-##### Returns:
-
- A dimension if `key` is an integer, or a `TensorShape` if `key` is a
- slice.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `key` is a slice, and any of its elements are negative, or
- if `self` is completely unknown and the step is set.
-
-
-- - -
-
-#### `tf.TensorShape.__init__(dims)` {#TensorShape.__init__}
-
-Creates a new TensorShape with the given dimensions.
-
-##### Args:
-
-
-* <b>`dims`</b>: A list of Dimensions, or None if the shape is unspecified.
-  DEPRECATED: A single integer is treated as a singleton list.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If dims cannot be converted to a list of dimensions.
-
-
-- - -
-
-#### `tf.TensorShape.__iter__()` {#TensorShape.__iter__}
-
-Returns `self.dims` if the rank is known, otherwise raises ValueError.
-
-
-- - -
-
-#### `tf.TensorShape.__len__()` {#TensorShape.__len__}
-
-Returns the rank of this shape, or raises ValueError if unspecified.
-
-
-- - -
-
-#### `tf.TensorShape.__ne__(other)` {#TensorShape.__ne__}
-
-Returns True if `self` is known to be different from `other`.
-
-
-- - -
-
-#### `tf.TensorShape.__nonzero__()` {#TensorShape.__nonzero__}
-
-Returns True if this shape contains non-zero information.
-
-
-- - -
-
-#### `tf.TensorShape.__repr__()` {#TensorShape.__repr__}
-
-
-
-
-- - -
-
-#### `tf.TensorShape.__str__()` {#TensorShape.__str__}
-
-
-
-
-- - -
-
-#### `tf.TensorShape.as_list()` {#TensorShape.as_list}
-
-Returns a list of integers or `None` for each dimension.
-
-##### Returns:
-
- A list of integers or `None` for each dimension.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` is an unknown shape with an unknown rank.
-
-
-- - -
-
-#### `tf.TensorShape.as_proto()` {#TensorShape.as_proto}
-
-Returns this shape as a `TensorShapeProto`.
-
-
-- - -
-
-#### `tf.TensorShape.assert_has_rank(rank)` {#TensorShape.assert_has_rank}
-
-Raises an exception if `self` is not compatible with the given `rank`.
-
-##### Args:
-
-
-* <b>`rank`</b>: An integer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
-
-
-- - -
-
-#### `tf.TensorShape.assert_is_compatible_with(other)` {#TensorShape.assert_is_compatible_with}
-
-Raises exception if `self` and `other` do not represent the same shape.
-
-This method can be used to assert that there exists a shape that both
-`self` and `other` represent.
-
-##### Args:
-
-
-* <b>`other`</b>: Another TensorShape.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` do not represent the same shape.
-
-
-- - -
-
-#### `tf.TensorShape.assert_is_fully_defined()` {#TensorShape.assert_is_fully_defined}
-
-Raises an exception if `self` is not fully defined in every dimension.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not have a known value for every dimension.
-
-
-- - -
-
-#### `tf.TensorShape.assert_same_rank(other)` {#TensorShape.assert_same_rank}
-
-Raises an exception if `self` and `other` do not have compatible ranks.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` do not represent shapes with the
- same rank.
-
-
-- - -
-
-#### `tf.TensorShape.concatenate(other)` {#TensorShape.concatenate}
-
-Returns the concatenation of the dimensions in `self` and `other`.
-
-*N.B.* If either `self` or `other` is completely unknown,
-concatenation will discard information about the other shape. In
-future, we might support concatenation that preserves this
-information for use with slicing.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `TensorShape`.
-
-##### Returns:
-
- A `TensorShape` whose dimensions are the concatenation of the
- dimensions in `self` and `other`.
-
-
-- - -
-
-#### `tf.TensorShape.dims` {#TensorShape.dims}
-
-Returns a list of Dimensions, or None if the shape is unspecified.
-
-
-- - -
-
-#### `tf.TensorShape.is_compatible_with(other)` {#TensorShape.is_compatible_with}
-
-Returns True iff `self` is compatible with `other`.
-
-Two possibly-partially-defined shapes are compatible if there
-exists a fully-defined shape that both shapes can represent. Thus,
-compatibility allows the shape inference code to reason about
-partially-defined shapes. For example:
-
-* TensorShape(None) is compatible with all shapes.
-
-* TensorShape([None, None]) is compatible with all two-dimensional
- shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is
- not compatible with, for example, TensorShape([None]) or
- TensorShape([None, None, None]).
-
-* TensorShape([32, None]) is compatible with all two-dimensional shapes
- with size 32 in the 0th dimension, and also TensorShape([None, None])
- and TensorShape(None). It is not compatible with, for example,
- TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]).
-
-* TensorShape([32, 784]) is compatible with itself, and also
- TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None,
- None]) and TensorShape(None). It is not compatible with, for example,
- TensorShape([32, 1, 784]) or TensorShape([None]).
-
-The compatibility relation is reflexive and symmetric, but not
-transitive. For example, TensorShape([32, 784]) is compatible with
-TensorShape(None), and TensorShape(None) is compatible with
-TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with
-TensorShape([4, 4]).
-
-##### Args:
-
-
-* <b>`other`</b>: Another TensorShape.
-
-##### Returns:
-
- True iff `self` is compatible with `other`.
-
-
-- - -
-
-#### `tf.TensorShape.is_fully_defined()` {#TensorShape.is_fully_defined}
-
-Returns True iff `self` is fully defined in every dimension.
-
-
-- - -
-
-#### `tf.TensorShape.merge_with(other)` {#TensorShape.merge_with}
-
-Returns a `TensorShape` combining the information in `self` and `other`.
-
-The dimensions in `self` and `other` are merged elementwise,
-according to the rules defined for `Dimension.merge_with()`.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `TensorShape`.
-
-##### Returns:
-
- A `TensorShape` containing the combined information of `self` and
- `other`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` are not compatible.
-
-
-- - -
-
-#### `tf.TensorShape.ndims` {#TensorShape.ndims}
-
-Returns the rank of this shape, or None if it is unspecified.
-
-
-- - -
-
-#### `tf.TensorShape.num_elements()` {#TensorShape.num_elements}
-
-Returns the total number of elements, or `None` for incomplete shapes.
-
-
-- - -
-
-#### `tf.TensorShape.with_rank(rank)` {#TensorShape.with_rank}
-
-Returns a shape based on `self` with the given rank.
-
-This method promotes a completely unknown shape to one with a
-known rank.
-
-##### Args:
-
-
-* <b>`rank`</b>: An integer.
-
-##### Returns:
-
- A shape that is at least as specific as `self` with the given rank.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not represent a shape with the given `rank`.
-
-
-- - -
-
-#### `tf.TensorShape.with_rank_at_least(rank)` {#TensorShape.with_rank_at_least}
-
-Returns a shape based on `self` with at least the given rank.
-
-##### Args:
-
-
-* <b>`rank`</b>: An integer.
-
-##### Returns:
-
- A shape that is at least as specific as `self` with at least the given
- rank.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not represent a shape with at least the given
- `rank`.
-
-
-- - -
-
-#### `tf.TensorShape.with_rank_at_most(rank)` {#TensorShape.with_rank_at_most}
-
-Returns a shape based on `self` with at most the given rank.
-
-##### Args:
-
-
-* <b>`rank`</b>: An integer.
-
-##### Returns:
-
- A shape that is at least as specific as `self` with at most the given
- rank.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` does not represent a shape with at most the given
- `rank`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.VarLenFeature.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.VarLenFeature.__new__.md
deleted file mode 100644
index 282ca37e0b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.VarLenFeature.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.VarLenFeature.__new__(_cls, dtype)` {#VarLenFeature.__new__}
-
-Create new instance of VarLenFeature(dtype)
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.VariableScope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.VariableScope.md
deleted file mode 100644
index c6fd98dd98..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.VariableScope.md
+++ /dev/null
@@ -1,159 +0,0 @@
-Variable scope object to carry defaults to provide to `get_variable`.
-
-Many of the arguments we need for `get_variable` in a variable store are most
-easily handled with a context. This object is used for the defaults.
-
-Attributes:
- name: name of the current scope, used as prefix in get_variable.
- initializer: default initializer passed to get_variable.
- regularizer: default regularizer passed to get_variable.
- reuse: Boolean or None, setting the reuse in get_variable.
- caching_device: string, callable, or None: the caching device passed to
- get_variable.
- partitioner: callable or `None`: the partitioner passed to `get_variable`.
- custom_getter: default custom getter passed to get_variable.
- name_scope: The name passed to `tf.name_scope`.
- dtype: default type passed to get_variable (defaults to DT_FLOAT).
- use_resource: if False, create a normal Variable; if True create an
- experimental ResourceVariable with well-defined semantics. Defaults
- to False (will later change to True).
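-
-Scopes are usually obtained via `tf.variable_scope` and
-`tf.get_variable_scope` rather than constructed directly. A minimal
-sketch (the scope and variable names are illustrative):
-
-```python
-with tf.variable_scope("layer1", initializer=tf.constant_initializer(0.1)):
-    scope = tf.get_variable_scope()
-    v = tf.get_variable("w", shape=[10])  # uses the default initializer
-    scope.reuse_variables()
-    v2 = tf.get_variable("w", shape=[10])  # returns the existing variable
-assert v is v2
-```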
-- - -
-
-#### `tf.VariableScope.__init__(reuse, name='', initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, name_scope='', dtype=tf.float32, use_resource=None)` {#VariableScope.__init__}
-
-Creates a new VariableScope with the given properties.
-
-
-- - -
-
-#### `tf.VariableScope.caching_device` {#VariableScope.caching_device}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.custom_getter` {#VariableScope.custom_getter}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.dtype` {#VariableScope.dtype}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.get_variable(var_store, name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None)` {#VariableScope.get_variable}
-
-Gets an existing variable with this name or create a new one.
-
-
-- - -
-
-#### `tf.VariableScope.initializer` {#VariableScope.initializer}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.name` {#VariableScope.name}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.original_name_scope` {#VariableScope.original_name_scope}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.partitioner` {#VariableScope.partitioner}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.regularizer` {#VariableScope.regularizer}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.reuse` {#VariableScope.reuse}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.reuse_variables()` {#VariableScope.reuse_variables}
-
-Reuse variables in this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_caching_device(caching_device)` {#VariableScope.set_caching_device}
-
-Set caching_device for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_custom_getter(custom_getter)` {#VariableScope.set_custom_getter}
-
-Set custom getter for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_dtype(dtype)` {#VariableScope.set_dtype}
-
-Set data type for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_initializer(initializer)` {#VariableScope.set_initializer}
-
-Set initializer for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_partitioner(partitioner)` {#VariableScope.set_partitioner}
-
-Set partitioner for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_regularizer(regularizer)` {#VariableScope.set_regularizer}
-
-Set regularizer for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_use_resource(use_resource)` {#VariableScope.set_use_resource}
-
-Sets whether to use ResourceVariables for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.use_resource` {#VariableScope.use_resource}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_check_numerics_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_check_numerics_ops.md
deleted file mode 100644
index 5895f744b8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_check_numerics_ops.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.add_check_numerics_ops()` {#add_check_numerics_ops}
-
-Connect a `check_numerics` to every floating point tensor.
-
-`check_numerics` operations themselves are added for each `half`, `float`,
-or `double` tensor in the graph. For all ops in the graph, the
-`check_numerics` op for all of its (`half`, `float`, or `double`) inputs
-is guaranteed to run before the `check_numerics` op on any of its outputs.
-
-##### Returns:
-
- A `group` op depending on all `check_numerics` ops added.
-
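-A minimal sketch (the computation is illustrative):
-
-```python
-x = tf.constant([1.0, 0.0])
-y = tf.log(x)  # -inf for the 0.0 entry
-check = tf.add_check_numerics_ops()
-with tf.Session() as sess:
-    # Raises InvalidArgumentError, pointing at the op that produced Inf.
-    sess.run([check, y])
-```
-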
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_n.md
deleted file mode 100644
index 306aaf4ddd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_n.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.add_n(inputs, name=None)` {#add_n}
-
-Adds all input tensors element-wise.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of `Tensor` objects, each with same shape and type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of same shape and type as the elements of `inputs`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `inputs` don't all have same shape and dtype or the shape
- cannot be inferred.
-
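-For instance (the constants are illustrative):
-
-```python
-a = tf.constant([1, 2])
-b = tf.constant([3, 4])
-c = tf.constant([5, 6])
-total = tf.add_n([a, b, c])  # => [9, 12]
-```
-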
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_to_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_to_collection.md
deleted file mode 100644
index 1d8d752917..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.add_to_collection.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.add_to_collection(name, value)` {#add_to_collection}
-
-Wrapper for `Graph.add_to_collection()` using the default graph.
-
-See [`Graph.add_to_collection()`](../../api_docs/python/framework.md#Graph.add_to_collection)
-for more details.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collection.
-
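-For instance, a minimal sketch (the collection name is illustrative):
-
-```python
-w = tf.Variable(tf.zeros([10]), name="w")
-tf.add_to_collection("my_vars", w)
-tf.get_collection("my_vars")  # => [w]
-```
-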
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_greater_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_greater_equal.md
deleted file mode 100644
index 3674f530e7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_greater_equal.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.assert_greater_equal(x, y, data=None, summarize=None, message=None, name=None)` {#assert_greater_equal}
-
-Assert the condition `x >= y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_greater_equal(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] >= y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to
- "assert_greater_equal"
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x >= y` is False.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_non_positive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_non_positive.md
deleted file mode 100644
index 7f9547f39d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.assert_non_positive.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.assert_non_positive(x, data=None, summarize=None, message=None, name=None)` {#assert_non_positive}
-
-Assert the condition `x <= 0` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_non_positive(x)]):
- output = tf.reduce_sum(x)
-```
-
-Non-positive means, for every element `x[i]` of `x`, we have `x[i] <= 0`.
-If `x` is empty this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional).
- Defaults to "assert_non_positive".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` is all non-positive.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.case.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.case.md
deleted file mode 100644
index 02bae13a15..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.case.md
+++ /dev/null
@@ -1,75 +0,0 @@
-### `tf.case(pred_fn_pairs, default, exclusive=False, name='case')` {#case}
-
-Create a case operation.
-
-The `pred_fn_pairs` parameter is a dict or list of pairs of size N.
-Each pair contains a boolean scalar tensor and a python callable that
-creates the tensors to be returned if the boolean evaluates to True.
-`default` is a callable generating a list of tensors. All the callables
-in `pred_fn_pairs` as well as `default` should return the same number
-and types of tensors.
-
-If `exclusive==True`, all predicates are evaluated, and an exception is
-thrown if more than one of the predicates evaluates to `True`.
-If `exclusive==False`, execution stops at the first predicate that
-evaluates to True, and the tensors generated by the corresponding function
-are returned immediately. If none of the predicates evaluate to True, this
-operation returns the tensors generated by `default`.
-
-Example 1:
- Pseudocode:
- ```
- if (x < y) return 17;
- else return 23;
- ```
-
- Expressions:
- ```
- f1 = lambda: tf.constant(17)
- f2 = lambda: tf.constant(23)
- r = case([(tf.less(x, y), f1)], default=f2)
- ```
-
-Example 2:
- Pseudocode:
- ```
- if (x < y && x > z) raise OpError("Only one predicate may evaluate true");
- if (x < y) return 17;
- else if (x > z) return 23;
- else return -1;
- ```
-
- Expressions:
- ```
- x = tf.constant(0)
- y = tf.constant(1)
- z = tf.constant(2)
- def f1(): return tf.constant(17)
- def f2(): return tf.constant(23)
- def f3(): return tf.constant(-1)
- r = case({tf.less(x, y): f1, tf.greater(x, z): f2},
- default=f3, exclusive=True)
- ```
-
-##### Args:
-
-
-* <b>`pred_fn_pairs`</b>: Dict or list of pairs of a boolean scalar tensor and a
- callable which returns a list of tensors.
-* <b>`default`</b>: A callable that returns a list of tensors.
-* <b>`exclusive`</b>: True iff at most one predicate is allowed to evaluate to `True`.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- The tensors returned by the first pair whose predicate evaluated to True, or
- those returned by `default` if none does.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `pred_fn_pairs` is not a list/dictionary.
-* <b>`TypeError`</b>: If `pred_fn_pairs` is a list but does not contain 2-tuples.
-* <b>`TypeError`</b>: If `fns[i]` is not callable for any i, or `default` is not
- callable.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cholesky.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cholesky.md
deleted file mode 100644
index 046c443925..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cholesky.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.cholesky(input, name=None)` {#cholesky}
-
-Computes the Cholesky decomposition of one or more square matrices.
-
-The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
-form square matrices, with the same constraints as the single matrix Cholesky
-decomposition above. The output is a tensor of the same shape as the input
-containing the Cholesky decompositions for all input submatrices `[..., :, :]`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
- Shape is `[..., M, M]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
-
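-For instance (the matrix is illustrative):
-
-```python
-a = tf.constant([[4.0, 2.0], [2.0, 5.0]])  # symmetric positive definite
-chol = tf.cholesky(a)  # lower triangular
-# tf.matmul(chol, chol, transpose_b=True) recovers `a`.
-```
-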
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cond.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cond.md
deleted file mode 100644
index bb94a0610a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cond.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.cond(pred, fn1, fn2, name=None)` {#cond}
-
-Return either fn1() or fn2() based on the boolean predicate `pred`.
-
-`fn1` and `fn2` both return lists of output tensors. `fn1` and `fn2` must have
-the same non-zero number and type of outputs.
-
-Note that the conditional execution applies only to the operations defined in
-fn1 and fn2. Consider the following simple program:
-
-```python
-z = tf.multiply(a, b)
-result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
-```
-
-If x < y, the `tf.add` operation will be executed and the `tf.square`
-operation will not be executed. Since z is needed for at least one
-branch of the cond, the `tf.multiply` operation is always executed,
-unconditionally. Although this behavior is consistent with the dataflow
-model of TensorFlow, it has occasionally surprised users who expected
-lazier semantics.
-
-##### Args:
-
-
-* <b>`pred`</b>: A scalar determining whether to return the result of `fn1` or `fn2`.
-* <b>`fn1`</b>: The callable to be performed if pred is true.
-* <b>`fn2`</b>: The callable to be performed if pred is false.
-* <b>`name`</b>: Optional name prefix for the returned tensors.
-
-##### Returns:
-
- Tensors returned by the call to either `fn1` or `fn2`. If the callables
- return a singleton list, the element is extracted from the list.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn1` or `fn2` is not callable.
-* <b>`ValueError`</b>: if `fn1` and `fn2` do not return the same number of tensors, or
- return tensors of different types.
-
-
-##### Example:
-
-```python
- x = tf.constant(2)
- y = tf.constant(5)
- def f1(): return tf.multiply(x, 17)
- def f2(): return tf.add(y, 23)
- r = tf.cond(tf.less(x, y), f1, f2)
- # r is set to f1().
- # Operations in f2 (e.g., tf.add) are not executed.
-```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.md
deleted file mode 100644
index da7082ffb6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.md
+++ /dev/null
@@ -1,89 +0,0 @@
-A StochasticTensor with an observed value.
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.__init__(dist, value, name=None)` {#ObservedStochasticTensor.__init__}
-
-Construct an `ObservedStochasticTensor`.
-
-`ObservedStochasticTensor` is backed by distribution `dist` and uses the
-provided value instead of using the current value type to draw a value from
-the distribution. The provided value argument must be appropriately shaped
-to have come from the distribution.
-
-##### Args:
-
-
-* <b>`dist`</b>: an instance of `Distribution`.
-* <b>`value`</b>: a Tensor containing the observed value
-* <b>`name`</b>: a name for this `ObservedStochasticTensor` and its ops.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `dist` is not an instance of `Distribution`.
-* <b>`ValueError`</b>: if `value` is not compatible with the distribution.
-
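-A minimal sketch (the distribution and observed value are illustrative):
-
-```python
-sg = tf.contrib.bayesflow.stochastic_tensor
-ds = tf.contrib.distributions
-# The tensor is pinned to the observed value instead of being sampled.
-st = sg.ObservedStochasticTensor(ds.Normal(loc=0., scale=1.),
-                                 value=tf.constant(0.5))
-x = st.value()  # == 0.5
-```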
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.distribution` {#ObservedStochasticTensor.distribution}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.dtype` {#ObservedStochasticTensor.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.entropy(name='entropy')` {#ObservedStochasticTensor.entropy}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.graph` {#ObservedStochasticTensor.graph}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.loss(final_loss, name=None)` {#ObservedStochasticTensor.loss}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.mean(name='mean')` {#ObservedStochasticTensor.mean}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.name` {#ObservedStochasticTensor.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.value(name='value')` {#ObservedStochasticTensor.value}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.ObservedStochasticTensor.value_type` {#ObservedStochasticTensor.value_type}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Bernoulli.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Bernoulli.md
deleted file mode 100644
index 0a5bca5052..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Bernoulli.md
+++ /dev/null
@@ -1,593 +0,0 @@
-Bernoulli distribution.
-
-The Bernoulli distribution with `probs` parameter, i.e., the probability of a
-`1` outcome (vs a `0` outcome).
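-
-A minimal sketch (the probabilities are illustrative):
-
-```python
-dist = tf.contrib.distributions.Bernoulli(probs=[0.1, 0.5, 0.9])
-samples = dist.sample(4)          # shape [4, 3], dtype int32
-log_p = dist.log_prob([0, 0, 1])  # elementwise log P[X = x]
-mean = dist.mean()                # == probs
-```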
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.__init__(logits=None, probs=None, dtype=tf.int32, validate_args=False, allow_nan_stats=True, name='Bernoulli')` {#Bernoulli.__init__}
-
-Construct Bernoulli distributions.
-
-##### Args:
-
-
-* <b>`logits`</b>: An N-D `Tensor` representing the log-odds of a `1` event. Each
- entry in the `Tensor` parametrizes an independent Bernoulli distribution
- where the probability of an event is sigmoid(logits). Only one of
- `logits` or `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor` representing the probability of a `1`
- event. Each entry in the `Tensor` parameterizes an independent
- Bernoulli distribution. Only one of `logits` or `probs` should be passed
- in.
-* <b>`dtype`</b>: The type of the event samples. Default: `int32`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `probs` and `logits` are both passed, or if neither is passed.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.allow_nan_stats` {#Bernoulli.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.batch_shape` {#Bernoulli.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.batch_shape_tensor(name='batch_shape_tensor')` {#Bernoulli.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.cdf(value, name='cdf')` {#Bernoulli.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.copy(**override_parameters_kwargs)` {#Bernoulli.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.covariance(name='covariance')` {#Bernoulli.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.dtype` {#Bernoulli.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.entropy(name='entropy')` {#Bernoulli.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.event_shape` {#Bernoulli.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.event_shape_tensor(name='event_shape_tensor')` {#Bernoulli.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.is_continuous` {#Bernoulli.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.is_scalar_batch(name='is_scalar_batch')` {#Bernoulli.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.is_scalar_event(name='is_scalar_event')` {#Bernoulli.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.log_cdf(value, name='log_cdf')` {#Bernoulli.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.log_prob(value, name='log_prob')` {#Bernoulli.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.log_survival_function(value, name='log_survival_function')` {#Bernoulli.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.logits` {#Bernoulli.logits}
-
-Log-odds of a `1` outcome (vs `0`).
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.mean(name='mean')` {#Bernoulli.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.mode(name='mode')` {#Bernoulli.mode}
-
-Mode.
-
-Additional documentation from `Bernoulli`:
-
-Returns `1` if `prob > 0.5` and `0` otherwise.
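-
-A small hedged illustration (assuming a `probs` parameterization); note that
-the rule above means a tie at `probs == 0.5` yields mode `0`:
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Bernoulli(probs=[0.3, 0.5, 0.7])
-dist.mode()  # ==> [0, 0, 1]
-```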
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.name` {#Bernoulli.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Bernoulli.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
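-
-As a hedged sketch (assuming `Bernoulli` reports a `logits` parameter),
-requesting a batch of 64 scalar samples yields the shape needed for `logits`:
-
-```python
-import tensorflow as tf
-
-shapes = tf.contrib.distributions.Bernoulli.param_shapes([64])
-# shapes ==> {'logits': <int32 Tensor evaluating to [64]>}
-```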
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.param_static_shapes(cls, sample_shape)` {#Bernoulli.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.parameters` {#Bernoulli.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.prob(value, name='prob')` {#Bernoulli.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.probs` {#Bernoulli.probs}
-
-Probability of a `1` outcome (vs `0`).
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.reparameterization_type` {#Bernoulli.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.sample(sample_shape=(), seed=None, name='sample')` {#Bernoulli.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
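-
-A shape-only hedged sketch: `sample_shape` is prepended to
-`batch_shape + event_shape` (for `Bernoulli`, `event_shape = []`):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Bernoulli(probs=[0.2, 0.7])  # batch_shape=[2]
-dist.sample()        # shape: [2]
-dist.sample(5)       # shape: [5, 2]
-dist.sample([3, 4])  # shape: [3, 4, 2]
-```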
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.stddev(name='stddev')` {#Bernoulli.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.survival_function(value, name='survival_function')` {#Bernoulli.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.validate_args` {#Bernoulli.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Bernoulli.variance(name='variance')` {#Bernoulli.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
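-
-For a Bernoulli specifically, `Var = p * (1 - p)` with `p = probs`; a small
-hedged check:
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Bernoulli(probs=[0.1, 0.5])
-dist.variance()  # ==> ~[0.09, 0.25], i.e. p * (1 - p)
-dist.stddev()    # ==> ~[0.3, 0.5], the square root of the variance
-```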
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Chi2WithAbsDf.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Chi2WithAbsDf.md
deleted file mode 100644
index a6b66f8560..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Chi2WithAbsDf.md
+++ /dev/null
@@ -1,572 +0,0 @@
-Chi2 with parameter transform `df = floor(abs(df))`.
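-
-The transform makes any real-valued `df` usable; a minimal hedged sketch:
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Chi2WithAbsDf(df=-4.7)
-dist.df  # ==> 4.0, i.e. floor(abs(-4.7))
-```
-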
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.__init__(df, validate_args=False, allow_nan_stats=True, name='Chi2WithAbsDf')` {#Chi2WithAbsDf.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.allow_nan_stats` {#Chi2WithAbsDf.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined; e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g., the mean of
-Student's t with df = 1 is undefined (there is no clear way to say it is
-either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.batch_shape` {#Chi2WithAbsDf.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.batch_shape_tensor(name='batch_shape_tensor')` {#Chi2WithAbsDf.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.cdf(value, name='cdf')` {#Chi2WithAbsDf.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.concentration` {#Chi2WithAbsDf.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.copy(**override_parameters_kwargs)` {#Chi2WithAbsDf.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.covariance(name='covariance')` {#Chi2WithAbsDf.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.df` {#Chi2WithAbsDf.df}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.dtype` {#Chi2WithAbsDf.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.entropy(name='entropy')` {#Chi2WithAbsDf.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.event_shape` {#Chi2WithAbsDf.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.event_shape_tensor(name='event_shape_tensor')` {#Chi2WithAbsDf.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.is_continuous` {#Chi2WithAbsDf.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.is_scalar_batch(name='is_scalar_batch')` {#Chi2WithAbsDf.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.is_scalar_event(name='is_scalar_event')` {#Chi2WithAbsDf.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.log_cdf(value, name='log_cdf')` {#Chi2WithAbsDf.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.log_prob(value, name='log_prob')` {#Chi2WithAbsDf.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.log_survival_function(value, name='log_survival_function')` {#Chi2WithAbsDf.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.mean(name='mean')` {#Chi2WithAbsDf.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.mode(name='mode')` {#Chi2WithAbsDf.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
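-
-Since `Chi2(df)` is `Gamma(concentration=df/2, rate=1/2)`, the rule above
-reduces to `mode = df - 2` for `df > 2`; a hedged numeric check:
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Chi2WithAbsDf(df=6.)
-dist.mode()  # ==> 4.0, i.e. (6/2 - 1) / (1/2)
-```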
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.name` {#Chi2WithAbsDf.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Chi2WithAbsDf.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.param_static_shapes(cls, sample_shape)` {#Chi2WithAbsDf.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.parameters` {#Chi2WithAbsDf.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.prob(value, name='prob')` {#Chi2WithAbsDf.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.rate` {#Chi2WithAbsDf.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.reparameterization_type` {#Chi2WithAbsDf.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.sample(sample_shape=(), seed=None, name='sample')` {#Chi2WithAbsDf.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.stddev(name='stddev')` {#Chi2WithAbsDf.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.survival_function(value, name='survival_function')` {#Chi2WithAbsDf.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.validate_args` {#Chi2WithAbsDf.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2WithAbsDf.variance(name='variance')` {#Chi2WithAbsDf.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Dirichlet.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Dirichlet.md
deleted file mode 100644
index 3d6690ab9a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Dirichlet.md
+++ /dev/null
@@ -1,682 +0,0 @@
-Dirichlet distribution.
-
-The Dirichlet distribution is defined over the
-[`(k-1)`-simplex](https://en.wikipedia.org/wiki/Simplex) using a positive,
-length-`k` vector `concentration` (`k > 1`). The Dirichlet is identically the
-Beta distribution when `k = 2`.
-
-#### Mathematical Details
-
-The Dirichlet is a distribution over the open `(k-1)`-simplex, i.e.,
-
-```none
-S^{k-1} = { (x_0, ..., x_{k-1}) in R^k : sum_j x_j = 1 and all_j x_j > 0 }.
-```
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; alpha) = prod_j x_j**(alpha_j - 1) / Z
-Z = prod_j Gamma(alpha_j) / Gamma(sum_j alpha_j)
-```
-
-where:
-
-* `x in S^{k-1}`, i.e., the `(k-1)`-simplex,
-* `concentration = alpha = [alpha_0, ..., alpha_{k-1}]`, `alpha_j > 0`,
-* `Z` is the normalization constant aka the [multivariate beta function](
- https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function),
- and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The `concentration` represents mean total counts of class occurrence, i.e.,
-
-```none
-concentration = alpha = mean * total_concentration
-```
-
-where `mean` in `S^{k-1}` and `total_concentration` is a positive real number
-representing a mean total count.
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-#### Examples
-
-```python
-# Create a single trivariate Dirichlet, with the 3rd class being three times
-# more frequent than the first. I.e., batch_shape=[], event_shape=[3].
-alpha = [1., 2, 3]
-dist = Dirichlet(alpha)
-
-dist.sample([4, 5]) # shape: [4, 5, 3]
-
-# x has one sample, one batch, three classes:
-x = [.2, .3, .5] # shape: [3]
-dist.prob(x) # shape: []
-
-# x has two samples from one batch:
-x = [[.1, .4, .5],
- [.2, .3, .5]]
-dist.prob(x) # shape: [2]
-
-# alpha will be broadcast to shape [5, 7, 3] to match x.
-x = [[...]] # shape: [5, 7, 3]
-dist.prob(x) # shape: [5, 7]
-```
-
-```python
-# Create batch_shape=[2], event_shape=[3]:
-alpha = [[1., 2, 3],
- [4, 5, 6]] # shape: [2, 3]
-dist = Dirichlet(alpha)
-
-dist.sample([4, 5]) # shape: [4, 5, 2, 3]
-
-x = [.2, .3, .5]
-# x will be broadcast as [[.2, .3, .5],
-# [.2, .3, .5]],
-# thus matching `batch_shape + event_shape`, i.e., [2, 3].
-dist.prob(x) # shape: [2]
-```
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.__init__(concentration, validate_args=False, allow_nan_stats=True, name='Dirichlet')` {#Dirichlet.__init__}
-
-Initialize a batch of Dirichlet distributions.
-
-##### Args:
-
-
-* <b>`concentration`</b>: Positive floating-point `Tensor` indicating mean number
- of class occurrences; aka "alpha". Implies `self.dtype`, and
- `self.batch_shape`, `self.event_shape`, i.e., if
- `concentration.shape = [N1, N2, ..., Nm, k]` then
- `batch_shape = [N1, N2, ..., Nm]` and
- `event_shape = [k]`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.allow_nan_stats` {#Dirichlet.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined; e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g., the mean of
-Student's t with df = 1 is undefined (there is no clear way to say it is
-either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.batch_shape` {#Dirichlet.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.batch_shape_tensor(name='batch_shape_tensor')` {#Dirichlet.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.cdf(value, name='cdf')` {#Dirichlet.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.concentration` {#Dirichlet.concentration}
-
-Concentration parameter; expected counts for that coordinate.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.copy(**override_parameters_kwargs)` {#Dirichlet.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.covariance(name='covariance')` {#Dirichlet.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.dtype` {#Dirichlet.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.entropy(name='entropy')` {#Dirichlet.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.event_shape` {#Dirichlet.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.event_shape_tensor(name='event_shape_tensor')` {#Dirichlet.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.is_continuous` {#Dirichlet.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.is_scalar_batch(name='is_scalar_batch')` {#Dirichlet.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.is_scalar_event(name='is_scalar_event')` {#Dirichlet.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.log_cdf(value, name='log_cdf')` {#Dirichlet.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.log_prob(value, name='log_prob')` {#Dirichlet.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Dirichlet`:
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype` that
-lies in the `(k-1)`-simplex, where `k = self.event_shape[-1]`; i.e.,
-`tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with
-`self.batch_shape + self.event_shape`.
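-
-A hedged sketch of a valid input (three classes, components summing to 1):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Dirichlet([1., 2., 3.])
-dist.log_prob([0.2, 0.3, 0.5])  # OK: on the simplex.
-# [0.2, 0.3, 0.6] sums to 1.1; with `validate_args=True` such inputs are
-# rejected rather than silently producing an incorrect density.
-```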
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.log_survival_function(value, name='log_survival_function')` {#Dirichlet.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.mean(name='mean')` {#Dirichlet.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.mode(name='mode')` {#Dirichlet.mode}
-
-Mode.
-
-Additional documentation from `Dirichlet`:
-
-Note: The mode is undefined when any `concentration <= 1`. If
-`self.allow_nan_stats` is `True`, `NaN` is used for undefined modes. If
-`self.allow_nan_stats` is `False` an exception is raised when one or more
-modes are undefined.
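-
-When every `concentration_j > 1`, the mode has the closed form
-`mode_j = (alpha_j - 1) / (sum_k alpha_k - K)` for `K` classes; a hedged
-numeric check:
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Dirichlet([2., 3., 4.])  # sum = 9, K = 3
-dist.mode()  # ==> ~[1/6, 2/6, 3/6]
-```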
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.name` {#Dirichlet.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Dirichlet.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.param_static_shapes(cls, sample_shape)` {#Dirichlet.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.parameters` {#Dirichlet.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.prob(value, name='prob')` {#Dirichlet.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Dirichlet`:
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype` that
-lies in the `(k-1)`-simplex, where `k = self.event_shape[-1]`; i.e.,
-`tf.reduce_sum(value, -1) = 1`. It must have a shape compatible with
-`self.batch_shape + self.event_shape`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.reparameterization_type` {#Dirichlet.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.sample(sample_shape=(), seed=None, name='sample')` {#Dirichlet.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.stddev(name='stddev')` {#Dirichlet.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.survival_function(value, name='survival_function')` {#Dirichlet.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.total_concentration` {#Dirichlet.total_concentration}
-
-Sum of last dim of concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.validate_args` {#Dirichlet.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Dirichlet.variance(name='variance')` {#Dirichlet.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Distribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Distribution.md
deleted file mode 100644
index 0076cdc4ff..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.Distribution.md
+++ /dev/null
@@ -1,690 +0,0 @@
-A generic probability distribution base class.
-
-`Distribution` is a base class for constructing and organizing properties
-(e.g., mean, variance) of random variables (e.g., Bernoulli, Gaussian).
-
-### Subclassing
-
-Subclasses are expected to implement a leading-underscore version of the
-same-named function. The argument signature should be identical except for
-the omission of `name="..."`. For example, to enable `log_prob(value,
-name="log_prob")` a subclass should implement `_log_prob(value)`.
-
-Subclasses can append to public-level docstrings by providing
-docstrings for their method specializations. For example:
-
-```python
-@distribution_util.AppendDocstring("Some other details.")
-def _log_prob(self, value):
- ...
-```
-
-would add the string "Some other details." to the `log_prob` function
-docstring. This is implemented as a simple decorator to avoid python
-linter complaining about missing Args/Returns/Raises sections in the
-partial docstrings.
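-
-Putting the convention together, a minimal, purely illustrative subclass
-might look as follows. The constructor arguments mirror
-`Distribution.__init__` documented below; `MyPointMass` is a hypothetical
-name, not a library class:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import distributions
-
-class MyPointMass(distributions.Distribution):
-  """Toy distribution placing all mass at `loc` (sketch only)."""
-
-  def __init__(self, loc, name="MyPointMass"):
-    self._loc = tf.convert_to_tensor(loc, name="loc")
-    super(MyPointMass, self).__init__(
-        dtype=self._loc.dtype,
-        is_continuous=False,
-        reparameterization_type=distributions.NOT_REPARAMETERIZED,
-        validate_args=False,
-        allow_nan_stats=True,
-        parameters={"loc": loc},
-        graph_parents=[self._loc],
-        name=name)
-
-  def _mean(self):
-    # The leading-underscore method enables the public `mean()` wrapper.
-    return tf.identity(self._loc)
-```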
-
-### Broadcasting, batching, and shapes
-
-All distributions support batches of independent distributions of that type.
-The batch shape is determined by broadcasting together the parameters.
-
-The shape of arguments to `__init__`, `cdf`, `log_cdf`, `prob`, and
-`log_prob` reflects this broadcasting, as do the return values of `sample`
-and `sample_n`.
-
-`sample_n_shape = [n] + batch_shape + event_shape`, where `sample_n_shape` is
-the shape of the `Tensor` returned from `sample_n`, `n` is the number of
-samples, `batch_shape` defines how many independent distributions there are,
-and `event_shape` defines the shape of samples from each of those independent
-distributions. Samples are independent along the `batch_shape` dimensions, but
-not necessarily so along the `event_shape` dimensions (depending on the
-particulars of the underlying distribution).
-
-Using the `Uniform` distribution as an example:
-
-```python
-minval = 3.0
-maxval = [[4.0, 6.0],
- [10.0, 12.0]]
-
-# Broadcasting:
-# This instance represents 4 Uniform distributions. Each has a lower bound at
-# 3.0 as the `minval` parameter was broadcasted to match `maxval`'s shape.
-u = Uniform(minval, maxval)
-
-# `event_shape` is `TensorShape([])`.
-event_shape = u.event_shape
-# `event_shape_t` is a `Tensor` which will evaluate to [].
-event_shape_t = u.event_shape_tensor()
-
-# Sampling returns a sample per distribution. `samples` has shape
-# [5, 2, 2], which is [n] + batch_shape + event_shape, where n=5,
-# batch_shape=[2, 2], and event_shape=[].
-samples = u.sample_n(5)
-
-# The broadcasting holds across methods. Here we use `cdf` as an example. The
-# same holds for `log_cdf` and the likelihood functions.
-
-# `cum_prob_broadcast` has shape [2, 2] because the scalar `value` argument
-# was broadcast to the shape of the `Uniform` instance.
-cum_prob_broadcast = u.cdf(4.0)
-
-# `cum_prob_per_dist` also has shape [2, 2], one entry per distribution. No
-# broadcasting occurred.
-cum_prob_per_dist = u.cdf([[4.0, 5.0],
- [6.0, 7.0]])
-
-# INVALID as the `value` argument is not broadcastable to the distribution's
-# shape.
-cum_prob_invalid = u.cdf([4.0, 5.0, 6.0])
-```
-
-### Parameter values leading to undefined statistics or distributions
-
-Some distributions do not have well-defined statistics for all initialization
-parameter values. For example, the beta distribution is parameterized by
-positive real numbers `concentration1` and `concentration0`, and does not have
-a well-defined mode if `concentration1 < 1` or `concentration0 < 1`.
-
-The user is given the option of raising an exception or returning `NaN`.
-
-```python
-a = tf.exp(tf.matmul(logits, weights_a))
-b = tf.exp(tf.matmul(logits, weights_b))
-
-# Will raise exception if ANY batch member has a < 1 or b < 1.
-dist = distributions.Beta(a, b, allow_nan_stats=False)
-mode = dist.mode().eval()
-
-# Will return NaN for batch members with either a < 1 or b < 1.
-dist = distributions.Beta(a, b, allow_nan_stats=True)  # Default behavior
-mode = dist.mode().eval()
-```
-
-In all cases, an exception is raised if *invalid* parameters are passed, e.g.
-
-```python
-# Will raise an exception if any Op is run.
-negative_a = -1.0 * a # beta distribution by definition has a > 0.
-dist = distributions.Beta(negative_a, b, allow_nan_stats=True)
-dist.mean().eval()
-```
-- - -
-
-#### `tf.contrib.distributions.Distribution.__init__(dtype, is_continuous, reparameterization_type, validate_args, allow_nan_stats, parameters=None, graph_parents=None, name=None)` {#Distribution.__init__}
-
-Constructs the `Distribution`.
-
-**This is a private method for subclass use.**
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of the event samples. `None` implies no type-enforcement.
-* <b>`is_continuous`</b>: Python `bool`. If `True` this `Distribution` is continuous
- over its supported domain.
-* <b>`reparameterization_type`</b>: Instance of `ReparameterizationType`.
- If `distributions.FULLY_REPARAMETERIZED`, this
- `Distribution` can be reparameterized in terms of some standard
- distribution with a function whose Jacobian is constant for the support
- of the standard distribution. If `distributions.NOT_REPARAMETERIZED`,
- then no such reparameterization is available.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`parameters`</b>: Python `dict` of parameters used to instantiate this
- `Distribution`.
-* <b>`graph_parents`</b>: Python `list` of graph prerequisites of this
- `Distribution`.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class. Default:
- subclass name.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any member of graph_parents is `None` or not a `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.allow_nan_stats` {#Distribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined; e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g., the mean of
-Student's t with df = 1 is undefined (there is no clear way to say it is
-either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.batch_shape` {#Distribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.batch_shape_tensor(name='batch_shape_tensor')` {#Distribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.cdf(value, name='cdf')` {#Distribution.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.copy(**override_parameters_kwargs)` {#Distribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of `self.parameters` and `override_parameters_kwargs`, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
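-
-A hedged usage sketch (assuming a `Normal(loc, scale)` constructor):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Normal(loc=0., scale=1.)
-dist2 = dist.copy(scale=2.)  # Same loc; only scale is overridden.
-# dist2.parameters == dict(dist.parameters, scale=2.)
-```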
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.covariance(name='covariance')` {#Distribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
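-
-A shape-only hedged sketch using a vector-valued distribution (assuming
-`MultivariateNormalDiag` is available):
-
-```python
-import tensorflow as tf
-
-mvn = tf.contrib.distributions.MultivariateNormalDiag(
-    loc=tf.zeros([3]), scale_diag=tf.ones([3]))
-mvn.covariance()  # shape: [3, 3]; here the 3 x 3 identity matrix.
-```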
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.dtype` {#Distribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.entropy(name='entropy')` {#Distribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.event_shape` {#Distribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.event_shape_tensor(name='event_shape_tensor')` {#Distribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.is_continuous` {#Distribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.is_scalar_batch(name='is_scalar_batch')` {#Distribution.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.is_scalar_event(name='is_scalar_event')` {#Distribution.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.log_cdf(value, name='log_cdf')` {#Distribution.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.log_prob(value, name='log_prob')` {#Distribution.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.log_survival_function(value, name='log_survival_function')` {#Distribution.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
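-
-A hedged sketch of the numerical difference (assuming a standard `Normal`):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Normal(loc=0., scale=1.)
-dist.log_survival_function(10.)  # finite, ~ -53.2
-tf.log(1. - dist.cdf(10.))       # cdf(10.) rounds to 1.0, so this is -inf
-```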
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.mean(name='mean')` {#Distribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.mode(name='mode')` {#Distribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.name` {#Distribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Distribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.param_static_shapes(cls, sample_shape)` {#Distribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
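-
-A hedged sketch (assuming `Normal` reports `loc` and `scale` parameters):
-
-```python
-import tensorflow as tf
-
-shapes = tf.contrib.distributions.Normal.param_static_shapes([100])
-# shapes ==> {'loc': TensorShape([100]), 'scale': TensorShape([100])}
-```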
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.parameters` {#Distribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.prob(value, name='prob')` {#Distribution.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.reparameterization_type` {#Distribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.sample(sample_shape=(), seed=None, name='sample')` {#Distribution.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.stddev(name='stddev')` {#Distribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.survival_function(value, name='survival_function')` {#Distribution.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.validate_args` {#Distribution.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Distribution.variance(name='variance')` {#Distribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.GammaWithSoftplusConcentrationRate.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.GammaWithSoftplusConcentrationRate.md
deleted file mode 100644
index 059ab2b546..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.GammaWithSoftplusConcentrationRate.md
+++ /dev/null
@@ -1,565 +0,0 @@
-`Gamma` with softplus of `concentration` and `rate`.
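-
-A minimal sketch (parameter values are illustrative): unconstrained, even
-negative, inputs are mapped through `softplus(x) = log(1 + exp(x))` to valid
-positive parameters:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-dist = ds.GammaWithSoftplusConcentrationRate(concentration=[-1., 2.],
-                                             rate=[0.5, -0.3])
-# Effectively Gamma(softplus([-1., 2.]), softplus([0.5, -0.3])).
-```
-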
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.__init__(concentration, rate, validate_args=False, allow_nan_stats=True, name='GammaWithSoftplusConcentrationRate')` {#GammaWithSoftplusConcentrationRate.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.allow_nan_stats` {#GammaWithSoftplusConcentrationRate.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.batch_shape` {#GammaWithSoftplusConcentrationRate.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.batch_shape_tensor(name='batch_shape_tensor')` {#GammaWithSoftplusConcentrationRate.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.cdf(value, name='cdf')` {#GammaWithSoftplusConcentrationRate.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.concentration` {#GammaWithSoftplusConcentrationRate.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.copy(**override_parameters_kwargs)` {#GammaWithSoftplusConcentrationRate.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
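-
-For example (a hedged sketch; parameter values are illustrative):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-dist = ds.GammaWithSoftplusConcentrationRate(concentration=1., rate=2.)
-dist2 = dist.copy(rate=3.)  # same concentration, overridden rate
-```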
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.covariance(name='covariance')` {#GammaWithSoftplusConcentrationRate.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.dtype` {#GammaWithSoftplusConcentrationRate.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.entropy(name='entropy')` {#GammaWithSoftplusConcentrationRate.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.event_shape` {#GammaWithSoftplusConcentrationRate.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.event_shape_tensor(name='event_shape_tensor')` {#GammaWithSoftplusConcentrationRate.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.is_continuous` {#GammaWithSoftplusConcentrationRate.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.is_scalar_batch(name='is_scalar_batch')` {#GammaWithSoftplusConcentrationRate.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.is_scalar_event(name='is_scalar_event')` {#GammaWithSoftplusConcentrationRate.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.log_cdf(value, name='log_cdf')` {#GammaWithSoftplusConcentrationRate.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.log_prob(value, name='log_prob')` {#GammaWithSoftplusConcentrationRate.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.log_survival_function(value, name='log_survival_function')` {#GammaWithSoftplusConcentrationRate.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.mean(name='mean')` {#GammaWithSoftplusConcentrationRate.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.mode(name='mode')` {#GammaWithSoftplusConcentrationRate.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(concentration - 1) / rate` when
-`concentration > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.name` {#GammaWithSoftplusConcentrationRate.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#GammaWithSoftplusConcentrationRate.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.param_static_shapes(cls, sample_shape)` {#GammaWithSoftplusConcentrationRate.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
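-
-An illustrative sketch (using `Normal`, which is assumed here to implement
-`_param_shapes`; for a scalar-event distribution each parameter simply takes
-the requested sample shape):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-shapes = ds.Normal.param_static_shapes(sample_shape=[10, 2])
-# => {'loc': TensorShape([10, 2]), 'scale': TensorShape([10, 2])}
-```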
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.parameters` {#GammaWithSoftplusConcentrationRate.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.prob(value, name='prob')` {#GammaWithSoftplusConcentrationRate.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.rate` {#GammaWithSoftplusConcentrationRate.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.reparameterization_type` {#GammaWithSoftplusConcentrationRate.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.sample(sample_shape=(), seed=None, name='sample')` {#GammaWithSoftplusConcentrationRate.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.stddev(name='stddev')` {#GammaWithSoftplusConcentrationRate.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.survival_function(value, name='survival_function')` {#GammaWithSoftplusConcentrationRate.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.validate_args` {#GammaWithSoftplusConcentrationRate.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.GammaWithSoftplusConcentrationRate.variance(name='variance')` {#GammaWithSoftplusConcentrationRate.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.md
deleted file mode 100644
index e99645559c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.md
+++ /dev/null
@@ -1,578 +0,0 @@
-`InverseGamma` with softplus of `concentration` and `rate`.
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.__init__(concentration, rate, validate_args=False, allow_nan_stats=True, name='InverseGammaWithSoftplusConcentrationRate')` {#InverseGammaWithSoftplusConcentrationRate.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.allow_nan_stats` {#InverseGammaWithSoftplusConcentrationRate.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.batch_shape` {#InverseGammaWithSoftplusConcentrationRate.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.batch_shape_tensor(name='batch_shape_tensor')` {#InverseGammaWithSoftplusConcentrationRate.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.cdf(value, name='cdf')` {#InverseGammaWithSoftplusConcentrationRate.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.concentration` {#InverseGammaWithSoftplusConcentrationRate.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.copy(**override_parameters_kwargs)` {#InverseGammaWithSoftplusConcentrationRate.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.covariance(name='covariance')` {#InverseGammaWithSoftplusConcentrationRate.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.dtype` {#InverseGammaWithSoftplusConcentrationRate.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.entropy(name='entropy')` {#InverseGammaWithSoftplusConcentrationRate.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.event_shape` {#InverseGammaWithSoftplusConcentrationRate.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.event_shape_tensor(name='event_shape_tensor')` {#InverseGammaWithSoftplusConcentrationRate.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.is_continuous` {#InverseGammaWithSoftplusConcentrationRate.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.is_scalar_batch(name='is_scalar_batch')` {#InverseGammaWithSoftplusConcentrationRate.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.is_scalar_event(name='is_scalar_event')` {#InverseGammaWithSoftplusConcentrationRate.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.log_cdf(value, name='log_cdf')` {#InverseGammaWithSoftplusConcentrationRate.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.log_prob(value, name='log_prob')` {#InverseGammaWithSoftplusConcentrationRate.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.log_survival_function(value, name='log_survival_function')` {#InverseGammaWithSoftplusConcentrationRate.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.mean(name='mean')` {#InverseGammaWithSoftplusConcentrationRate.mean}
-
-Mean.
-
-Additional documentation from `InverseGamma`:
-
-The mean of an inverse gamma distribution is
-`rate / (concentration - 1)`, when `concentration > 1`, and `NaN`
-otherwise. If `self.allow_nan_stats` is `False`, an exception will be
-raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.mode(name='mode')` {#InverseGammaWithSoftplusConcentrationRate.mode}
-
-Mode.
-
-Additional documentation from `InverseGamma`:
-
-The mode of an inverse gamma distribution is `rate / (concentration + 1)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.name` {#InverseGammaWithSoftplusConcentrationRate.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#InverseGammaWithSoftplusConcentrationRate.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.param_static_shapes(cls, sample_shape)` {#InverseGammaWithSoftplusConcentrationRate.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.parameters` {#InverseGammaWithSoftplusConcentrationRate.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.prob(value, name='prob')` {#InverseGammaWithSoftplusConcentrationRate.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.rate` {#InverseGammaWithSoftplusConcentrationRate.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.reparameterization_type` {#InverseGammaWithSoftplusConcentrationRate.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.sample(sample_shape=(), seed=None, name='sample')` {#InverseGammaWithSoftplusConcentrationRate.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.stddev(name='stddev')` {#InverseGammaWithSoftplusConcentrationRate.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.survival_function(value, name='survival_function')` {#InverseGammaWithSoftplusConcentrationRate.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.validate_args` {#InverseGammaWithSoftplusConcentrationRate.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGammaWithSoftplusConcentrationRate.variance(name='variance')` {#InverseGammaWithSoftplusConcentrationRate.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-
-Additional documentation from `InverseGamma`:
-
-Variance for inverse gamma is defined only for `concentration > 2`. If
-`self.allow_nan_stats` is `False`, an exception will be raised rather
-than returning `NaN`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.RegisterKL.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.RegisterKL.md
deleted file mode 100644
index 07fe04d122..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.RegisterKL.md
+++ /dev/null
@@ -1,43 +0,0 @@
-Decorator to register a KL divergence implementation function.
-
-Usage:
-
-```python
-@distributions.RegisterKL(distributions.Normal, distributions.Normal)
-def _kl_normal_mvn(norm_a, norm_b):
-  # Return KL(norm_a || norm_b).
-  pass
-```
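-
-A fuller sketch with a closed-form body (this mirrors the registration the
-module already ships for `Normal`/`Normal`, so running it verbatim would raise
-`ValueError`; substitute your own `Distribution` subclasses in practice, and
-note the `loc`/`scale` attribute names are assumptions):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-@ds.RegisterKL(ds.Normal, ds.Normal)
-def _kl_normal_normal(n_a, n_b, name=None):
-  # KL(N(mu_a, s_a**2) || N(mu_b, s_b**2)) in closed form.
-  with tf.name_scope(name, "kl_normal_normal", [n_a.loc, n_b.loc]):
-    var_a = tf.square(n_a.scale)
-    var_b = tf.square(n_b.scale)
-    return (tf.log(n_b.scale) - tf.log(n_a.scale) +
-            (var_a + tf.square(n_a.loc - n_b.loc)) / (2. * var_b) - 0.5)
-```
-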
-- - -
-
-#### `tf.contrib.distributions.RegisterKL.__call__(kl_fn)` {#RegisterKL.__call__}
-
-Perform the KL registration.
-
-##### Args:
-
-
-* <b>`kl_fn`</b>: The function to use for the KL divergence.
-
-##### Returns:
-
- kl_fn
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if kl_fn is not a callable.
-* <b>`ValueError`</b>: if a KL divergence function has already been registered for
- the given argument classes.
-
-
-- - -
-
-#### `tf.contrib.distributions.RegisterKL.__init__(dist_cls_a, dist_cls_b)` {#RegisterKL.__init__}
-
-Initialize the KL registrar.
-
-##### Args:
-
-
-* <b>`dist_cls_a`</b>: the class of the first argument of the KL divergence.
-* <b>`dist_cls_b`</b>: the class of the second argument of the KL divergence.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.RelaxedOneHotCategorical.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.RelaxedOneHotCategorical.md
deleted file mode 100644
index acd76a6fa6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.RelaxedOneHotCategorical.md
+++ /dev/null
@@ -1,650 +0,0 @@
-RelaxedOneHotCategorical distribution with temperature and logits.
-
-The RelaxedOneHotCategorical is a distribution over random probability
-vectors, vectors of positive real values that sum to one, which continuously
-approximates a OneHotCategorical. The degree of approximation is controlled by
-a temperature: as the temperature goes to 0, the RelaxedOneHotCategorical
-becomes discrete with a distribution described by the `logits` or `probs`
-parameters; as the temperature goes to infinity, the RelaxedOneHotCategorical
-becomes the constant distribution that is identically the constant vector
-`(1/event_size, ..., 1/event_size)`.
-
-The RelaxedOneHotCategorical distribution was concurrently introduced as the
-Gumbel-Softmax (Jang et al., 2016) and Concrete (Maddison et al., 2016)
-distributions for use as a reparameterized continuous approximation to the
-`Categorical` one-hot distribution. If you use this distribution, please cite
-both papers.
-
-#### Examples
-
-Creates a continuous distribution, which approximates a 3-class one-hot
-categorical distribution. The 2nd class is the most likely to be the
-largest component in samples drawn from this distribution.
-
-```python
-temperature = 0.5
-p = [0.1, 0.5, 0.4]
-dist = RelaxedOneHotCategorical(temperature, probs=p)
-```
-
-Creates a continuous distribution, which approximates a 3-class one-hot
-categorical distribution. The 2nd class is the most likely to be the
-largest component in samples drawn from this distribution.
-
-```python
-temperature = 0.5
-logits = [-2, 2, 0]
-dist = RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Creates a continuous distribution, which approximates a 3-class one-hot
-categorical distribution. Because the temperature is very low, samples from
-this distribution are almost discrete, with one component almost 1 and the
-others nearly 0. The 2nd class is the most likely to be the largest component
-in samples drawn from this distribution.
-
-```python
-temperature = 1e-5
-logits = [-2, 2, 0]
-dist = RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Creates a continuous distribution, which approximates a 3-class one-hot
-categorical distribution. Because the temperature is very high, samples from
-this distribution are usually close to the (1/3, 1/3, 1/3) vector. The 2nd
-class is still the most likely to be the largest component
-in samples drawn from this distribution.
-
-```python
-temperature = 10
-logits = [-2, 2, 0]
-dist = RelaxedOneHotCategorical(temperature, logits=logits)
-```
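-
-Because sampling is reparameterized, gradients can flow through samples to
-the parameters. A hedged sketch (TF 1.x graph mode; values are illustrative):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-logits = tf.Variable([-2., 2., 0.])
-dist = ds.RelaxedOneHotCategorical(temperature=0.5, logits=logits)
-sample = dist.sample()  # shape [3]; differentiable w.r.t. logits
-grads = tf.gradients(tf.reduce_sum(sample), [logits])  # not [None]
-```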
-
-Eric Jang, Shixiang Gu, and Ben Poole. Categorical Reparameterization with
-Gumbel-Softmax. 2016.
-
-Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution:
-A Continuous Relaxation of Discrete Random Variables. 2016.
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.__init__(temperature, logits=None, probs=None, dtype=tf.float32, validate_args=False, allow_nan_stats=True, name='RelaxedOneHotCategorical')` {#RelaxedOneHotCategorical.__init__}
-
-Initialize RelaxedOneHotCategorical using class log-probabilities.
-
-##### Args:
-
-
-* <b>`temperature`</b>: A 0-D `Tensor`, representing the temperature
- of a set of RelaxedOneHotCategorical distributions. The temperature
- should be positive.
-* <b>`logits`</b>: An N-D `Tensor`, `N >= 1`, representing the log probabilities
- of a set of RelaxedOneHotCategorical distributions. The first
- `N - 1` dimensions index into a batch of independent distributions and
- the last dimension represents a vector of logits for each class. Only
- one of `logits` or `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor`, `N >= 1`, representing the probabilities
- of a set of RelaxedOneHotCategorical distributions. The first `N - 1`
- dimensions index into a batch of independent distributions and the last
- dimension represents a vector of probabilities for each class. Only one
- of `logits` or `probs` should be passed in.
-* <b>`dtype`</b>: The type of the event samples (default: float32).
-* <b>`validate_args`</b>: Unused in this distribution.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. If `False`, raise an
- exception if a statistic (e.g. mean/mode/etc...) is undefined for any
- batch member. If `True`, batch members with valid parameters leading to
- undefined statistics will return NaN for this statistic.
-* <b>`name`</b>: A name for this distribution (optional).
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.allow_nan_stats` {#RelaxedOneHotCategorical.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.batch_shape` {#RelaxedOneHotCategorical.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.batch_shape_tensor(name='batch_shape_tensor')` {#RelaxedOneHotCategorical.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.bijector` {#RelaxedOneHotCategorical.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.cdf(value, name='cdf')` {#RelaxedOneHotCategorical.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.copy(**override_parameters_kwargs)` {#RelaxedOneHotCategorical.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.covariance(name='covariance')` {#RelaxedOneHotCategorical.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.distribution` {#RelaxedOneHotCategorical.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.dtype` {#RelaxedOneHotCategorical.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.entropy(name='entropy')` {#RelaxedOneHotCategorical.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.event_shape` {#RelaxedOneHotCategorical.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.event_shape_tensor(name='event_shape_tensor')` {#RelaxedOneHotCategorical.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.is_continuous` {#RelaxedOneHotCategorical.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.is_scalar_batch(name='is_scalar_batch')` {#RelaxedOneHotCategorical.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.is_scalar_event(name='is_scalar_event')` {#RelaxedOneHotCategorical.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.log_cdf(value, name='log_cdf')` {#RelaxedOneHotCategorical.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.log_prob(value, name='log_prob')` {#RelaxedOneHotCategorical.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.log_survival_function(value, name='log_survival_function')` {#RelaxedOneHotCategorical.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.mean(name='mean')` {#RelaxedOneHotCategorical.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.mode(name='mode')` {#RelaxedOneHotCategorical.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.name` {#RelaxedOneHotCategorical.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#RelaxedOneHotCategorical.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.param_static_shapes(cls, sample_shape)` {#RelaxedOneHotCategorical.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.parameters` {#RelaxedOneHotCategorical.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.prob(value, name='prob')` {#RelaxedOneHotCategorical.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.reparameterization_type` {#RelaxedOneHotCategorical.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.sample(sample_shape=(), seed=None, name='sample')` {#RelaxedOneHotCategorical.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.stddev(name='stddev')` {#RelaxedOneHotCategorical.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.survival_function(value, name='survival_function')` {#RelaxedOneHotCategorical.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.validate_args` {#RelaxedOneHotCategorical.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedOneHotCategorical.variance(name='variance')` {#RelaxedOneHotCategorical.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.bijector.CholeskyOuterProduct.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.bijector.CholeskyOuterProduct.md
deleted file mode 100644
index 78a9703bba..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.bijector.CholeskyOuterProduct.md
+++ /dev/null
@@ -1,301 +0,0 @@
-Compute `g(X) = X @ X.T`, where `X` is a lower-triangular, positive-diagonal matrix.
-
-`event_ndims` must be 0 or 2, i.e., scalar or matrix.
-
-Note: the upper-triangular part of X is ignored (whether or not it is zero).
-
-Examples:
-
-```python
-bijector.CholeskyOuterProduct(event_ndims=2).forward(x=[[1., 0], [2, 1]])
-# Result: [[1., 2], [2, 5]], i.e., x @ x.T
-
-bijector.CholeskyOuterProduct(event_ndims=2).inverse(y=[[1., 2], [2, 5]])
-# Result: [[1., 0], [2, 1]], i.e., cholesky(y).
-```
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.__init__(event_ndims=2, validate_args=False, name='cholesky_outer_product')` {#CholeskyOuterProduct.__init__}
-
-Instantiates the `CholeskyOuterProduct` bijector.
-
-##### Args:
-
-
-* <b>`event_ndims`</b>: `constant` `int32` scalar `Tensor` indicating the number of
- dimensions associated with a particular draw from the distribution. Must
- be 0 or 2.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if event_ndims is neither 0 nor 2.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.dtype` {#CholeskyOuterProduct.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.event_ndims` {#CholeskyOuterProduct.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.forward(x, name='forward')` {#CholeskyOuterProduct.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.forward_event_shape(input_shape)` {#CholeskyOuterProduct.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#CholeskyOuterProduct.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#CholeskyOuterProduct.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.graph_parents` {#CholeskyOuterProduct.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse(y, name='inverse')` {#CholeskyOuterProduct.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#CholeskyOuterProduct.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse_event_shape(output_shape)` {#CholeskyOuterProduct.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#CholeskyOuterProduct.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#CholeskyOuterProduct.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.is_constant_jacobian` {#CholeskyOuterProduct.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.name` {#CholeskyOuterProduct.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.CholeskyOuterProduct.validate_args` {#CholeskyOuterProduct.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.bijector.SigmoidCentered.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.bijector.SigmoidCentered.md
deleted file mode 100644
index 0d85d1e9e4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.distributions.bijector.SigmoidCentered.md
+++ /dev/null
@@ -1,276 +0,0 @@
-Bijector which computes Y = g(X) = exp([X 0]) / (1 + exp(X)).
-
-Equivalent to: `bijector.SoftmaxCentered(event_ndims=0)`.
-
-See `bijector.SoftmaxCentered` for more details.
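-
-As a quick sketch of the mapping, `forward` sends a scalar x to the pair
-[sigmoid(x), 1 - sigmoid(x)]:
-
-```python
-import tensorflow as tf
-
-b = tf.contrib.distributions.bijector.SigmoidCentered()
-y = b.forward([0.])  # a batch of one scalar
-with tf.Session() as sess:
-    print(sess.run(y))  # approximately [[0.5, 0.5]]
-```
-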
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.__init__(validate_args=False, name='sigmoid_centered')` {#SigmoidCentered.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.dtype` {#SigmoidCentered.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.event_ndims` {#SigmoidCentered.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.forward(x, name='forward')` {#SigmoidCentered.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.forward_event_shape(input_shape)` {#SigmoidCentered.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#SigmoidCentered.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#SigmoidCentered.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian, i.e., log(det(dY/dX))(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.graph_parents` {#SigmoidCentered.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse(y, name='inverse')` {#SigmoidCentered.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#SigmoidCentered.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse_event_shape(output_shape)` {#SigmoidCentered.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#SigmoidCentered.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#SigmoidCentered.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.is_constant_jacobian` {#SigmoidCentered.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.name` {#SigmoidCentered.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SigmoidCentered.validate_args` {#SigmoidCentered.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.framework.assign_from_checkpoint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.framework.assign_from_checkpoint.md
deleted file mode 100644
index 1ab8d563e1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.framework.assign_from_checkpoint.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.contrib.framework.assign_from_checkpoint(model_path, var_list)` {#assign_from_checkpoint}
-
-Creates an operation to assign specific variables from a checkpoint.
-
-##### Args:
-
-
-* <b>`model_path`</b>: The full path to the model checkpoint. To get the latest
-  checkpoint, use `model_path = tf.train.latest_checkpoint(checkpoint_dir)`.
-* <b>`var_list`</b>: A list of (possibly partitioned) `Variable` objects
- or a dictionary mapping names in the checkpoint to the
- corresponding variables or list of variables to initialize
- from that checkpoint value. For partitioned Variables, the
-  name in the checkpoint must be the full variable name, not the
-  name of the partitioned variable, e.g. "my_var" rather than
-  "my_var/part_4". If empty, returns no_op(), {}.
-
-##### Returns:
-
- the restore_op and the feed_dict that need to be run to restore var_list.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the checkpoint specified at `model_path` is missing one of
- the variables in `var_list`.
-
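-A minimal usage sketch; the checkpoint directory is hypothetical and the
-variable list is assumed to come from `get_variables_to_restore()`:
-
-```python
-import tensorflow as tf
-
-var_list = tf.contrib.framework.get_variables_to_restore()
-model_path = tf.train.latest_checkpoint('/tmp/my_model')  # hypothetical dir
-restore_op, feed_dict = tf.contrib.framework.assign_from_checkpoint(
-    model_path, var_list)
-with tf.Session() as sess:
-    sess.run(restore_op, feed_dict=feed_dict)
-```
-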
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.framework.deprecated_args.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.framework.deprecated_args.md
deleted file mode 100644
index 3f81ac9fc1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.framework.deprecated_args.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.contrib.framework.deprecated_args(date, instructions, *deprecated_arg_names_or_tuples)` {#deprecated_args}
-
-Decorator for marking specific function arguments as deprecated.
-
-This decorator logs a deprecation warning whenever the decorated function is
-called with the deprecated argument. It has the following format:
-
- Calling <function> (from <module>) with <arg> is deprecated and will be
- removed after <date>. Instructions for updating:
- <instructions>
-
-<function> will include the class name if it is a method.
-
-It also edits the docstring of the function: ' (deprecated arguments)' is
-appended to the first line of the docstring and a deprecation notice is
-prepended to the rest of the docstring.
-
-##### Args:
-
-
-* <b>`date`</b>: String. The date the function is scheduled to be removed. Must be
- ISO 8601 (YYYY-MM-DD).
-* <b>`instructions`</b>: String. Instructions on how to update code using the
- deprecated function.
-* <b>`*deprecated_arg_names_or_tuples`</b>: String or 2-tuple (String,
-  [ok_vals]). The string is the deprecated argument name.
-  Optionally, an ok-value may be provided; if the user-provided
-  argument equals this value, the warning is suppressed.
-
-##### Returns:
-
- Decorated function or method.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If date is not in ISO 8601 format, instructions are
- empty, the deprecated arguments are not present in the function
- signature, or the second element of a deprecated_tuple is not a
- list.
-
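-A short sketch of decorating a function; the function and date below are
-hypothetical:
-
-```python
-from tensorflow.contrib.framework import deprecated_args
-
-@deprecated_args('2017-06-01', 'Use `keep_prob` instead.', 'dropout')
-def make_layer(keep_prob=0.9, dropout=None):  # hypothetical signature
-    if dropout is not None:  # the deprecated spelling is still honored
-        keep_prob = 1.0 - dropout
-    return keep_prob
-```
-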
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.add_control_inputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.add_control_inputs.md
deleted file mode 100644
index 18aa510a32..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.add_control_inputs.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.graph_editor.add_control_inputs(op, cops)` {#add_control_inputs}
-
-Add the control inputs cops to op.
-
-Warning: this function is directly manipulating the internals of the tf.Graph.
-
-##### Args:
-
-
-* <b>`op`</b>: a tf.Operation to which the control inputs are added.
-* <b>`cops`</b>: an object convertible to a list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if op is not a tf.Operation
-* <b>`ValueError`</b>: if any cop in cops is already a control input of op.
-
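-For illustration, a sketch that makes an op wait on an unrelated op:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-a = tf.constant(1.0)
-b = tf.constant(2.0)
-c = tf.add(a, b)
-gate = tf.no_op(name='gate')         # c will not run until gate has run
-ge.add_control_inputs(c.op, [gate])
-```
-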
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.detach_inputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.detach_inputs.md
deleted file mode 100644
index 56b381f85f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.detach_inputs.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.contrib.graph_editor.detach_inputs(sgv, control_inputs=False)` {#detach_inputs}
-
-Detach the inputs of a subgraph view.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
- Note that sgv is modified in place.
-* <b>`control_inputs`</b>: if True control_inputs are also detached.
-
-##### Returns:
-
- A tuple `(sgv, input_placeholders)` where
- `sgv` is a new subgraph view of the detached subgraph;
- `input_placeholders` is a list of the created input placeholders.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-  the same rules as the function subgraph.make_view.
-
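-A sketch of detaching a small subgraph; the graph below is illustrative:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-a = tf.constant(1.0, name='a')
-b = tf.square(a, name='b')
-sgv = ge.sgv(b.op)  # subgraph view around the op producing `b`
-sgv, input_placeholders = ge.detach_inputs(sgv)
-# `b` now reads from a newly created placeholder instead of `a`.
-```
-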
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.filter_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.filter_ops.md
deleted file mode 100644
index a5a24d54cf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.filter_ops.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.graph_editor.filter_ops(ops, positive_filter)` {#filter_ops}
-
-Get the ops passing the given filter.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of tf.Operation.
-* <b>`positive_filter`</b>: a function deciding whether to keep an operation.
-  If `True` (instead of a function), all the operations are returned.
-
-##### Returns:
-
- A list of selected tf.Operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of tf.Operation.
-
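-For example, keeping only `Placeholder` ops (a minimal sketch):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-g = tf.Graph()
-with g.as_default():
-    tf.placeholder(tf.float32, name='x')
-    tf.constant(1.0, name='c')
-placeholders = ge.filter_ops(g.get_operations(),
-                             lambda op: op.type == 'Placeholder')
-```
-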
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.make_placeholder_from_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.make_placeholder_from_tensor.md
deleted file mode 100644
index d645695577..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.make_placeholder_from_tensor.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.graph_editor.make_placeholder_from_tensor(t, scope=None)` {#make_placeholder_from_tensor}
-
-Create a `tf.placeholder` for the Graph Editor.
-
-Note that the correct graph scope must be set by the calling function.
-
-##### Args:
-
-
-* <b>`t`</b>: a `tf.Tensor` whose name will be used to create the placeholder
- (see function placeholder_name).
-* <b>`scope`</b>: absolute scope within which to create the placeholder. None
- means that the scope of `t` is preserved. `""` means the root scope.
-
-##### Returns:
-
- A newly created `tf.placeholder`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `t` is neither `None` nor a `tf.Tensor`.
-
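-A minimal sketch, assuming the caller sets the graph scope as noted above:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-t = tf.constant([[1.0, 2.0]], name='t')
-with t.graph.as_default():
-    ph = ge.make_placeholder_from_tensor(t)
-# `ph` is a placeholder with the same dtype and shape as `t`.
-```
-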
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.placeholder_name.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.placeholder_name.md
deleted file mode 100644
index 6d7a8facc4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.placeholder_name.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.graph_editor.placeholder_name(t=None, scope=None)` {#placeholder_name}
-
-Create placeholder name for the graph editor.
-
-##### Args:
-
-
-* <b>`t`</b>: optional tensor on which the placeholder operation's name will be
-  based.
-* <b>`scope`</b>: absolute scope with which to prefix the placeholder's name. None
- means that the scope of t is preserved. "" means the root scope.
-
-##### Returns:
-
-  A new placeholder name prefixed by "geph". Note that "geph" stands for
-  Graph Editor PlaceHolder. This convention makes it easy to identify
-  placeholders generated by the Graph Editor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if t is neither None nor a tf.Tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.select_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.select_ops.md
deleted file mode 100644
index 513a7cf472..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.select_ops.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.contrib.graph_editor.select_ops(*args, **kwargs)` {#select_ops}
-
-Helper to select operations.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not) or 2) (array of)
- `tf.Operation`. `tf.Tensor` instances are silently ignored.
-* <b>`**kwargs`</b>: 'graph': `tf.Graph` in which to perform the regex query. This
-  is required when using regex.
-  'positive_filter': an elem is selected only if `positive_filter(elem)` is
-  `True`. This is optional.
-  'restrict_ops_regex': a regular expression is ignored if it doesn't start
-  with the substring "(?#ops)".
-
-##### Returns:
-
- A list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Operation`
- or an (array of) `tf.Tensor` (silently ignored) or a string
- or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected or if a regular
- expression is used without passing a graph as a keyword argument.
-
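-For instance, selecting ops by regex in an explicit graph (a sketch):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-g = tf.Graph()
-with g.as_default():
-    tf.constant(1.0, name='foo/a')
-    tf.constant(2.0, name='bar/b')
-foo_ops = ge.select_ops('foo/.*', graph=g)  # ops whose name matches the regex
-```
-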
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.swap_inputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.swap_inputs.md
deleted file mode 100644
index bd18c89d6b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.swap_inputs.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.graph_editor.swap_inputs(sgv0, sgv1)` {#swap_inputs}
-
-Swap all the inputs of sgv0 and sgv1 (see reroute_inputs).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.swap_ios.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.swap_ios.md
deleted file mode 100644
index aa18c7e0f0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.graph_editor.swap_ios.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.graph_editor.swap_ios(sgv0, sgv1)` {#swap_ios}
-
-Swap the inputs and outputs of sgv1 to sgv0 (see _reroute).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.convolution2d_in_plane.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.convolution2d_in_plane.md
deleted file mode 100644
index ff9c0f77b2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.convolution2d_in_plane.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.contrib.layers.convolution2d_in_plane(*args, **kwargs)` {#convolution2d_in_plane}
-
-Applies the same in-plane convolution to each channel independently.
-
-This is useful for performing various simple channel-independent convolution
-operations such as image gradients:
-
- image = tf.constant(..., shape=(16, 240, 320, 3))
- vert_gradients = layers.conv2d_in_plane(image,
- kernel=[1, -1],
- kernel_size=[2, 1])
- horz_gradients = layers.conv2d_in_plane(image,
- kernel=[1, -1],
- kernel_size=[1, 2])
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D tensor with dimensions [batch_size, height, width, channels].
-* <b>`kernel_size`</b>: A list of length 2 holding the [kernel_height, kernel_width]
-  of the kernel. Can be an int if both values are the same.
-* <b>`stride`</b>: A list of length 2 `[stride_height, stride_width]`.
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: The padding type to use, either 'SAME' or 'VALID'.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None, meaning no normalizer function is applied.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collection per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.l2_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.l2_regularizer.md
deleted file mode 100644
index 60791e81f3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.l2_regularizer.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.contrib.layers.l2_regularizer(scale, scope=None)` {#l2_regularizer}
-
-Returns a function that can be used to apply L2 regularization to weights.
-
-Small values of L2 can help prevent overfitting the training data.
-
-##### Args:
-
-
-* <b>`scale`</b>: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
-* <b>`scope`</b>: An optional scope name.
-
-##### Returns:
-
- A function with signature `l2(weights)` that applies L2 regularization.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If scale is negative or if scale is not a float.
-
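-A minimal sketch of applying the returned function to a weight tensor:
-
-```python
-import tensorflow as tf
-
-weights = tf.constant([[1.0, 2.0], [3.0, 4.0]])
-l2 = tf.contrib.layers.l2_regularizer(scale=0.1)
-penalty = l2(weights)  # 0.1 * tf.nn.l2_loss(weights) = 0.1 * 15.0
-```
-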
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.real_valued_column.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.real_valued_column.md
deleted file mode 100644
index 61b4c76318..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.real_valued_column.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.contrib.layers.real_valued_column(column_name, dimension=1, default_value=None, dtype=tf.float32, normalizer=None)` {#real_valued_column}
-
-Creates a `_RealValuedColumn` for dense numeric data.
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining real valued column name.
-* <b>`dimension`</b>: An integer specifying dimension of the real valued column.
- The default is 1. When dimension is not None, the Tensor representing
- the _RealValuedColumn will have the shape of [batch_size, dimension].
-  A None dimension means the feature column should be treated as variable
-  length and will be parsed as a `SparseTensor`.
-* <b>`default_value`</b>: A single value compatible with dtype or a list of values
- compatible with dtype which the column takes on during tf.Example parsing
- if data is missing. When dimension is not None, a default value of None
- will cause tf.parse_example to fail if an example does not contain this
- column. If a single value is provided, the same value will be applied as
- the default value for every dimension. If a list of values is provided,
- the length of the list should be equal to the value of `dimension`.
-  Only a scalar default value is supported when dimension is not specified.
-* <b>`dtype`</b>: defines the type of values. Default value is tf.float32. Must be a
- non-quantized, real integer or floating point type.
-* <b>`normalizer`</b>: If not None, a function that can be used to normalize the value
- of the real valued column after default_value is applied for parsing.
- Normalizer function takes the input tensor as its argument, and returns
- the output tensor. (e.g. lambda x: (x - 3.0) / 4.2). Note that for
- variable length columns, the normalizer should expect an input_tensor of
- type `SparseTensor`.
-
-##### Returns:
-
- A _RealValuedColumn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if dimension is not an int
-* <b>`ValueError`</b>: if dimension is not a positive integer
-* <b>`TypeError`</b>: if default_value is a list but its length is not equal to the
- value of `dimension`.
-* <b>`TypeError`</b>: if default_value is not compatible with dtype.
-* <b>`ValueError`</b>: if dtype is not convertible to tf.float32.
-
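-Two illustrative columns; the feature names are hypothetical:
-
-```python
-import tensorflow as tf
-
-age = tf.contrib.layers.real_valued_column('age')
-price = tf.contrib.layers.real_valued_column(
-    'price', dimension=3, normalizer=lambda x: (x - 3.0) / 4.2)
-```
-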
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.sparse_column_with_keys.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.sparse_column_with_keys.md
deleted file mode 100644
index b32b62cc28..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.sparse_column_with_keys.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.contrib.layers.sparse_column_with_keys(column_name, keys, default_value=-1, combiner='sum')` {#sparse_column_with_keys}
-
-Creates a _SparseColumn with keys.
-
-Lookup logic is as follows:
-`lookup_id = index_of_feature_in_keys if feature in keys else default_value`
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining sparse column name.
-* <b>`keys`</b>: a string list defining vocabulary.
-* <b>`default_value`</b>: The value to use for out-of-vocabulary feature values.
- Default is -1.
-* <b>`combiner`</b>: A string specifying how to reduce if the sparse column is
- multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum"
- the default. "sqrtn" often achieves good accuracy, in particular with
- bag-of-words columns.
- * "sum": do not normalize features in the column
- * "mean": do l1 normalization on features in the column
- * "sqrtn": do l2 normalization on features in the column
- For more information: `tf.embedding_lookup_sparse`.
-
-##### Returns:
-
- A _SparseColumnKeys with keys configuration.
-
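-For example (the feature name and vocabulary are hypothetical):
-
-```python
-import tensorflow as tf
-
-color = tf.contrib.layers.sparse_column_with_keys(
-    column_name='color', keys=['red', 'green', 'blue'])
-# Out-of-vocabulary values such as "purple" map to default_value (-1).
-```
-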
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.weighted_sparse_column.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.weighted_sparse_column.md
deleted file mode 100644
index 1223ea2d77..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.layers.weighted_sparse_column.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.contrib.layers.weighted_sparse_column(sparse_id_column, weight_column_name, dtype=tf.float32)` {#weighted_sparse_column}
-
-Creates a _SparseColumn by combining sparse_id_column with a weight column.
-
-Example:
-
- ```python
- sparse_feature = sparse_column_with_hash_bucket(column_name="sparse_col",
- hash_bucket_size=1000)
- weighted_feature = weighted_sparse_column(sparse_id_column=sparse_feature,
- weight_column_name="weights_col")
- ```
-
-  This configuration assumes that the input dictionary of the model contains
-  the following two items:
- * (key="sparse_col", value=sparse_tensor) where sparse_tensor is
- a SparseTensor.
- * (key="weights_col", value=weights_tensor) where weights_tensor
- is a SparseTensor.
-  The following are assumed to be true:
- * sparse_tensor.indices = weights_tensor.indices
- * sparse_tensor.dense_shape = weights_tensor.dense_shape
-
-##### Args:
-
-
-* <b>`sparse_id_column`</b>: A `_SparseColumn` which is created by
- `sparse_column_with_*` functions.
-* <b>`weight_column_name`</b>: A string defining a sparse column name which represents
- weight or value of the corresponding sparse id feature.
-* <b>`dtype`</b>: Type of weights, such as `tf.float32`. Only floating and integer
- weights are supported.
-
-##### Returns:
-
- A _WeightedSparseColumn composed of two sparse features: one represents id,
- the other represents weight (value) of the id feature in that example.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if dtype is not convertible to float.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.LinearRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.LinearRegressor.md
deleted file mode 100644
index 2f12a6f277..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.LinearRegressor.md
+++ /dev/null
@@ -1,412 +0,0 @@
-Linear regressor model.
-
-Train a linear regression model to predict label value given observation of
-feature values.
-
-Example:
-
-```python
-sparse_column_a = sparse_column_with_hash_bucket(...)
-sparse_column_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_x_sparse_feature_b = crossed_column(...)
-
-estimator = LinearRegressor(
- feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b])
-
-# Input builders
-def input_fn_train(): # returns x, y
-  ...
-def input_fn_eval(): # returns x, y
- ...
-estimator.fit(input_fn=input_fn_train)
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x)
-```
-
-Input of `fit` and `evaluate` should have the following features,
-  otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`:
- key=weight_column_name, value=a `Tensor`
-* for column in `feature_columns`:
- - if isinstance(column, `SparseColumn`):
- key=column.name, value=a `SparseTensor`
- - if isinstance(column, `WeightedSparseColumn`):
- {key=id column name, value=a `SparseTensor`,
- key=weight column name, value=a `SparseTensor`}
- - if isinstance(column, `RealValuedColumn`):
- key=column.name, value=a `Tensor`
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.__init__(feature_columns, model_dir=None, weight_column_name=None, optimizer=None, gradient_clip_norm=None, enable_centered_bias=False, label_dimension=1, _joint_weights=False, config=None, feature_engineering_fn=None)` {#LinearRegressor.__init__}
-
-Construct a `LinearRegressor` estimator object.
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable containing all the feature columns used by
- the model. All items in the set should be instances of classes derived
- from `FeatureColumn`.
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator
- to continue training a previously saved model.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`optimizer`</b>: An instance of `tf.Optimizer` used to train the model. If
- `None`, will use an Ftrl optimizer.
-* <b>`gradient_clip_norm`</b>: A `float` > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- `tf.clip_by_global_norm` for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`label_dimension`</b>: Number of regression targets per example. This is the
-  size of the last dimension of the labels and logits `Tensor` objects
-  (typically, these have shape `[batch_size, label_dimension]`).
-* <b>`_joint_weights`</b>: If True, use a single (possibly partitioned) variable
-  to store the weights. It's faster, but requires all feature columns to be
-  sparse and to have the 'sum' combiner. Incompatible with SDCAOptimizer.
-* <b>`config`</b>: `RunConfig` object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-
-##### Returns:
-
- A `LinearRegressor` estimator.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.__repr__()` {#LinearRegressor.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.bias_` {#LinearRegressor.bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.config` {#LinearRegressor.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.evaluate(*args, **kwargs)` {#LinearRegressor.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
- `input_fn` or `feed_fn` is provided.
- Or if `metrics` is not `None` or `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#LinearRegressor.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#LinearRegressor.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.fit(*args, **kwargs)` {#LinearRegressor.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.get_params(deep=True)` {#LinearRegressor.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.get_variable_names()` {#LinearRegressor.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.get_variable_value(name)` {#LinearRegressor.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.model_dir` {#LinearRegressor.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.partial_fit(*args, **kwargs)` {#LinearRegressor.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This either can
-implement iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model is taking a long
-time to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
-  returns arrays of features. The training input samples for fitting the
-  model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
-  iterator that returns arrays of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.predict(*args, **kwargs)` {#LinearRegressor.predict}
-
-Returns predictions for given features. (deprecated arguments) (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_scores, or set `outputs` argument.
-
-By default, returns predicted scores. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_scores` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns scores.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
- If `outputs` is set, returns a dict of predictions.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.predict_scores(*args, **kwargs)` {#LinearRegressor.predict_scores}
-
-Returns predicted scores for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.set_params(**params)` {#LinearRegressor.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearRegressor.weights_` {#LinearRegressor.weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.ModelFnOps.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.ModelFnOps.md
deleted file mode 100644
index 85371ec084..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.ModelFnOps.md
+++ /dev/null
@@ -1,135 +0,0 @@
-Ops returned from a model_fn.
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.__getnewargs__()` {#ModelFnOps.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.__getstate__()` {#ModelFnOps.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.__new__(cls, mode, predictions=None, loss=None, train_op=None, eval_metric_ops=None, output_alternatives=None, training_chief_hooks=None, training_hooks=None, scaffold=None)` {#ModelFnOps.__new__}
-
-Creates a validated `ModelFnOps` instance.
-
-For a multi-headed model, the predictions dict here will contain the outputs
-of all of the heads. However: at serving time, requests will be made
-specifically for one or more heads, and the RPCs used for these requests may
-differ by problem type (i.e., regression, classification, other). The
-purpose of the output_alternatives dict is to aid in exporting a SavedModel
-from which such head-specific queries can be served. These
-output_alternatives will be combined with input_alternatives (see
-`saved_model_export_utils`) to produce a set of `SignatureDef`s specifying
-the valid requests that can be served from this model.
-
-For a single-headed model, it is still advisable to provide
-output_alternatives with a single entry, because this is how the problem
-type is communicated for export and serving. If output_alternatives is not
-given, the resulting SavedModel will support only one head of unspecified
-type.
-
-##### Args:
-
-
-* <b>`mode`</b>: One of `ModeKeys`. Specifies if this is training, evaluation or
- prediction.
-* <b>`predictions`</b>: Predictions `Tensor` or dict of `Tensor`.
-* <b>`loss`</b>: Training loss `Tensor`.
-* <b>`train_op`</b>: Op for the training step.
-* <b>`eval_metric_ops`</b>: Dict of metric results keyed by name. The values of the
- dict are the results of calling a metric function, such as `Tensor`.
-* <b>`output_alternatives`</b>: a dict of
- `{submodel_name: (problem_type, {tensor_name: Tensor})}`, where
- `submodel_name` is a submodel identifier that should be consistent
- across the pipeline (here likely taken from the name of each `Head`,
- for models that use them), `problem_type` is a `ProblemType`,
- `tensor_name` is a symbolic name for an output Tensor possibly but not
- necessarily taken from `PredictionKey`, and `Tensor` is the
- corresponding output Tensor itself.
-* <b>`training_chief_hooks`</b>: A list of `SessionRunHook` objects that will be
- run on the chief worker during training.
-* <b>`training_hooks`</b>: A list of `SessionRunHook` objects that will be run on
- all workers during training.
-* <b>`scaffold`</b>: A `tf.train.Scaffold` object that can be used to set
- initialization, saver, and more to be used in training.
-
-##### Returns:
-
- A validated `ModelFnOps` object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If validation fails.
-
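-A sketch of a minimal single-head regression `model_fn` returning
-`ModelFnOps`; the model itself is hypothetical:
-
-```python
-import tensorflow as tf
-
-def model_fn(features, labels, mode):
-    w = tf.get_variable('w', shape=[], initializer=tf.zeros_initializer())
-    predictions = features['x'] * w
-    loss = tf.losses.mean_squared_error(labels, predictions)
-    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
-        loss, global_step=tf.train.get_global_step())
-    return tf.contrib.learn.ModelFnOps(
-        mode=mode, predictions=predictions, loss=loss, train_op=train_op)
-```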
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.__repr__()` {#ModelFnOps.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.eval_metric_ops` {#ModelFnOps.eval_metric_ops}
-
-Alias for field number 3
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.loss` {#ModelFnOps.loss}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.output_alternatives` {#ModelFnOps.output_alternatives}
-
-Alias for field number 4
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.predictions` {#ModelFnOps.predictions}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.scaffold` {#ModelFnOps.scaffold}
-
-Alias for field number 7
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.train_op` {#ModelFnOps.train_op}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.training_chief_hooks` {#ModelFnOps.training_chief_hooks}
-
-Alias for field number 5
-
-
-- - -
-
-#### `tf.contrib.learn.ModelFnOps.training_hooks` {#ModelFnOps.training_hooks}
-
-Alias for field number 6
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_pandas_matrix.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_pandas_matrix.md
deleted file mode 100644
index a5efd4f09b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.extract_pandas_matrix.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.contrib.learn.extract_pandas_matrix(data)` {#extract_pandas_matrix}
-
-Extracts numpy matrix from pandas DataFrame.
-
-##### Args:
-
-
-* <b>`data`</b>: `pandas.DataFrame` containing the data to be extracted.
-
-##### Returns:
-
- A numpy `ndarray` of the DataFrame's values.
-
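-A small sketch:
-
-```python
-import pandas as pd
-import tensorflow as tf
-
-df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
-matrix = tf.contrib.learn.extract_pandas_matrix(df)  # ndarray of shape (2, 2)
-```
-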
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.make_export_strategy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.make_export_strategy.md
deleted file mode 100644
index ab4b3a86c6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.make_export_strategy.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.contrib.learn.make_export_strategy(serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, exports_to_keep=5)` {#make_export_strategy}
-
-Create an ExportStrategy for use with Experiment.
-
-##### Args:
-
-
-* <b>`serving_input_fn`</b>: A function that takes no arguments and returns an
- `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when an
- incoming serving request does not explicitly request a specific head.
- Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`exports_to_keep`</b>: Number of exports to keep. Older exports will be
- garbage-collected. Defaults to 5. Set to None to disable garbage
- collection.
-
-##### Returns:
-
- An ExportStrategy that can be passed to the Experiment constructor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.monitors.replace_monitors_with_hooks.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.monitors.replace_monitors_with_hooks.md
deleted file mode 100644
index de84ecb361..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.monitors.replace_monitors_with_hooks.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.learn.monitors.replace_monitors_with_hooks(monitors_or_hooks, estimator)` {#replace_monitors_with_hooks}
-
-Wraps monitors with a hook.
-
-`Monitor` is deprecated in favor of `SessionRunHook`. If you're using a
-monitor, you can wrap it with a hook using this function. It is recommended
-to implement a hook version of your monitor.
-
-##### Args:
-
-
-* <b>`monitors_or_hooks`</b>: A `list` that may contain both monitors and hooks.
-* <b>`estimator`</b>: An `Estimator` that the monitors will be used with.
-
-##### Returns:
-
- Returns a list of hooks. If there is any monitor in the given list, it is
- replaced by a hook.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.read_batch_record_features.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.read_batch_record_features.md
deleted file mode 100644
index 2a114a25c2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.learn.read_batch_record_features.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.contrib.learn.read_batch_record_features(file_pattern, batch_size, features, randomize_input=True, num_epochs=None, queue_capacity=10000, reader_num_threads=1, name='dequeue_record_examples')` {#read_batch_record_features}
-
-Reads TFRecord files, then queues, batches and parses `Example` protos.
-
-See more detailed description in `read_examples`.
-
-##### Args:
-
-
-* <b>`file_pattern`</b>: List of files or pattern of file paths containing
- `Example` records. See `tf.gfile.Glob` for pattern rules.
-* <b>`batch_size`</b>: An int or scalar `Tensor` specifying the batch size to use.
-* <b>`features`</b>: A `dict` mapping feature keys to `FixedLenFeature` or
- `VarLenFeature` values.
-* <b>`randomize_input`</b>: Whether the input should be randomized.
-* <b>`num_epochs`</b>: Integer specifying the number of times to read through the
- dataset. If None, cycles through the dataset forever. NOTE - If specified,
- creates a variable that must be initialized, so call
- tf.local_variables_initializer() and run the op in a session.
-* <b>`queue_capacity`</b>: Capacity for input queue.
-* <b>`reader_num_threads`</b>: The number of threads to read examples.
-* <b>`name`</b>: Name of resulting op.
-
-##### Returns:
-
-  A dict of `Tensor` or `SparseTensor` objects for each key in `features`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: for invalid inputs.
-
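-A usage sketch; the file pattern and feature spec below are hypothetical:
-
-```python
-import tensorflow as tf
-
-features = tf.contrib.learn.read_batch_record_features(
-    file_pattern='/tmp/data/*.tfrecord',
-    batch_size=32,
-    features={'image': tf.FixedLenFeature([784], tf.float32),
-              'label': tf.FixedLenFeature([1], tf.int64)})
-```
-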
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.attention_decoder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.attention_decoder.md
deleted file mode 100644
index 022ac6fefa..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.attention_decoder.md
+++ /dev/null
@@ -1,60 +0,0 @@
-### `tf.contrib.legacy_seq2seq.attention_decoder(decoder_inputs, initial_state, attention_states, cell, output_size=None, num_heads=1, loop_function=None, dtype=None, scope=None, initial_state_attention=False)` {#attention_decoder}
-
-RNN decoder with attention for the sequence-to-sequence model.
-
-In this context "attention" means that, during decoding, the RNN can look up
-information in the additional tensor attention_states, and it does this by
-focusing on a few entries from the tensor. This model has proven to yield
-especially good results in a number of sequence-to-sequence tasks. This
-implementation is based on http://arxiv.org/abs/1412.7449 (see below for
-details). It is recommended for complex sequence-to-sequence tasks.
-
-##### Args:
-
-
-* <b>`decoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`initial_state`</b>: 2D Tensor [batch_size x cell.state_size].
-* <b>`attention_states`</b>: 3D Tensor [batch_size x attn_length x attn_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`output_size`</b>: Size of the output vectors; if None, we use cell.output_size.
-* <b>`num_heads`</b>: Number of attention heads that read from attention_states.
-* <b>`loop_function`</b>: If not None, this function will be applied to the i-th
-  output in order to generate the (i+1)-th input, and decoder_inputs will be
-  ignored, except for the first element ("GO" symbol). This can be used for
-  decoding, but also for training to emulate http://arxiv.org/abs/1506.03099.
-  Signature -- loop_function(prev, i) = next
-    * prev is a 2D Tensor of shape [batch_size x output_size],
-    * i is an integer, the step number (when advanced control is needed),
-    * next is a 2D Tensor of shape [batch_size x input_size].
-* <b>`dtype`</b>: The dtype to use for the RNN initial state (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; default: "attention_decoder".
-* <b>`initial_state_attention`</b>: If False (default), initial attentions are zero.
- If True, initialize the attentions from the initial state and attention
- states -- useful when we wish to resume decoding from a previously
- stored decoder state and attention states.
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors of
-  shape [batch_size x output_size]. These represent the generated outputs.
-  Output i is computed from input i (which is either the i-th element
-  of decoder_inputs or loop_function(output_{i-1}, i)) as follows.
-  First, we run the cell on a combination of the input and previous
-  attention masks:
-    cell_output, new_state = cell(linear(input, prev_attn), prev_state).
-  Then, we calculate new attention masks:
-    new_attn = softmax(V^T * tanh(W * attention_states + U * new_state))
-  and then we calculate the output:
-    output = linear(cell_output, new_attn).
-* <b>`state`</b>: The state of each decoder cell at the final time-step.
-  It is a 2D Tensor of shape [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: when num_heads is not positive, there are no inputs, shapes
- of attention_states are not set, or input size cannot be inferred
- from the input.
-
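-As a rough sketch (the cell choice and shapes below are illustrative
-assumptions, not part of this API):
-
-```python
-import tensorflow as tf
-
-batch_size, input_size = 32, 10
-attn_length, attn_size = 7, 20
-cell = tf.contrib.rnn.GRUCell(24)
-decoder_inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
-                  for _ in range(5)]
-initial_state = cell.zero_state(batch_size, tf.float32)
-attention_states = tf.placeholder(tf.float32,
-                                  [batch_size, attn_length, attn_size])
-outputs, state = tf.contrib.legacy_seq2seq.attention_decoder(
-    decoder_inputs, initial_state, attention_states, cell)
-# outputs: list of 5 Tensors, each [batch_size, cell.output_size]
-```
-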
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.embedding_rnn_decoder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.embedding_rnn_decoder.md
deleted file mode 100644
index 11a5e81298..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.embedding_rnn_decoder.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.contrib.legacy_seq2seq.embedding_rnn_decoder(decoder_inputs, initial_state, cell, num_symbols, embedding_size, output_projection=None, feed_previous=False, update_embedding_for_previous=True, scope=None)` {#embedding_rnn_decoder}
-
-RNN decoder with embedding and a pure-decoding option.
-
-##### Args:
-
-
-* <b>`decoder_inputs`</b>: A list of 1D batch-sized int32 Tensors (decoder inputs).
-* <b>`initial_state`</b>: 2D Tensor [batch_size x cell.state_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function.
-* <b>`num_symbols`</b>: Integer, how many symbols come into the embedding.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
-  biases; W has shape [output_size x num_symbols] and B has
-  shape [num_symbols]; if provided and feed_previous=True, each fed
-  previous output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean; if True, only the first of decoder_inputs will be
- used (the "GO" symbol), and all other decoder inputs will be generated by:
- next = embedding_lookup(embedding, argmax(previous_output)),
- In effect, this implements a greedy decoder. It can also be used
- during training to emulate http://arxiv.org/abs/1506.03099.
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`update_embedding_for_previous`</b>: Boolean; if False and feed_previous=True,
- only the embedding for the first symbol of decoder_inputs (the "GO"
- symbol) will be updated by back propagation. Embeddings for the symbols
- generated from the decoder itself remain unchanged. This parameter has
- no effect if feed_previous=False.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_rnn_decoder".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors. The
- output is of shape [batch_size x cell.output_size] when
- output_projection is not None (and represents the dense representation
- of predicted tokens). It is of shape [batch_size x num_decoder_symbols]
- when output_projection is None.
-* <b>`state`</b>: The state of each decoder cell in each time-step. This is a list
- with length len(decoder_inputs) -- one item for each time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When output_projection has the wrong shape.
-
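-A minimal greedy-decoding sketch (vocabulary size, cell, and shapes are
-illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-batch_size, num_symbols, embedding_size = 32, 1000, 16
-cell = tf.contrib.rnn.GRUCell(24)
-decoder_inputs = [tf.placeholder(tf.int32, [batch_size]) for _ in range(5)]
-initial_state = cell.zero_state(batch_size, tf.float32)
-outputs, state = tf.contrib.legacy_seq2seq.embedding_rnn_decoder(
-    decoder_inputs, initial_state, cell, num_symbols, embedding_size,
-    feed_previous=True)  # greedy decoding from the "GO" symbol onward
-```
-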
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq.md
deleted file mode 100644
index 5c69dbee40..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, num_encoder_symbols, num_decoder_symbols, embedding_size, output_projection=None, feed_previous=False, dtype=None, scope=None)` {#embedding_rnn_seq2seq}
-
-Embedding RNN sequence-to-sequence model.
-
-This model first embeds encoder_inputs by a newly created embedding (of shape
-[num_encoder_symbols x input_size]). Then it runs an RNN to encode
-embedded encoder_inputs into a state vector. Next, it embeds decoder_inputs
-by another newly created embedding (of shape [num_decoder_symbols x
-input_size]). Then it runs an RNN decoder, initialized with the last
-encoder state, on embedded decoder_inputs.
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`decoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`num_encoder_symbols`</b>: Integer; number of symbols on the encoder side.
-* <b>`num_decoder_symbols`</b>: Integer; number of symbols on the decoder side.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
-  biases; W has shape [output_size x num_decoder_symbols] and B has
-  shape [num_decoder_symbols]; if provided and feed_previous=True, each
-  fed previous output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean or scalar Boolean Tensor; if True, only the first
- of decoder_inputs will be used (the "GO" symbol), and all other decoder
- inputs will be taken from previous outputs (as in embedding_rnn_decoder).
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`dtype`</b>: The dtype of the initial state for both the encoder and decoder
-  rnn cells (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
-  "embedding_rnn_seq2seq".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors. The
- output is of shape [batch_size x cell.output_size] when
- output_projection is not None (and represents the dense representation
- of predicted tokens). It is of shape [batch_size x num_decoder_symbols]
- when output_projection is None.
-* <b>`state`</b>: The state of each decoder cell in each time-step. This is a list
- with length len(decoder_inputs) -- one item for each time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
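-A minimal sketch of wiring this model up (all sizes are illustrative
-assumptions):
-
-```python
-import tensorflow as tf
-
-batch_size = 32
-cell = tf.contrib.rnn.GRUCell(24)
-encoder_inputs = [tf.placeholder(tf.int32, [batch_size]) for _ in range(8)]
-decoder_inputs = [tf.placeholder(tf.int32, [batch_size]) for _ in range(5)]
-outputs, state = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
-    encoder_inputs, decoder_inputs, cell,
-    num_encoder_symbols=1000, num_decoder_symbols=1000, embedding_size=16)
-```
-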
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.sequence_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.sequence_loss.md
deleted file mode 100644
index c0beb1541e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.legacy_seq2seq.sequence_loss.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.contrib.legacy_seq2seq.sequence_loss(logits, targets, weights, average_across_timesteps=True, average_across_batch=True, softmax_loss_function=None, name=None)` {#sequence_loss}
-
-Weighted cross-entropy loss for a sequence of logits, batch-collapsed.
-
-##### Args:
-
-
-* <b>`logits`</b>: List of 2D Tensors of shape [batch_size x num_decoder_symbols].
-* <b>`targets`</b>: List of 1D batch-sized int32 Tensors of the same length as logits.
-* <b>`weights`</b>: List of 1D batch-sized float-Tensors of the same length as logits.
-* <b>`average_across_timesteps`</b>: If set, divide the returned cost by the total
- label weight.
-* <b>`average_across_batch`</b>: If set, divide the returned cost by the batch size.
-* <b>`softmax_loss_function`</b>: Function (inputs-batch, labels-batch) -> loss-batch
- to be used instead of the standard softmax (the default if this is None).
-* <b>`name`</b>: Optional name for this operation, defaults to "sequence_loss".
-
-##### Returns:
-
- A scalar float Tensor: The average log-perplexity per symbol (weighted).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If len(logits) is different from len(targets) or len(weights).
-
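-A minimal sketch of computing this loss (sizes are illustrative assumptions;
-uniform weights are used for simplicity):
-
-```python
-import tensorflow as tf
-
-batch_size, num_symbols, num_steps = 32, 1000, 5
-logits = [tf.placeholder(tf.float32, [batch_size, num_symbols])
-          for _ in range(num_steps)]
-targets = [tf.placeholder(tf.int32, [batch_size]) for _ in range(num_steps)]
-weights = [tf.ones([batch_size]) for _ in range(num_steps)]
-loss = tf.contrib.legacy_seq2seq.sequence_loss(logits, targets, weights)
-# loss: scalar average log-perplexity per symbol
-```
-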
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.linalg.LinearOperatorDiag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.linalg.LinearOperatorDiag.md
deleted file mode 100644
index f4796c04c1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.linalg.LinearOperatorDiag.md
+++ /dev/null
@@ -1,532 +0,0 @@
-`LinearOperator` acting like a [batch] square diagonal matrix.
-
-This operator acts like a [batch] diagonal matrix `A` with shape
-`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-an `N x N` matrix. This matrix `A` is not materialized, but for
-purposes of broadcasting this shape will be relevant.
-
-`LinearOperatorDiag` is initialized with a (batch) vector.
-
-```python
-# Create a 2 x 2 diagonal linear operator.
-diag = [1., -1.]
-operator = LinearOperatorDiag(diag)
-
-operator.to_dense()
-==> [[1., 0.]
- [0., -1.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_determinant()
-==> scalar Tensor
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor
-
-# Create a [2, 3] batch of 4 x 4 linear operators.
-diag = tf.random_normal(shape=[2, 3, 4])
-operator = LinearOperatorDiag(diag)
-
-# Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible
-# since the batch dimensions, [2, 1], are broadcast to
-# operator.batch_shape = [2, 3].
-y = tf.random_normal(shape=[2, 1, 4, 2])
-x = operator.solve(y)
-==> operator.apply(x) = y
-```
-
-#### Shape compatibility
-
-This operator acts on a [batch] matrix with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [N, N], with b >= 0
-x.shape = [C1,...,Cc] + [N, R],
-and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
-```
-
-#### Performance
-
-Suppose `operator` is a `LinearOperatorDiag` of shape `[N, N]`,
-and `x.shape = [N, R]`. Then
-
-* `operator.apply(x)` involves `N * R` multiplications.
-* `operator.solve(x)` involves `N` divisions and `N * R` multiplications.
-* `operator.determinant()` involves a size `N` `reduce_prod`.
-
-If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and
-`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
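-
-For instance, a caller might pass the hints explicitly when constructing a
-positive-definite diagonal operator (a sketch, not from the original text;
-the values are illustrative):
-
-```python
-diag = [1., 2., 3.]  # strictly positive, so the operator is positive definite
-operator = LinearOperatorDiag(
-    diag,
-    is_non_singular=True,
-    is_self_adjoint=True,  # real diagonal, so this must be True
-    is_positive_definite=True)
-```
-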
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.__init__(diag, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, name='LinearOperatorDiag')` {#LinearOperatorDiag.__init__}
-
-Initialize a `LinearOperatorDiag`.
-
-##### Args:
-
-
-* <b>`diag`</b>: Shape `[B1,...,Bb, N]` `Tensor` with `b >= 0`, `N >= 0`.
- The diagonal of the operator. Allowed dtypes: `float32`, `float64`,
- `complex64`, `complex128`.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose. If `diag.dtype` is real, this is auto-set to `True`.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
-  meaning the real part of all eigenvalues is positive. We do not require
-  the operator to be self-adjoint to be positive-definite. See:
-  https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `diag.dtype` is not an allowed type.
-* <b>`ValueError`</b>: If `diag.dtype` is real, and `is_self_adjoint` is not `True`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorDiag.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.apply(x, adjoint=False, name='apply')` {#LinearOperatorDiag.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.assert_non_singular(name='assert_non_singular')` {#LinearOperatorDiag.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorDiag.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorDiag.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.batch_shape` {#LinearOperatorDiag.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorDiag.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.determinant(name='det')` {#LinearOperatorDiag.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.diag` {#LinearOperatorDiag.diag}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.diag_part(name='diag_part')` {#LinearOperatorDiag.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```python
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.domain_dimension` {#LinearOperatorDiag.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorDiag.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.dtype` {#LinearOperatorDiag.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.graph_parents` {#LinearOperatorDiag.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.is_non_singular` {#LinearOperatorDiag.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.is_positive_definite` {#LinearOperatorDiag.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.is_self_adjoint` {#LinearOperatorDiag.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.is_square` {#LinearOperatorDiag.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.log_abs_determinant(name='log_abs_det')` {#LinearOperatorDiag.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.name` {#LinearOperatorDiag.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.range_dimension` {#LinearOperatorDiag.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorDiag.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.shape` {#LinearOperatorDiag.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.shape_tensor(name='shape_tensor')` {#LinearOperatorDiag.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorDiag.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.tensor_rank` {#LinearOperatorDiag.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorDiag.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorDiag.to_dense(name='to_dense')` {#LinearOperatorDiag.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.losses.get_losses.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.losses.get_losses.md
deleted file mode 100644
index da8e3ed5bb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.losses.get_losses.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.losses.get_losses(*args, **kwargs)` {#get_losses}
-
-Gets the list of losses from the loss_collection. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.get_losses instead.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the losses to return.
-* <b>`loss_collection`</b>: Optional losses collection.
-
-##### Returns:
-
- a list of loss tensors.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.set_size.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.set_size.md
deleted file mode 100644
index 0a33afb229..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.set_size.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.contrib.metrics.set_size(a, validate_indices=True)` {#set_size}
-
-Compute number of unique elements along last dimension of `a`.
-
-##### Args:
-
-
-* <b>`a`</b>: `SparseTensor`, with indices sorted in row-major order.
-* <b>`validate_indices`</b>: Whether to validate the order and range of sparse indices
- in `a`.
-
-##### Returns:
-
- `int32` `Tensor` of set sizes. For `a` ranked `n`, this is a `Tensor` with
- rank `n-1`, and the same 1st `n-1` dimensions as `a`. Each value is the
- number of unique elements in the corresponding `[0...n-1]` dimension of `a`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `a` is an invalid type.
-
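-A small sketch (the values are illustrative): rows of the `SparseTensor` are
-treated as sets, and the op counts the unique values in each.
-
-```python
-import tensorflow as tf
-
-# Row 0 holds the set {1, 2}; row 1 holds the set {1}.
-a = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0]],
-                    values=[1, 2, 1],
-                    dense_shape=[2, 2])
-sizes = tf.contrib.metrics.set_size(a)
-with tf.Session() as sess:
-    print(sess.run(sizes))  # ==> [2 1]
-```
-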
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_mean_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_mean_tensor.md
deleted file mode 100644
index dbaf38c5cc..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.metrics.streaming_mean_tensor.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.contrib.metrics.streaming_mean_tensor(values, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_tensor}
-
-Computes the element-wise (weighted) mean of the given tensors.
-
-In contrast to the `streaming_mean` function which returns a scalar with the
-mean, this function returns an average tensor with the same shape as the
-input tensors.
-
-The `streaming_mean_tensor` function creates two local variables,
-`total_tensor` and `count_tensor` that are used to compute the average of
-`values`. This average is ultimately returned as `mean`, an idempotent
-operation that simply divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `mean`.
-`update_op` increments `total` with the reduced sum of the product of `values`
-and `weights`, and it increments `count` with the reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`values`</b>: A `Tensor` of arbitrary dimensions.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `values`, and
- must be broadcastable to `values` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `values` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `mean`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op`
- should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean`</b>: A float `Tensor` representing the current mean, the value of `total`
- divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `mean`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match `values`,
- or if either `metrics_collections` or `updates_collections` are not a list
- or tuple.
-
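-A minimal sketch of the two-op usage pattern (shapes and values are
-illustrative):
-
-```python
-import tensorflow as tf
-
-values = tf.placeholder(tf.float32, [2])
-mean, update_op = tf.contrib.metrics.streaming_mean_tensor(values)
-
-with tf.Session() as sess:
-    sess.run(tf.local_variables_initializer())  # total/count are local vars
-    sess.run(update_op, feed_dict={values: [0., 2.]})
-    sess.run(update_op, feed_dict={values: [4., 6.]})
-    print(sess.run(mean))  # ==> [2. 4.]
-```
-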
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.AttentionCellWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.AttentionCellWrapper.md
deleted file mode 100644
index 607aea1f1d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.AttentionCellWrapper.md
+++ /dev/null
@@ -1,77 +0,0 @@
-Basic attention cell wrapper.
-
-Implementation based on https://arxiv.org/abs/1409.0473.
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.__call__(inputs, state, scope=None)` {#AttentionCellWrapper.__call__}
-
-Long short-term memory cell with attention (LSTMA).
-
-
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.__init__(cell, attn_length, attn_size=None, attn_vec_size=None, input_size=None, state_is_tuple=False)` {#AttentionCellWrapper.__init__}
-
-Create a cell with attention.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, an attention is added to it.
-* <b>`attn_length`</b>: integer, the size of an attention window.
-* <b>`attn_size`</b>: integer, the size of an attention vector. Equal to
- cell.output_size by default.
-* <b>`attn_vec_size`</b>: integer, the number of convolutional features calculated
-  on attention state and the size of the hidden layer built from the
-  base cell state. Equal to attn_size by default.
-* <b>`input_size`</b>: integer, the size of a hidden linear layer,
- built from inputs and attention. Derived from the input tensor
- by default.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are n-tuples.
-  By default (False), the states are all concatenated along the
-  column axis.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-* <b>`ValueError`</b>: if cell returns a state tuple but the flag
- `state_is_tuple` is `False` or if attn_length is zero or less.
-
-
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.output_size` {#AttentionCellWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.state_size` {#AttentionCellWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.AttentionCellWrapper.zero_state(batch_size, dtype)` {#AttentionCellWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
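-A minimal construction sketch (the wrapped cell and sizes are illustrative
-assumptions):
-
-```python
-import tensorflow as tf
-
-lstm = tf.contrib.rnn.BasicLSTMCell(64, state_is_tuple=True)
-cell = tf.contrib.rnn.AttentionCellWrapper(
-    lstm, attn_length=10, state_is_tuple=True)
-```
-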
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.MultiRNNCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.MultiRNNCell.md
deleted file mode 100644
index 47c1855010..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.MultiRNNCell.md
+++ /dev/null
@@ -1,66 +0,0 @@
-RNN cell composed sequentially of multiple simple cells.
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.__call__(inputs, state, scope=None)` {#MultiRNNCell.__call__}
-
-Run this multi-layer cell on inputs, starting from state.
-
-
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.__init__(cells, state_is_tuple=True)` {#MultiRNNCell.__init__}
-
-Create a RNN cell composed sequentially of a number of RNNCells.
-
-##### Args:
-
-
-* <b>`cells`</b>: list of RNNCells that will be composed in this order.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are n-tuples, where
- `n = len(cells)`. If False, the states are all
- concatenated along the column axis. This latter behavior will soon be
- deprecated.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if cells is empty (not allowed), or at least one of the cells
- returns a state tuple but the flag `state_is_tuple` is `False`.
-
-
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.output_size` {#MultiRNNCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.state_size` {#MultiRNNCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.MultiRNNCell.zero_state(batch_size, dtype)` {#MultiRNNCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
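-A minimal construction sketch (layer count and cell type are illustrative;
-note that each layer gets its own cell instance):
-
-```python
-import tensorflow as tf
-
-cells = [tf.contrib.rnn.BasicLSTMCell(64, state_is_tuple=True)
-         for _ in range(3)]
-stacked = tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=True)
-# stacked.state_size is a 3-tuple, one entry per layer
-```
-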
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.OutputProjectionWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.OutputProjectionWrapper.md
deleted file mode 100644
index 87e1024613..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.OutputProjectionWrapper.md
+++ /dev/null
@@ -1,68 +0,0 @@
-Operator adding an output projection to the given cell.
-
-Note: in many cases it may be more efficient to not use this wrapper,
-but instead concatenate the whole sequence of your outputs in time,
-do the projection on this batch-concatenated sequence, then split it
-if needed or directly feed into a softmax.
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.__call__(inputs, state, scope=None)` {#OutputProjectionWrapper.__call__}
-
-Run the cell and output projection on inputs, starting from state.
-
-
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.__init__(cell, output_size)` {#OutputProjectionWrapper.__init__}
-
-Create a cell with output projection.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, a projection to output_size is added to it.
-* <b>`output_size`</b>: integer, the size of the output after projection.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-* <b>`ValueError`</b>: if output_size is not positive.
-
-
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.output_size` {#OutputProjectionWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.state_size` {#OutputProjectionWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.OutputProjectionWrapper.zero_state(batch_size, dtype)` {#OutputProjectionWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
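-A minimal construction sketch (cell type and sizes are illustrative
-assumptions):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.OutputProjectionWrapper(
-    tf.contrib.rnn.GRUCell(128), output_size=10)
-# cell.output_size == 10; each output is linearly projected to 10 units
-```
-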
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.stack_bidirectional_dynamic_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.stack_bidirectional_dynamic_rnn.md
deleted file mode 100644
index 3ced8eb13f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.rnn.stack_bidirectional_dynamic_rnn.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.contrib.rnn.stack_bidirectional_dynamic_rnn(cells_fw, cells_bw, inputs, initial_states_fw=None, initial_states_bw=None, dtype=None, sequence_length=None, scope=None)` {#stack_bidirectional_dynamic_rnn}
-
-Creates a dynamic bidirectional recurrent neural network.
-
-Stacks several bidirectional rnn layers. The combined forward and backward
-layer outputs are used as the input to the next layer. In contrast,
-tf.bidirectional_rnn does not allow sharing of forward and backward
-information between layers. The input_size of the first forward and
-backward cells must match.
-The initial state for both directions is zero and no intermediate states
-are returned.
-
-##### Args:
-
-
-* <b>`cells_fw`</b>: List of instances of RNNCell, one per layer,
- to be used for forward direction.
-* <b>`cells_bw`</b>: List of instances of RNNCell, one per layer,
- to be used for backward direction.
-* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
- [batch_size, input_size], or a nested tuple of such elements.
-* <b>`initial_states_fw`</b>: (optional) A list of the initial states (one per layer)
- for the forward RNN.
- Each tensor must have an appropriate type and shape
- `[batch_size, cell_fw.state_size]`.
-* <b>`initial_states_bw`</b>: (optional) Same as for `initial_states_fw`, but using
- the corresponding properties of `cells_bw`.
-* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
- either of the initial states are not provided.
-* <b>`sequence_length`</b>: (optional) An int32/int64 vector, size `[batch_size]`,
- containing the actual lengths for each of the sequences.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to None.
-
-##### Returns:
-
- A tuple (outputs, output_state_fw, output_state_bw) where:
-
-* <b>`outputs`</b>: Output `Tensor` shaped
-  `[batch_size, max_time, layers_output]`, where layers_output
-  are the depth-concatenated forward and backward outputs.
-* <b>`output_states_fw`</b>: The final states of the forward rnn, one tensor
-  per layer.
-* <b>`output_states_bw`</b>: The final states of the backward rnn, one tensor
-  per layer.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If any cell in `cells_fw` or `cells_bw` is not an instance
-  of `RNNCell`.
-* <b>`ValueError`</b>: If inputs is `None`, not a list or an empty list.
-
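-A minimal sketch, assuming the dynamic variant takes a single
-`[batch_size, max_time, input_size]` tensor as `inputs` (consistent with the
-`outputs` shape above); all sizes are illustrative:
-
-```python
-import tensorflow as tf
-
-batch_size, max_time, input_size = 32, 20, 50
-inputs = tf.placeholder(tf.float32, [batch_size, max_time, input_size])
-cells_fw = [tf.contrib.rnn.GRUCell(64) for _ in range(2)]
-cells_bw = [tf.contrib.rnn.GRUCell(64) for _ in range(2)]
-outputs, states_fw, states_bw = (
-    tf.contrib.rnn.stack_bidirectional_dynamic_rnn(
-        cells_fw, cells_bw, inputs, dtype=tf.float32))
-# outputs: [batch_size, max_time, 128], forward/backward depth-concatenated
-```
-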
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.training.weighted_resample.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.training.weighted_resample.md
deleted file mode 100644
index 903cad838b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.training.weighted_resample.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.contrib.training.weighted_resample(inputs, weights, overall_rate, scope=None, mean_decay=0.999, seed=None)` {#weighted_resample}
-
-Performs an approximate weighted resampling of `inputs`.
-
-This method chooses elements from `inputs` where each item's rate of
-selection is proportional to its value in `weights`, and the average
-rate of selection across all inputs (and many invocations!) is
-`overall_rate`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of tensors whose first dimension is `batch_size`.
-* <b>`weights`</b>: A `[batch_size]`-shaped tensor with each batch member's weight.
-* <b>`overall_rate`</b>: Desired overall rate of resampling.
-* <b>`scope`</b>: Scope to use for the op.
-* <b>`mean_decay`</b>: How quickly to decay the running estimate of the mean weight.
-* <b>`seed`</b>: Random seed.
-
-##### Returns:
-
- A list of tensors exactly like `inputs`, but with an unknown (and
- possibly zero) first dimension.
- A tensor containing the effective resampling rate used for each output.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.util.ops_used_by_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.util.ops_used_by_graph_def.md
deleted file mode 100644
index 38a9cc4f43..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.contrib.util.ops_used_by_graph_def.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.contrib.util.ops_used_by_graph_def(graph_def)` {#ops_used_by_graph_def}
-
-Collect the list of ops used by a graph.
-
-Does not validate that the ops are all registered.
-
-##### Args:
-
-
-* <b>`graph_def`</b>: A `GraphDef` proto, as from `graph.as_graph_def()`.
-
-##### Returns:
-
- A list of strings, each naming an op used by the graph.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.convert_to_tensor_or_sparse_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.convert_to_tensor_or_sparse_tensor.md
deleted file mode 100644
index 1999e71180..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.convert_to_tensor_or_sparse_tensor.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.convert_to_tensor_or_sparse_tensor(value, dtype=None, name=None)` {#convert_to_tensor_or_sparse_tensor}
-
-Converts value to a `SparseTensor` or `Tensor`.
-
-##### Args:
-
-
-* <b>`value`</b>: A `SparseTensor`, `SparseTensorValue`, or an object whose type has a
- registered `Tensor` conversion function.
-* <b>`dtype`</b>: Optional element type for the returned tensor. If missing, the
- type is inferred from the type of `value`.
-* <b>`name`</b>: Optional name to use if a new `Tensor` is created.
-
-##### Returns:
-
- A `SparseTensor` or `Tensor` based on `value`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If result type is incompatible with `dtype`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cumprod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cumprod.md
deleted file mode 100644
index 0275374f03..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.cumprod.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.cumprod(x, axis=0, exclusive=False, reverse=False, name=None)` {#cumprod}
-
-Compute the cumulative product of the tensor `x` along `axis`.
-
-By default, this op performs an inclusive cumprod, which means that the
-first element of the input is identical to the first element of the output:
-```prettyprint
-tf.cumprod([a, b, c]) ==> [a, a * b, a * b * c]
-```
-
-By setting the `exclusive` kwarg to `True`, an exclusive cumprod is
-performed instead:
-```prettyprint
-tf.cumprod([a, b, c], exclusive=True) ==> [1, a, a * b]
-```
-
-By setting the `reverse` kwarg to `True`, the cumprod is performed in the
-opposite direction:
-```prettyprint
-tf.cumprod([a, b, c], reverse=True) ==> [a * b * c, b * c, c]
-```
-This is more efficient than using separate `tf.reverse` ops.
-
-The `reverse` and `exclusive` kwargs can also be combined:
-```prettyprint
-tf.cumprod([a, b, c], exclusive=True, reverse=True) ==> [b * c, c, 1]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`,
- `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
- `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`axis`</b>: A `Tensor` of type `int32` (default: 0).
-* <b>`exclusive`</b>: A `bool` (default: False).
-* <b>`reverse`</b>: A `bool` (default: False).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
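-Concrete values make the four modes easy to compare:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1., 2., 3.])
-tf.cumprod(x)                                # ==> [1., 2., 6.]
-tf.cumprod(x, exclusive=True)                # ==> [1., 1., 2.]
-tf.cumprod(x, reverse=True)                  # ==> [6., 6., 3.]
-tf.cumprod(x, exclusive=True, reverse=True)  # ==> [6., 3., 1.]
-```
-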
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.delete_session_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.delete_session_tensor.md
deleted file mode 100644
index 7a43e917ce..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.delete_session_tensor.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.delete_session_tensor(handle, name=None)` {#delete_session_tensor}
-
-Delete the tensor for the given tensor handle.
-
-This is EXPERIMENTAL and subject to change.
-
-Delete the tensor of a given tensor handle. The tensor is produced
-in a previous run() and stored in the state of the session.
-
-##### Args:
-
-
-* <b>`handle`</b>: The string representation of a persistent tensor handle.
-* <b>`name`</b>: Optional name prefix for the return tensor.
-
-##### Returns:
-
- A pair of graph elements. The first is a placeholder for feeding a
- tensor handle and the second is a deletion operation.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.depth_to_space.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.depth_to_space.md
deleted file mode 100644
index 03dc6bb3b0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.depth_to_space.md
+++ /dev/null
@@ -1,95 +0,0 @@
-### `tf.depth_to_space(input, block_size, name=None)` {#depth_to_space}
-
-DepthToSpace for tensors of type T.
-
-Rearranges data from depth into blocks of spatial data.
-This is the reverse transformation of SpaceToDepth. More specifically,
-this op outputs a copy of the input tensor where values from the `depth`
-dimension are moved in spatial blocks to the `height` and `width` dimensions.
-The attr `block_size` indicates the input block size and how the data is moved.
-
- * Chunks of data of size `block_size * block_size` from depth are rearranged
- into non-overlapping blocks of size `block_size x block_size`
- * The width of the output tensor is `input_width * block_size`, whereas the
-   height is `input_height * block_size`.
- * The depth of the input tensor must be divisible by
- `block_size * block_size`.
-
-That is, assuming the input is in the shape:
-`[batch, height, width, depth]`,
-the shape of the output will be:
-`[batch, height*block_size, width*block_size, depth/(block_size*block_size)]`
-
-This operation requires that the input tensor be of rank 4, and that
-`block_size` be >= 2 and that `block_size * block_size` be a divisor of the
-input depth.
-
-This operation is useful for resizing the activations between convolutions
-(but keeping all data), e.g. instead of pooling. It is also useful for training
-purely convolutional models.
-
-For example, given this input of shape `[1, 1, 1, 4]`, and a block size of 2:
-
-```prettyprint
-x = [[[[1, 2, 3, 4]]]]
-
-```
-
-This operation will output a tensor of shape `[1, 2, 2, 1]`:
-
-```prettyprint
- [[[[1], [2]],
- [[3], [4]]]]
-```
-
-Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`,
-the corresponding output will have 2x2 elements and will have a depth of
-1 channel (1 = `4 / (block_size * block_size)`).
-The output element shape is `[2, 2, 1]`.
-
-For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.
-
-```prettyprint
-x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
-```
-
-This operation, for block size of 2, will return the following tensor of shape
-`[1, 2, 2, 3]`
-
-```prettyprint
- [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
-
-```
-
-Similarly, for the following input of shape `[1, 2, 2, 4]`, and a block size of 2:
-
-```prettyprint
-x = [[[[1, 2, 3, 4],
- [5, 6, 7, 8]],
- [[9, 10, 11, 12],
- [13, 14, 15, 16]]]]
-```
-
-the operator will return the following tensor of shape `[1, 4, 4, 1]`:
-
-```prettyprint
-x = [[ [1], [2], [5], [6]],
- [ [3], [4], [7], [8]],
- [ [9], [10], [13], [14]],
- [ [11], [12], [15], [16]]]
-
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`block_size`</b>: An `int` that is `>= 2`.
- The size of the spatial block, same as in Space2Depth.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
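-The first example above, as a runnable sketch:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[[[1, 2, 3, 4]]]])     # shape [1, 1, 1, 4]
-y = tf.depth_to_space(x, block_size=2)  # shape [1, 2, 2, 1]
-with tf.Session() as sess:
-    print(sess.run(y))  # ==> [[[[1], [2]], [[3], [4]]]]
-```
-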
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.device.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.device.md
deleted file mode 100644
index 2a5e33203d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.device.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.device(device_name_or_function)` {#device}
-
-Wrapper for `Graph.device()` using the default graph.
-
-See
-[`Graph.device()`](../../api_docs/python/framework.md#Graph.device)
-for more details.
-
-##### Args:
-
-
-* <b>`device_name_or_function`</b>: The device name or function to use in
- the context.
-
-##### Returns:
-
- A context manager that specifies the default device to use for newly
- created ops.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.CancelledError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.CancelledError.md
deleted file mode 100644
index cf20c0e2e3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.CancelledError.md
+++ /dev/null
@@ -1,17 +0,0 @@
-Raised when an operation or step is cancelled.
-
-For example, a long-running operation (e.g.
-[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue)) may be
-cancelled by running another operation (e.g.
-[`queue.close(cancel_pending_enqueues=True)`](../../api_docs/python/io_ops.md#QueueBase.close)),
-or by [closing the session](../../api_docs/python/client.md#Session.close).
-A step that is running such a long-running operation will fail by raising
-`CancelledError`.
-
-- - -
-
-#### `tf.errors.CancelledError.__init__(node_def, op, message)` {#CancelledError.__init__}
-
-Creates a `CancelledError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.DataLossError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.DataLossError.md
deleted file mode 100644
index 3193e77ae3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.DataLossError.md
+++ /dev/null
@@ -1,13 +0,0 @@
-Raised when unrecoverable data loss or corruption is encountered.
-
-For example, this may be raised by running a
-[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader)
-operation, if the file is truncated while it is being read.
-
-- - -
-
-#### `tf.errors.DataLossError.__init__(node_def, op, message)` {#DataLossError.__init__}
-
-Creates a `DataLossError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.DeadlineExceededError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.DeadlineExceededError.md
deleted file mode 100644
index e8ef3be06e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.errors.DeadlineExceededError.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Raised when a deadline expires before an operation could complete.
-
-This exception is not currently used.
-
-- - -
-
-#### `tf.errors.DeadlineExceededError.__init__(node_def, op, message)` {#DeadlineExceededError.__init__}
-
-Creates a `DeadlineExceededError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fake_quant_with_min_max_vars_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fake_quant_with_min_max_vars_gradient.md
deleted file mode 100644
index b363afe7ce..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fake_quant_with_min_max_vars_gradient.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.fake_quant_with_min_max_vars_gradient(gradients, inputs, min, max, name=None)` {#fake_quant_with_min_max_vars_gradient}
-
-Compute gradients for a FakeQuantWithMinMaxVars operation.
-
-##### Args:
-
-
-* <b>`gradients`</b>: A `Tensor` of type `float32`.
- Backpropagated gradients above the FakeQuantWithMinMaxVars operation.
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
- Values passed as inputs to the FakeQuantWithMinMaxVars operation.
-* <b>`min`</b>: A `Tensor` of type `float32`. The lower bound of the
-  quantization interval, a scalar float.
-* <b>`max`</b>: A `Tensor` of type `float32`. The upper bound of the
-  quantization interval, a scalar float.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).
-
-* <b>`backprops_wrt_input`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. inputs:
- `gradients * (inputs >= min && inputs <= max)`.
-* <b>`backprop_wrt_min`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. min parameter:
- `sum(gradients * (inputs < min))`.
-* <b>`backprop_wrt_max`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. max parameter:
- `sum(gradients * (inputs > max))`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fake_quant_with_min_max_vars_per_channel_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fake_quant_with_min_max_vars_per_channel_gradient.md
deleted file mode 100644
index a7a62e29b3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fake_quant_with_min_max_vars_per_channel_gradient.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.fake_quant_with_min_max_vars_per_channel_gradient(gradients, inputs, min, max, name=None)` {#fake_quant_with_min_max_vars_per_channel_gradient}
-
-Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.
-
-##### Args:
-
-
-* <b>`gradients`</b>: A `Tensor` of type `float32`.
- Backpropagated gradients above the FakeQuantWithMinMaxVars operation,
- shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`.
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
- Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape
- same as `gradients`.
-* <b>`min`</b>: A `Tensor` of type `float32`. The lower bound of the
-  quantization interval, a float of shape `[d]`.
-* <b>`max`</b>: A `Tensor` of type `float32`. The upper bound of the
-  quantization interval, a float of shape `[d]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).
-
-* <b>`backprops_wrt_input`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. inputs, shape same as
- `inputs`:
- `gradients * (inputs >= min && inputs <= max)`.
-* <b>`backprop_wrt_min`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. min parameter, shape `[d]`:
- `sum_per_d(gradients * (inputs < min))`.
-* <b>`backprop_wrt_max`</b>: A `Tensor` of type `float32`. Backpropagated gradients w.r.t. max parameter, shape `[d]`:
- `sum_per_d(gradients * (inputs > max))`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fft.md
deleted file mode 100644
index da37dd4933..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fft.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.fft(input, name=None)` {#fft}
-
-Compute the 1-dimensional discrete Fourier Transform over the inner-most
-dimension of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most
- dimension of `input` is replaced with its 1D Fourier Transform.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fft2d.md
deleted file mode 100644
index 81b83df8bb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.fft2d.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.fft2d(input, name=None)` {#fft2d}
-
-Compute the 2-dimensional discrete Fourier Transform over the inner-most
-2 dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 2
- dimensions of `input` are replaced with their 2D Fourier Transform.
-
- @compatibility(numpy)
- Equivalent to np.fft.fft2
- @end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.floormod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.floormod.md
deleted file mode 100644
index 5ebd691835..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.floormod.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.floormod(x, y, name=None)` {#floormod}
-
-Returns element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
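-
-For example, the sign of the result follows `y`, per the flooring semantics
-above (illustrative values):
-
-```python
-x = tf.constant([7, -7, 7, -7])
-y = tf.constant([3, 3, -3, -3])
-tf.floormod(x, y)  # ==> [1, 2, -2, -1]
-```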
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.get_seed.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.get_seed.md
deleted file mode 100644
index 8fd602784c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.get_seed.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.get_seed(op_seed)` {#get_seed}
-
-Returns the local seeds an operation should use given an op-specific seed.
-
-Given an operation-specific seed, `op_seed`, this helper function returns two
-seeds derived from the graph-level and op-level seeds. Many random operations
-internally use the two seeds to allow the user to change the seed globally for
-a graph, or only for specific operations.
-
-For details on how the graph-level seed interacts with op seeds, see
-@{tf.set_random_seed}.
-
-##### Args:
-
-
-* <b>`op_seed`</b>: integer.
-
-##### Returns:
-
- A tuple of two integers that should be used for the local seed of this
- operation.
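-
-A minimal sketch (the exact pair returned depends on the seeding rules and on
-whether a graph-level seed has been set):
-
-```python
-tf.set_random_seed(1234)   # fix the graph-level seed
-seeds = tf.get_seed(42)    # e.g. (1234, 42) once both levels are set
-```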
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.get_session_handle.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.get_session_handle.md
deleted file mode 100644
index ac3379b93d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.get_session_handle.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.get_session_handle(data, name=None)` {#get_session_handle}
-
-Return the handle of `data`.
-
-This is EXPERIMENTAL and subject to change.
-
-Keep `data` "in-place" in the runtime and create a handle that can be
-used to retrieve `data` in a subsequent run().
-
-Combined with `get_session_tensor`, we can keep a tensor produced in
-one run call in place, and use it as the input in a future run call.
-
-##### Args:
-
-
-* <b>`data`</b>: A tensor to be stored in the session.
-* <b>`name`</b>: Optional name prefix for the return tensor.
-
-##### Returns:
-
- A scalar string tensor representing a unique handle for `data`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `data` is not a Tensor.
-
-
-##### Example:
-
-```python
-# Compute a value and keep it "in place" in the runtime; `h` is its handle.
-c = tf.multiply(a, b)
-h = tf.get_session_handle(c)
-h = sess.run(h)
-
-# In a later run(), feed the handle back in to use the stored value as input.
-p, a = tf.get_session_tensor(h.handle, tf.float32)
-b = tf.multiply(a, 10)
-c = sess.run(b, feed_dict={p: h.handle})
-```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.global_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.global_norm.md
deleted file mode 100644
index d37d4228b2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.global_norm.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.global_norm(t_list, name=None)` {#global_norm}
-
-Computes the global norm of multiple tensors.
-
-Given a tuple or list of tensors `t_list`, this operation returns the
-global norm of the elements in all tensors in `t_list`. The global norm is
-computed as:
-
-`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`
-
-Any entries in `t_list` that are `None` are ignored.
-
-##### Args:
-
-
-* <b>`t_list`</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A 0-D (scalar) `Tensor` of type `float`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `t_list` is not a sequence.
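-
-For example, matching the formula above:
-
-```python
-t1 = tf.constant([3.0, 4.0])
-t2 = tf.constant([[1.0], [2.0]])
-tf.global_norm([t1, t2])  # ==> sqrt(9 + 16 + 1 + 4) = sqrt(30)
-```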
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.ifft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.ifft3d.md
deleted file mode 100644
index 7d106f24a8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.ifft3d.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.ifft3d(input, name=None)` {#ifft3d}
-
-Compute the inverse 3-dimensional discrete Fourier Transform over the
-inner-most 3 dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 3
- dimensions of `input` are replaced with their inverse 3D Fourier Transform.
-
- @compatibility(numpy)
- Equivalent to np.fft.ifftn with 3 dimensions
- @end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.adjust_brightness.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.adjust_brightness.md
deleted file mode 100644
index 7743f0180c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.adjust_brightness.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.image.adjust_brightness(image, delta)` {#adjust_brightness}
-
-Adjust the brightness of RGB or Grayscale images.
-
-This is a convenience method that converts an RGB image to float
-representation, adjusts its brightness, and then converts it back to the
-original data type. If several adjustments are chained it is advisable to
-minimize the number of redundant conversions.
-
-The value `delta` is added to all components of the tensor `image`. Both
-`image` and `delta` are converted to `float` before adding (and `image` is
-scaled appropriately if it is in fixed-point representation). For regular
-images, `delta` should be in the range `[0,1)`, as it is added to the image in
-floating point representation, where pixel values are in the `[0,1)` range.
-
-##### Args:
-
-
-* <b>`image`</b>: A tensor.
-* <b>`delta`</b>: A scalar. Amount to add to the pixel values.
-
-##### Returns:
-
- A brightness-adjusted tensor of the same shape and type as `image`.
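-
-For example, on a float image whose pixel values are already in `[0, 1)`:
-
-```python
-image = tf.constant([[[0.2, 0.4, 0.6]]])      # a 1x1 RGB image
-tf.image.adjust_brightness(image, delta=0.1)  # ==> [[[0.3, 0.5, 0.7]]]
-```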
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.adjust_gamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.adjust_gamma.md
deleted file mode 100644
index 34fdad226b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.adjust_gamma.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.image.adjust_gamma(image, gamma=1, gain=1)` {#adjust_gamma}
-
-Performs Gamma Correction on the input image.
-
-Also known as Power Law Transform. This function transforms the input image
-pixelwise according to the equation `Out = In**gamma` after scaling each
-pixel to the range 0 to 1.
-
-##### Args:
-
-
-* <b>`image`</b>: A Tensor.
-* <b>`gamma`</b>: A scalar. Non-negative real number.
-* <b>`gain`</b>: A scalar. The constant multiplier.
-
-##### Returns:
-
- A Tensor. Gamma-corrected output image.
-
-##### Notes:
-
- For gamma greater than 1, the histogram shifts towards the left and the
- output image is darker than the input image. For gamma less than 1, the
- histogram shifts towards the right and the output image is brighter than
- the input image.
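-
-As a plain NumPy illustration of the power-law rule itself (a sketch of the
-formula, not of the op's internals):
-
-```python
-import numpy as np
-
-gamma, gain = 2.0, 1.0
-pixels = np.array([0.25, 0.5, 1.0])  # already scaled to [0, 1]
-out = gain * pixels ** gamma         # [0.0625, 0.25, 1.0]
-# gamma > 1 pushes values down, so the output image is darker.
-```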
-
-##### References:
-
- [1] http://en.wikipedia.org/wiki/Gamma_correction
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.decode_jpeg.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.decode_jpeg.md
deleted file mode 100644
index 1e3b4912b2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.decode_jpeg.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, dct_method=None, name=None)` {#decode_jpeg}
-
-Decode a JPEG-encoded image to a uint8 tensor.
-
-The attr `channels` indicates the desired number of color channels for the
-decoded image.
-
-Accepted values are:
-
-* 0: Use the number of channels in the JPEG-encoded image.
-* 1: output a grayscale image.
-* 3: output an RGB image.
-
-If needed, the JPEG-encoded image is transformed to match the requested number
-of color channels.
-
-The attr `ratio` allows downscaling the image by an integer factor during
-decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than
-downscaling the image later.
-
-##### Args:
-
-
-* <b>`contents`</b>: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.
-* <b>`channels`</b>: An optional `int`. Defaults to `0`.
- Number of color channels for the decoded image.
-* <b>`ratio`</b>: An optional `int`. Defaults to `1`. Downscaling ratio.
-* <b>`fancy_upscaling`</b>: An optional `bool`. Defaults to `True`.
- If true use a slower but nicer upscaling of the
- chroma planes (yuv420/422 only).
-* <b>`try_recover_truncated`</b>: An optional `bool`. Defaults to `False`.
- If true try to recover an image from truncated input.
-* <b>`acceptable_fraction`</b>: An optional `float`. Defaults to `1`.
- The minimum required fraction of lines before a truncated
- input is accepted.
-* <b>`dct_method`</b>: An optional `string`. Defaults to `""`.
- A hint about the algorithm to use for decompression. Defaults to ""
- which maps to a system-specific default. Currently valid values are
- ["INTEGER_FAST", "INTEGER_ACCURATE"]. The hint may be ignored; e.g.,
- the internal JPEG library may change to a version that does not have
- that specific option.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`.
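-
-A minimal usage sketch (the file path is a placeholder):
-
-```python
-contents = tf.read_file('/path/to/image.jpg')
-image = tf.image.decode_jpeg(contents, channels=3, ratio=2)
-# `image` is a uint8 RGB tensor at half the original height and width.
-```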
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.grayscale_to_rgb.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.grayscale_to_rgb.md
deleted file mode 100644
index 755b66141b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.grayscale_to_rgb.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.image.grayscale_to_rgb(images, name=None)` {#grayscale_to_rgb}
-
-Converts one or more images from Grayscale to RGB.
-
-Outputs a tensor of the same `DType` and rank as `images`. The size of the
-last dimension of the output is 3, containing the RGB value of the pixels.
-
-##### Args:
-
-
-* <b>`images`</b>: The Grayscale tensor to convert. Last dimension must be size 1.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The converted grayscale image(s).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_brightness.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_brightness.md
deleted file mode 100644
index 6c773b6985..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.random_brightness.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.image.random_brightness(image, max_delta, seed=None)` {#random_brightness}
-
-Adjust the brightness of images by a random factor.
-
-Equivalent to `adjust_brightness()` using a `delta` randomly picked in the
-interval `[-max_delta, max_delta)`.
-
-##### Args:
-
-
-* <b>`image`</b>: An image.
-* <b>`max_delta`</b>: float, must be non-negative.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-
-##### Returns:
-
- The brightness-adjusted image.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `max_delta` is negative.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.rgb_to_grayscale.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.rgb_to_grayscale.md
deleted file mode 100644
index bf9b6846e0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.image.rgb_to_grayscale.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.image.rgb_to_grayscale(images, name=None)` {#rgb_to_grayscale}
-
-Converts one or more images from RGB to Grayscale.
-
-Outputs a tensor of the same `DType` and rank as `images`. The size of the
-last dimension of the output is 1, containing the Grayscale value of the
-pixels.
-
-##### Args:
-
-
-* <b>`images`</b>: The RGB tensor to convert. Last dimension must have size 3 and
- should contain RGB values.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The converted grayscale image(s).
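-
-For example:
-
-```python
-rgb = tf.constant([[[1.0, 1.0, 1.0]]])  # a 1x1 RGB image
-gray = tf.image.rgb_to_grayscale(rgb)   # shape [1, 1, 1]
-```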
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_finite.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_finite.md
deleted file mode 100644
index 15d5a7df94..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_finite.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.is_finite(x, name=None)` {#is_finite}
-
-Returns which elements of x are finite.
-
-@compatibility(numpy)
-Equivalent to np.isfinite
-@end_compatibility
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
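-
-For example:
-
-```python
-x = tf.constant([1.0, float('inf'), float('nan')])
-tf.is_finite(x)  # ==> [True, False, False]
-```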
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_nan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_nan.md
deleted file mode 100644
index b1fd8de13c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_nan.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.is_nan(x, name=None)` {#is_nan}
-
-Returns which elements of x are NaN.
-
-@compatibility(numpy)
-Equivalent to np.isnan
-@end_compatibility
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_non_decreasing.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_non_decreasing.md
deleted file mode 100644
index f10ff932c0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_non_decreasing.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.is_non_decreasing(x, name=None)` {#is_non_decreasing}
-
-Returns `True` if `x` is non-decreasing.
-
-Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
-is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`.
-If `x` has fewer than two elements, it is trivially non-decreasing.
-
-See also: `is_strictly_increasing`
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "is_non_decreasing"
-
-##### Returns:
-
- Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `x` is not a numeric tensor.
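-
-For example:
-
-```python
-tf.is_non_decreasing([1, 1, 2])  # ==> True
-tf.is_non_decreasing([3, 1, 2])  # ==> False
-```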
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_numeric_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_numeric_tensor.md
deleted file mode 100644
index c2e61b856d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.is_numeric_tensor.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.is_numeric_tensor(tensor)` {#is_numeric_tensor}
-
-Returns `True` if `tensor` is a `Tensor` with a numeric dtype.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.mod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.mod.md
deleted file mode 100644
index 3f928e7a61..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.mod.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.mod(x, y, name=None)` {#mod}
-
-Returns element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.name_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.name_scope.md
deleted file mode 100644
index f888ca22cd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.name_scope.md
+++ /dev/null
@@ -1,39 +0,0 @@
-### `tf.name_scope(name, default_name=None, values=None)` {#name_scope}
-
-Returns a context manager for use when defining a Python op.
-
-This context manager validates that the given `values` are from the
-same graph, makes that graph the default graph, and pushes a
-name scope in that graph (see
-[`Graph.name_scope()`](../../api_docs/python/framework.md#Graph.name_scope)
-for more details on that).
-
-For example, to define a new Python op called `my_op`:
-
-```python
-def my_op(a, b, c, name=None):
- with tf.name_scope(name, "MyOp", [a, b, c]) as scope:
- a = tf.convert_to_tensor(a, name="a")
- b = tf.convert_to_tensor(b, name="b")
- c = tf.convert_to_tensor(c, name="c")
- # Define some computation that uses `a`, `b`, and `c`.
- return foo_op(..., name=scope)
-```
-
-##### Args:
-
-
-* <b>`name`</b>: The name argument that is passed to the op function.
-* <b>`default_name`</b>: The default name to use if the `name` argument is `None`.
-* <b>`values`</b>: The list of `Tensor` arguments that are passed to the op function.
-
-##### Returns:
-
- A context manager for use in defining Python ops. Yields the name scope.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if neither `name` nor `default_name` is provided
- but `values` are.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.avg_pool3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.avg_pool3d.md
deleted file mode 100644
index 5bb4dcf68f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.avg_pool3d.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.nn.avg_pool3d(input, ksize, strides, padding, name=None)` {#avg_pool3d}
-
-Performs 3D average pooling on the input.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
-* <b>`ksize`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The size of the window for each dimension of
- the input tensor. Must have `ksize[0] = ksize[4] = 1`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The stride of the sliding window for each
- dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- The average pooled output tensor.
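-
-A shape-only sketch (illustrative values):
-
-```python
-x = tf.random_normal([1, 4, 4, 4, 1])  # [batch, depth, rows, cols, channels]
-y = tf.nn.avg_pool3d(x, ksize=[1, 2, 2, 2, 1],
-                     strides=[1, 2, 2, 2, 1], padding='VALID')
-# y has shape [1, 2, 2, 2, 1]; each value averages a 2x2x2 block.
-```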
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.conv2d_backprop_filter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.conv2d_backprop_filter.md
deleted file mode 100644
index 27c2da89df..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.conv2d_backprop_filter.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.nn.conv2d_backprop_filter(input, filter_sizes, out_backprop, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv2d_backprop_filter}
-
-Computes the gradients of convolution with respect to the filter.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
- 4-D with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`filter_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the tensor shape of `filter`,
- where `filter` is a 4-D
- `[filter_height, filter_width, in_channels, out_channels]` tensor.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `input`.
- 4-D with shape `[batch, out_height, out_width, out_channels]`.
- Gradients w.r.t. the output of the convolution.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- of the convolution. Must be in the same order as the dimensions
- specified with `data_format`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`use_cudnn_on_gpu`</b>: An optional `bool`. Defaults to `True`.
-* <b>`data_format`</b>: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`.
- Specify the data format of the input and output data. With the
- default format "NHWC", the data is stored in the order of:
- [batch, in_height, in_width, in_channels].
- Alternatively, the format could be "NCHW", the data storage order of:
- [batch, in_channels, in_height, in_width].
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. 4-D with shape
- `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t.
- the `filter` input of the convolution.
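-
-A shape-only sketch (illustrative values):
-
-```python
-x = tf.random_normal([1, 5, 5, 3])   # input images
-dy = tf.random_normal([1, 5, 5, 8])  # gradients w.r.t. the convolution output
-dw = tf.nn.conv2d_backprop_filter(x, filter_sizes=[3, 3, 3, 8],
-                                  out_backprop=dy, strides=[1, 1, 1, 1],
-                                  padding='SAME')
-# dw has shape [3, 3, 3, 8], matching `filter_sizes`.
-```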
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.ctc_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.ctc_loss.md
deleted file mode 100644
index 128808ff36..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.ctc_loss.md
+++ /dev/null
@@ -1,103 +0,0 @@
-### `tf.nn.ctc_loss(labels, inputs, sequence_length, preprocess_collapse_repeated=False, ctc_merge_repeated=True, time_major=True)` {#ctc_loss}
-
-Computes the CTC (Connectionist Temporal Classification) Loss.
-
-This op implements the CTC loss as presented in the article:
-
-A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber.
-Connectionist Temporal Classification: Labelling Unsegmented Sequence Data
-with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.
-
-http://www.cs.toronto.edu/~graves/icml_2006.pdf
-
-Input requirements:
-
-```
-sequence_length(b) <= time for all b
-
-max(labels.indices(labels.indices[:, 1] == b, 2))
- <= sequence_length(b) for all b.
-```
-
-Notes:
-
-This class performs the softmax operation for you, so inputs should
-be e.g. linear projections of outputs by an LSTM.
-
-The `inputs` Tensor's innermost dimension size, `num_classes`, represents
-`num_labels + 1` classes, where num_labels is the number of true labels, and
-the largest value `(num_classes - 1)` is reserved for the blank label.
-
-For example, for a vocabulary containing 3 labels `[a, b, c]`,
-`num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.
-
-Regarding the arguments `preprocess_collapse_repeated` and
-`ctc_merge_repeated`:
-
-If `preprocess_collapse_repeated` is True, then a preprocessing step runs
-before loss calculation, wherein repeated labels passed to the loss
-are merged into single labels. This is useful if the training labels come
-from, e.g., forced alignments and therefore have unnecessary repetitions.
-
-If `ctc_merge_repeated` is set False, then deep within the CTC calculation,
-repeated non-blank labels will not be merged and are interpreted
-as individual labels. This is a simplified (non-standard) version of CTC.
-
-Here is a table of the (roughly) expected first order behavior:
-
-* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`
-
- Classical CTC behavior: Outputs true repeated classes with blanks in
- between, and can also output repeated classes with no blanks in
- between that need to be collapsed by the decoder.
-
-* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`
-
- Never learns to output repeated classes, as they are collapsed
- in the input labels before training.
-
-* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`
-
- Outputs repeated classes with blanks in between, but generally does not
- require the decoder to collapse/merge repeated classes.
-
-* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`
-
- Untested. Very likely will not learn to output repeated classes.
-
-##### Args:
-
-
-* <b>`labels`</b>: An `int32` `SparseTensor`.
- `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores
- the id for (batch b, time t).
- `labels.values[i]` must take on values in `[0, num_labels)`.
- See `core/ops/ctc_ops.cc` for more details.
-* <b>`inputs`</b>: 3-D `float` `Tensor`.
- If time_major == False, this will be a `Tensor` shaped:
- `[batch_size x max_time x num_classes]`.
- If time_major == True (default), this will be a `Tensor` shaped:
- `[max_time x batch_size x num_classes]`.
- The logits.
-* <b>`sequence_length`</b>: 1-D `int32` vector, size `[batch_size]`.
- The sequence lengths.
-* <b>`preprocess_collapse_repeated`</b>: Boolean. Default: False.
- If True, repeated labels are collapsed prior to the CTC calculation.
-* <b>`ctc_merge_repeated`</b>: Boolean. Default: True.
-* <b>`time_major`</b>: The shape format of the `inputs` Tensors.
- If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`.
- If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`.
- Using `time_major = True` (default) is a bit more efficient because it avoids
- transposes at the beginning of the ctc_loss calculation. However, most
- TensorFlow data is batch-major, so this function also accepts inputs
- in batch-major form.
-
-##### Returns:
-
- A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if labels is not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.l2_normalize.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.l2_normalize.md
deleted file mode 100644
index 57b617a331..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.nn.l2_normalize.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)` {#l2_normalize}
-
-Normalizes along dimension `dim` using an L2 norm.
-
-For a 1-D tensor with `dim = 0`, computes
-
- output = x / sqrt(max(sum(x**2), epsilon))
-
-For `x` with more dimensions, independently normalizes each 1-D slice along
-dimension `dim`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`.
-* <b>`dim`</b>: Dimension along which to normalize. A scalar or a vector of
- integers.
-* <b>`epsilon`</b>: A lower bound value for the norm. Will use `sqrt(epsilon)` as the
- divisor if `norm < sqrt(epsilon)`.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same shape as `x`.
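-
-For example:
-
-```python
-x = tf.constant([3.0, 4.0])
-tf.nn.l2_normalize(x, dim=0)  # ==> [0.6, 0.8], since ||x|| = 5
-```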
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.norm.md
deleted file mode 100644
index f91766f656..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.norm.md
+++ /dev/null
@@ -1,66 +0,0 @@
-### `tf.norm(tensor, ord='euclidean', axis=None, keep_dims=False, name=None)` {#norm}
-
-Computes the norm of vectors, matrices, and tensors.
-
-This function can compute 3 different matrix norms (Frobenius, 1-norm, and
-inf-norm) and vector norms for any positive real order `p`.
-
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`
-* <b>`ord`</b>: Order of the norm. Supported values are 'fro', 'euclidean', `0`,
- `1`, `2`, `np.inf` and any positive real number yielding the corresponding
- p-norm. Default is 'euclidean', which is equivalent to the Frobenius norm
- if `tensor` is a matrix and to the 2-norm for vectors.
- Some restrictions apply,
- a) The Frobenius norm `fro` is not defined for vectors,
- b) If axis is a 2-tuple (matrix-norm), only 'euclidean', 'fro', `1`,
- `np.inf` are supported.
- See the description of `axis` on how to compute norms for a batch of
- vectors or matrices stored in a tensor.
-* <b>`axis`</b>: If `axis` is `None` (the default), the input is considered a vector
- and a single vector norm is computed over the entire set of values in the
- tensor, i.e. `norm(tensor, ord=ord)` is equivalent to
- `norm(reshape(tensor, [-1]), ord=ord)`.
- If `axis` is a Python integer, the input is considered a batch of vectors,
- and `axis` determines the axis in `tensor` over which to compute vector
- norms.
- If `axis` is a 2-tuple of Python integers it is considered a batch of
- matrices and `axis` determines the axes in `tensor` over which to compute
- a matrix norm.
- Negative indices are supported. Example: If you are passing a tensor that
- can be either a matrix or a batch of matrices at runtime, pass
- `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are
- computed.
-* <b>`keep_dims`</b>: If True, the axes indicated in `axis` are kept with size 1.
- Otherwise, the dimensions in `axis` are removed from the output shape.
-* <b>`name`</b>: The name of the op.
-
-##### Returns:
-
-
-* <b>`output`</b>: A `Tensor` of the same type as tensor, containing the vector or
- matrix norms. If `keep_dims` is True then the rank of output is equal to
- the rank of `tensor`. Otherwise, if `axis` is `None` the output is a scalar,
- if `axis` is an integer, the rank of `output` is one less than the rank
- of `tensor`, if `axis` is a 2-tuple the rank of `output` is two less
- than the rank of `tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `ord` or `axis` is invalid.
-
-@compatibility(numpy)
-Mostly equivalent to numpy.linalg.norm.
-Not supported: ord <= 0, 2-norm for matrices, nuclear norm.
-
-##### Other differences:
-
- a) If axis is `None`, treats the flattened `tensor` as a vector
- regardless of rank.
- b) Explicitly supports 'euclidean' norm as the default, including for
- higher order tensors.
-@end_compatibility
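-
-A sketch of the common cases (assuming `numpy` is imported as `np`):
-
-```python
-m = tf.constant([[3.0, 4.0], [6.0, 8.0]])
-tf.norm(m)                             # ==> sqrt(125), the Frobenius norm
-tf.norm(m, axis=1)                     # ==> [5.0, 10.0], per-row 2-norms
-tf.norm(m, ord=np.inf, axis=[-2, -1])  # ==> 14.0, max absolute row sum
-```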
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.not_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.not_equal.md
deleted file mode 100644
index 5ed8df49d5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.not_equal.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.not_equal(x, y, name=None)` {#not_equal}
-
-Returns the truth value of (x != y) element-wise.
-
-*NOTE*: `NotEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
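-
-For example, with broadcasting:
-
-```python
-x = tf.constant([1, 2, 3])
-y = tf.constant([2])
-tf.not_equal(x, y)  # ==> [True, False, True]
-```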
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.orthogonal_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.orthogonal_initializer.md
deleted file mode 100644
index 02f8528a6f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.orthogonal_initializer.md
+++ /dev/null
@@ -1,31 +0,0 @@
-Initializer that generates an orthogonal matrix.
-
-If the shape of the tensor to initialize is two-dimensional, it is initialized
-with an orthogonal matrix obtained from the singular value decomposition of a
-matrix of uniform random numbers.
-
-If the shape of the tensor to initialize is more than two-dimensional,
-a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])`
-is initialized, where `n` is the length of the shape vector.
-The matrix is subsequently reshaped to give a tensor of the desired shape.
-
-Args:
- gain: multiplicative factor to apply to the orthogonal matrix
- dtype: The type of the output.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
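-
-A minimal usage sketch (the variable name is illustrative):
-
-```python
-init = tf.orthogonal_initializer(gain=1.0)
-w = tf.get_variable('w', shape=[4, 4], initializer=init)
-# For a square shape, `w` is orthogonal: tf.matmul(w, w, transpose_a=True)
-# is approximately the identity.
-```
-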
-- - -
-
-#### `tf.orthogonal_initializer.__call__(shape, dtype=None, partition_info=None)` {#orthogonal_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.orthogonal_initializer.__init__(gain=1.0, dtype=tf.float32, seed=None)` {#orthogonal_initializer.__init__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.py_func.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.py_func.md
deleted file mode 100644
index 97e9df1308..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.py_func.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.py_func(func, inp, Tout, stateful=True, name=None)` {#py_func}
-
-Wraps a python function and uses it as a TensorFlow op.
-
-Given a python function `func`, which takes numpy arrays as its
-inputs and returns numpy arrays as its outputs, wrap this function as an
-operation in a TensorFlow graph. The following snippet constructs a simple
-TensorFlow graph that invokes the `np.sinh()` NumPy function as an operation
-in the graph:
-
-```python
-def my_func(x):
- # x will be a numpy array with the contents of the placeholder below
- return np.sinh(x)
-inp = tf.placeholder(tf.float32)
-y = tf.py_func(my_func, [inp], tf.float32)
-```
-
-**N.B.** The `tf.py_func()` operation has the following known limitations:
-
-* The body of the function (i.e. `func`) will not be serialized in a
- `GraphDef`. Therefore, you should not use this function if you need to
- serialize your model and restore it in a different environment.
-
-* The operation must run in the same address space as the Python program
- that calls `tf.py_func()`. If you are using distributed TensorFlow, you
- must run a `tf.train.Server` in the same process as the program that calls
- `tf.py_func()` and you must pin the created operation to a device in that
- server (e.g. using `with tf.device():`).
-
-##### Args:
-
-
-* <b>`func`</b>: A Python function, which accepts a list of NumPy `ndarray` objects
- having element types that match the corresponding `tf.Tensor` objects
- in `inp`, and returns a list of `ndarray` objects (or a single `ndarray`)
- having element types that match the corresponding values in `Tout`.
-* <b>`inp`</b>: A list of `Tensor` objects.
-* <b>`Tout`</b>: A list or tuple of tensorflow data types or a single tensorflow data
- type if there is only one, indicating what `func` returns.
-* <b>`stateful`</b>: (Boolean.) If True, the function should be considered stateful.
- If a function is stateless, when given the same input it will return the
- same output and have no observable side effects. Optimizations such as
- common subexpression elimination are only performed on stateless
- operations.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A list of `Tensor` or a single `Tensor` which `func` computes.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.quantize_v2.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.quantize_v2.md
deleted file mode 100644
index a02df53efe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.quantize_v2.md
+++ /dev/null
@@ -1,74 +0,0 @@
-### `tf.quantize_v2(input, min_range, max_range, T, mode=None, name=None)` {#quantize_v2}
-
-Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.
-
-[min_range, max_range] are scalar floats that specify the range for
-the 'input' data. The 'mode' attribute controls exactly which calculations are
-used to convert the float values to their quantized equivalents.
-
-In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
-
-```
-out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
-if T == qint8, out[i] -= (range(T) + 1) / 2.0
-```
-here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`
-
-*MIN_COMBINED Mode Example*
-
-Assume the input is type float and has a possible range of [0.0, 6.0] and the
-output type is quint8 ([0, 255]). The min_range and max_range values should be
-specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each
-value of the input by 255/6 and cast to quint8.
-
-If the output type was qint8 ([-128, 127]), the operation will additionally
-subtract each value by 128 prior to casting, so that the range of values aligns
-with the range of qint8.
-
-If the mode is 'MIN_FIRST', then this approach is used:
-
-```
-number_of_steps = 1 << (# of bits in T)
-range_adjust = number_of_steps / (number_of_steps - 1)
-range = (range_max - range_min) * range_adjust
-range_scale = number_of_steps / range
-quantized = round(input * range_scale) - round(range_min * range_scale) +
- numeric_limits<T>::min()
-quantized = max(quantized, numeric_limits<T>::min())
-quantized = min(quantized, numeric_limits<T>::max())
-```
-
-The biggest difference between this and MIN_COMBINED is that the minimum range
-is rounded first, before it's subtracted from the rounded value. With
-MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing
-and dequantizing will introduce a larger and larger error.
-
-One thing to watch out for is that the operator may choose to adjust the
-requested minimum and maximum values slightly during the quantization process,
-so you should always use the output ports as the range for further calculations.
-For example, if the requested minimum and maximum values are close to equal,
-they will be separated by a small epsilon value to prevent ill-formed quantized
-buffers from being created. Otherwise, you can end up with buffers where all the
-quantized values map to the same float value, which causes problems for
-operations that have to perform further calculations on them.
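-
-A NumPy sketch of the MIN_COMBINED arithmetic above (the formula, not the
-kernel itself):
-
-```python
-import numpy as np
-
-min_range, max_range = 0.0, 6.0
-x = np.array([0.0, 1.5, 3.0, 6.0], dtype=np.float32)
-scale = 255.0 / (max_range - min_range)  # range(quint8) = 255
-q = np.round((x - min_range) * scale).astype(np.uint8)
-# q == [0, 64, 128, 255]; for qint8, subtract 128 before the cast.
-```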
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `float32`.
-* <b>`min_range`</b>: A `Tensor` of type `float32`.
- The minimum scalar value possibly produced for the input.
-* <b>`max_range`</b>: A `Tensor` of type `float32`.
- The maximum scalar value possibly produced for the input.
-* <b>`T`</b>: A `tf.DType` from: `tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`.
-* <b>`mode`</b>: An optional `string` from: `"MIN_COMBINED", "MIN_FIRST"`. Defaults to `"MIN_COMBINED"`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, output_min, output_max).
-
-* <b>`output`</b>: A `Tensor` of type `T`. The quantized data produced from the float input.
-* <b>`output_min`</b>: A `Tensor` of type `float32`. The actual minimum scalar value used for the output.
-* <b>`output_max`</b>: A `Tensor` of type `float32`. The actual maximum scalar value used for the output.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_sum.md
deleted file mode 100644
index 3da82a8cb7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reduce_sum.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.reduce_sum(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_sum}
-
-Computes the sum of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-For example:
-
-```python
-# 'x' is [[1, 1, 1]
-# [1, 1, 1]]
-tf.reduce_sum(x) ==> 6
-tf.reduce_sum(x, 0) ==> [2, 2, 2]
-tf.reduce_sum(x, 1) ==> [3, 3]
-tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
-tf.reduce_sum(x, [0, 1]) ==> 6
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.sum
-@end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reshape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reshape.md
deleted file mode 100644
index 05de3a2779..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reshape.md
+++ /dev/null
@@ -1,73 +0,0 @@
-### `tf.reshape(tensor, shape, name=None)` {#reshape}
-
-Reshapes a tensor.
-
-Given `tensor`, this operation returns a tensor that has the same values
-as `tensor` with shape `shape`.
-
-If one component of `shape` is the special value -1, the size of that dimension
-is computed so that the total size remains constant. In particular, a `shape`
-of `[-1]` flattens into 1-D. At most one component of `shape` can be -1.
-
-If `shape` is 1-D or higher, then the operation returns a tensor with shape
-`shape` filled with the values of `tensor`. In this case, the number of elements
-implied by `shape` must be the same as the number of elements in `tensor`.
-
-For example:
-
-```prettyprint
-# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
-# tensor 't' has shape [9]
-reshape(t, [3, 3]) ==> [[1, 2, 3],
- [4, 5, 6],
- [7, 8, 9]]
-
-# tensor 't' is [[[1, 1], [2, 2]],
-# [[3, 3], [4, 4]]]
-# tensor 't' has shape [2, 2, 2]
-reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
- [3, 3, 4, 4]]
-
-# tensor 't' is [[[1, 1, 1],
-# [2, 2, 2]],
-# [[3, 3, 3],
-# [4, 4, 4]],
-# [[5, 5, 5],
-# [6, 6, 6]]]
-# tensor 't' has shape [3, 2, 3]
-# pass '[-1]' to flatten 't'
-reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]
-
-# -1 can also be used to infer the shape
-
-# -1 is inferred to be 9:
-reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
- [4, 4, 4, 5, 5, 5, 6, 6, 6]]
-# -1 is inferred to be 2:
-reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
- [4, 4, 4, 5, 5, 5, 6, 6, 6]]
-# -1 is inferred to be 3:
-reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1],
- [2, 2, 2],
- [3, 3, 3]],
- [[4, 4, 4],
- [5, 5, 5],
- [6, 6, 6]]]
-
-# tensor 't' is [7]
-# shape `[]` reshapes to a scalar
-reshape(t, []) ==> 7
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`.
-* <b>`shape`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- Defines the shape of the output tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reverse_sequence.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reverse_sequence.md
deleted file mode 100644
index c6e8c748bf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.reverse_sequence.md
+++ /dev/null
@@ -1,76 +0,0 @@
-### `tf.reverse_sequence(input, seq_lengths, seq_axis=None, batch_axis=None, name=None, seq_dim=None, batch_dim=None)` {#reverse_sequence}
-
-Reverses variable length slices.
-
-This op first slices `input` along the dimension `batch_axis`, and for each
-slice `i`, reverses the first `seq_lengths[i]` elements along
-the dimension `seq_axis`.
-
-The elements of `seq_lengths` must obey
-`seq_lengths[i] <= input.dims[seq_axis]`, and `seq_lengths` must be a vector
-of length `input.dims[batch_axis]`.
-
-The output slice `i` along dimension `batch_axis` is then given by input
-slice `i`, with the first `seq_lengths[i]` slices along dimension
-`seq_axis` reversed.
-
-For example:
-
-```prettyprint
-# Given this:
-batch_dim = 0
-seq_dim = 1
-input.dims = (4, 8, ...)
-seq_lengths = [7, 2, 3, 5]
-
-# then slices of input are reversed on seq_dim, but only up to seq_lengths:
-output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
-output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
-output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
-output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]
-
-# while entries past seq_lens are copied through:
-output[0, 7:, :, ...] = input[0, 7:, :, ...]
-output[1, 2:, :, ...] = input[1, 2:, :, ...]
-output[2, 3:, :, ...] = input[2, 3:, :, ...]
-output[3, 5:, :, ...] = input[3, 5:, :, ...]
-```
-
-In contrast, if:
-
-```prettyprint
-# Given this:
-batch_dim = 2
-seq_dim = 0
-input.dims = (8, ?, 4, ...)
-seq_lengths = [7, 2, 3, 5]
-
-# then slices of input are reversed on seq_dim, but only up to seq_lengths:
-output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
-output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
-output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
-output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]
-
-# while entries past seq_lens are copied through:
-output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
-output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
-output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
-output[5:, :, 3, :, ...] = input[5:, :, 3, :, ...]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. The input to reverse.
-* <b>`seq_lengths`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D with length `input.dims(batch_dim)` and
- `max(seq_lengths) <= input.dims(seq_dim)`
-* <b>`seq_axis`</b>: An `int`. The dimension which is partially reversed.
-* <b>`batch_axis`</b>: An optional `int`. Defaults to `0`.
- The dimension along which reversal is performed.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- The partially reversed input. It has the same shape as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.segment_min.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.segment_min.md
deleted file mode 100644
index 5cacf2cf72..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.segment_min.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.segment_min(data, segment_ids, name=None)` {#segment_min}
-
-Computes the minimum along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Computes a tensor such that
-\\(output_i = \min_j(data_j)\\) where `min` is over `j` such
-that `segment_ids[j] == i`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentMin.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose rank is equal to the rank of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
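-
-For example:
-
-```python
-data = tf.constant([5, 1, 7, 2, 3, 4])
-ids = tf.constant([0, 0, 0, 1, 1, 1])
-tf.segment_min(data, ids)  # ==> [1, 2]
-```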
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_tensor_to_dense.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_tensor_to_dense.md
deleted file mode 100644
index 6269665d08..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sparse_tensor_to_dense.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.sparse_tensor_to_dense(sp_input, default_value=0, validate_indices=True, name=None)` {#sparse_tensor_to_dense}
-
-Converts a `SparseTensor` into a dense tensor.
-
-This op is a convenience wrapper around `sparse_to_dense` for `SparseTensor`s.
-
-For example, if `sp_input` has shape `[3, 5]` and non-empty string values:
-
- [0, 1]: a
- [0, 3]: b
- [2, 0]: c
-
-and `default_value` is `x`, then the output will be a dense `[3, 5]`
-string tensor with values:
-
- [[x a x b x]
- [x x x x x]
- [c x x x x]]
-
-Indices must not contain repeats. This is checked only if
-`validate_indices` is `True`.
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`default_value`</b>: Scalar value to set for indices not specified in
- `sp_input`. Defaults to zero.
-* <b>`validate_indices`</b>: A boolean value. If `True`, indices are checked to make
- sure they are sorted in lexicographic order and that there are no repeats.
-* <b>`name`</b>: A name prefix for the returned tensors (optional).
-
-##### Returns:
-
- A dense tensor with shape `sp_input.dense_shape` and values specified by
- the non-empty values in `sp_input`. Indices not in `sp_input` are assigned
- `default_value`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
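-
-A sketch of the example above:
-
-```python
-sp_input = tf.SparseTensor(indices=[[0, 1], [0, 3], [2, 0]],
-                           values=['a', 'b', 'c'],
-                           dense_shape=[3, 5])
-dense = tf.sparse_tensor_to_dense(sp_input, default_value='x')
-# dense == [['x', 'a', 'x', 'b', 'x'],
-#           ['x', 'x', 'x', 'x', 'x'],
-#           ['c', 'x', 'x', 'x', 'x']]
-```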
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sqrt.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sqrt.md
deleted file mode 100644
index 89daef944d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.sqrt.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.sqrt(x, name=None)` {#sqrt}
-
-Computes square root of x element-wise.
-
-I.e., \\(y = \sqrt{x} = x^{1/2}\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.FileWriterCache.clear.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.FileWriterCache.clear.md
deleted file mode 100644
index e3c7027813..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.FileWriterCache.clear.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.summary.FileWriterCache.clear()` {#FileWriterCache.clear}
-
-Clear cached summary writers. Currently only used for unit tests.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.TaggedRunMetadata.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.TaggedRunMetadata.md
deleted file mode 100644
index 8dc62c4c18..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.TaggedRunMetadata.md
+++ /dev/null
@@ -1,252 +0,0 @@
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ByteSize()` {#TaggedRunMetadata.ByteSize}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.Clear()` {#TaggedRunMetadata.Clear}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ClearExtension(extension_handle)` {#TaggedRunMetadata.ClearExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ClearField(field_name)` {#TaggedRunMetadata.ClearField}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.CopyFrom(other_msg)` {#TaggedRunMetadata.CopyFrom}
-
-Copies the content of the specified message into the current message.
-
-The method clears the current message and then merges the specified
-message using MergeFrom.
-
-##### Args:
-
-
-* <b>`other_msg`</b>: Message to copy into the current one.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.DiscardUnknownFields()` {#TaggedRunMetadata.DiscardUnknownFields}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.FindInitializationErrors()` {#TaggedRunMetadata.FindInitializationErrors}
-
-Finds required fields which are not initialized.
-
-##### Returns:
-
- A list of strings. Each string is a path to an uninitialized field from
- the top-level message, e.g. "foo.bar[5].baz".
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.HasExtension(extension_handle)` {#TaggedRunMetadata.HasExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.HasField(field_name)` {#TaggedRunMetadata.HasField}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.IsInitialized(errors=None)` {#TaggedRunMetadata.IsInitialized}
-
-Checks if all required fields of a message are set.
-
-##### Args:
-
-
-* <b>`errors`</b>: A list which, if provided, will be populated with the field
- paths of all missing required fields.
-
-##### Returns:
-
- True iff the specified message has all required fields set.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ListFields()` {#TaggedRunMetadata.ListFields}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.MergeFrom(msg)` {#TaggedRunMetadata.MergeFrom}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.MergeFromString(serialized)` {#TaggedRunMetadata.MergeFromString}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ParseFromString(serialized)` {#TaggedRunMetadata.ParseFromString}
-
-Parse serialized protocol buffer data into this message.
-
-Like MergeFromString(), except we clear the object first and
-do not return the value that MergeFromString returns.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.SerializePartialToString()` {#TaggedRunMetadata.SerializePartialToString}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.SerializeToString()` {#TaggedRunMetadata.SerializeToString}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.SetInParent()` {#TaggedRunMetadata.SetInParent}
-
-Sets the _cached_byte_size_dirty bit to true,
-and propagates this to our listener iff this was a state change.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.WhichOneof(oneof_name)` {#TaggedRunMetadata.WhichOneof}
-
-Returns the name of the currently set field inside a oneof, or None.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__deepcopy__(memo=None)` {#TaggedRunMetadata.__deepcopy__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__eq__(other)` {#TaggedRunMetadata.__eq__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__getstate__()` {#TaggedRunMetadata.__getstate__}
-
-Support the pickle protocol.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__hash__()` {#TaggedRunMetadata.__hash__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__init__(**kwargs)` {#TaggedRunMetadata.__init__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__ne__(other_msg)` {#TaggedRunMetadata.__ne__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__repr__()` {#TaggedRunMetadata.__repr__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__setstate__(state)` {#TaggedRunMetadata.__setstate__}
-
-Support the pickle protocol.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__str__()` {#TaggedRunMetadata.__str__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__unicode__()` {#TaggedRunMetadata.__unicode__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.run_metadata` {#TaggedRunMetadata.run_metadata}
-
-Magic attribute generated for "run_metadata" proto field.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.tag` {#TaggedRunMetadata.tag}
-
-Magic attribute generated for "tag" proto field.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.merge_all.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.merge_all.md
deleted file mode 100644
index 8f1ff2a277..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.summary.merge_all.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.summary.merge_all(key='summaries')` {#merge_all}
-
-Merges all summaries collected in the default graph.
-
-##### Args:
-
-
-* <b>`key`</b>: `GraphKey` used to collect the summaries. Defaults to
- `GraphKeys.SUMMARIES`.
-
-##### Returns:
-
- If no summaries were collected, returns None. Otherwise returns a scalar
- `Tensor` of type `string` containing the serialized `Summary` protocol
- buffer resulting from the merging.
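-
-A minimal usage sketch (`loss` stands in for any scalar tensor):
-
-```python
-tf.summary.scalar('loss', loss)
-merged = tf.summary.merge_all()  # None if no summaries were collected
-```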
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.ExponentialMovingAverage.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.ExponentialMovingAverage.md
deleted file mode 100644
index b540230fe0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.ExponentialMovingAverage.md
+++ /dev/null
@@ -1,232 +0,0 @@
-Maintains moving averages of variables by employing an exponential decay.
-
-When training a model, it is often beneficial to maintain moving averages of
-the trained parameters. Evaluations that use averaged parameters sometimes
-produce significantly better results than the final trained values.
-
-The `apply()` method adds shadow copies of trained variables and adds ops that
-maintain a moving average of the trained variables in their shadow copies.
-It is used when building the training model. The ops that maintain moving
-averages are typically run after each training step.
-The `average()` and `average_name()` methods give access to the shadow
-variables and their names. They are useful when building an evaluation
-model, or when restoring a model from a checkpoint file. They help use the
-moving averages in place of the last trained values for evaluations.
-
-The moving averages are computed using exponential decay. You specify the
-decay value when creating the `ExponentialMovingAverage` object. The shadow
-variables are initialized with the same initial values as the trained
-variables. When you run the ops to maintain the moving averages, each
-shadow variable is updated with the formula:
-
- `shadow_variable -= (1 - decay) * (shadow_variable - variable)`
-
-This is mathematically equivalent to the classic formula below, but the use
-of an `assign_sub` op (the `"-="` in the formula) allows concurrent lockless
-updates to the variables:
-
- `shadow_variable = decay * shadow_variable + (1 - decay) * variable`
-
-Reasonable values for `decay` are close to 1.0, typically in the
-multiple-nines range: 0.999, 0.9999, etc.
-
-Example usage when creating a training model:
-
-```python
-# Create variables.
-var0 = tf.Variable(...)
-var1 = tf.Variable(...)
-# ... use the variables to build a training model...
-...
-# Create an op that applies the optimizer. This is what we usually
-# would use as a training op.
-opt_op = opt.minimize(my_loss, [var0, var1])
-
-# Create an ExponentialMovingAverage object
-ema = tf.train.ExponentialMovingAverage(decay=0.9999)
-
-# Create the shadow variables, and add ops to maintain moving averages
-# of var0 and var1.
-maintain_averages_op = ema.apply([var0, var1])
-
-# Create an op that will update the moving averages after each training
-# step. This is what we will use in place of the usual training op.
-with tf.control_dependencies([opt_op]):
- training_op = tf.group(maintain_averages_op)
-
-...train the model by running training_op...
-```
-
-There are two ways to use the moving averages for evaluations:
-
-* Build a model that uses the shadow variables instead of the variables.
- For this, use the `average()` method which returns the shadow variable
- for a given variable.
-* Build a model normally but load the checkpoint files to evaluate by using
- the shadow variable names. For this use the `average_name()` method. See
- the [Saver class](../../api_docs/python/train.md#Saver) for more
- information on restoring saved variables.
-
-Example of restoring the shadow variable values:
-
-```python
-# Create a Saver that loads variables from their saved shadow values.
-shadow_var0_name = ema.average_name(var0)
-shadow_var1_name = ema.average_name(var1)
-saver = tf.train.Saver({shadow_var0_name: var0, shadow_var1_name: var1})
-saver.restore(...checkpoint filename...)
-# var0 and var1 now hold the moving average values
-```
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.__init__(decay, num_updates=None, zero_debias=False, name='ExponentialMovingAverage')` {#ExponentialMovingAverage.__init__}
-
-Creates a new ExponentialMovingAverage object.
-
-The `apply()` method has to be called to create shadow variables and add
-ops to maintain moving averages.
-
-The optional `num_updates` parameter allows one to tweak the decay rate
-dynamically. It is typical to pass the count of training steps, usually
-kept in a variable that is incremented at each step, in which case the
-decay rate is lower at the start of training. This makes moving averages
-move faster. If passed, the actual decay rate used is:
-
- `min(decay, (1 + num_updates) / (10 + num_updates))`
-
-##### Args:
-
-
-* <b>`decay`</b>: Float. The decay to use.
-* <b>`num_updates`</b>: Optional count of number of updates applied to variables.
-* <b>`zero_debias`</b>: If `True`, zero debias moving-averages that are initialized
- with tensors.
-* <b>`name`</b>: String. Optional prefix name to use for the name of ops added in
- `apply()`.
-
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.apply(var_list=None)` {#ExponentialMovingAverage.apply}
-
-Maintains moving averages of variables.
-
-`var_list` must be a list of `Variable` or `Tensor` objects. This method
-creates shadow variables for all elements of `var_list`. Shadow variables
-for `Variable` objects are initialized to the variable's initial value.
-They will be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection.
-For `Tensor` objects, the shadow variables are initialized to 0 and zero
-debiased (see docstring in `assign_moving_average` for more details).
-
-Shadow variables are created with `trainable=False` and added to the
-`GraphKeys.ALL_VARIABLES` collection. They will be returned by calls to
-`tf.global_variables()`.
-
-Returns an op that updates all shadow variables as described above.
-
-Note that `apply()` can be called multiple times with different lists of
-variables.
-
-##### Args:
-
-
-* <b>`var_list`</b>: A list of Variable or Tensor objects. The variables
- and Tensors must be of types float16, float32, or float64.
-
-##### Returns:
-
- An Operation that updates the moving averages.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the arguments are not all float16, float32, or float64.
-* <b>`ValueError`</b>: If the moving average of one of the variables is already
- being computed.
-
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.average_name(var)` {#ExponentialMovingAverage.average_name}
-
-Returns the name of the `Variable` holding the average for `var`.
-
-The typical scenario for `ExponentialMovingAverage` is to compute moving
-averages of variables during training, and restore the variables from the
-computed moving averages during evaluations.
-
-To restore variables, you have to know the name of the shadow variables.
-That name and the original variable can then be passed to a `Saver()` object
-to restore the variable from the moving average value with:
- `saver = tf.train.Saver({ema.average_name(var): var})`
-
-`average_name()` can be called whether or not `apply()` has been called.
-
-##### Args:
-
-
-* <b>`var`</b>: A `Variable` object.
-
-##### Returns:
-
- A string: The name of the variable that will be used or was used
-  by the `ExponentialMovingAverage` class to hold the moving average of
- `var`.
-
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.average(var)` {#ExponentialMovingAverage.average}
-
-Returns the `Variable` holding the average of `var`.
-
-##### Args:
-
-
-* <b>`var`</b>: A `Variable` object.
-
-##### Returns:
-
- A `Variable` object or `None` if the moving average of `var`
- is not maintained.
-
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.variables_to_restore(moving_avg_variables=None)` {#ExponentialMovingAverage.variables_to_restore}
-
-Returns a map of names to `Variables` to restore.
-
-If a variable has a moving average, use the moving average variable name as
-the restore name; otherwise, use the variable name.
-
-For example,
-
-```python
- variables_to_restore = ema.variables_to_restore()
- saver = tf.train.Saver(variables_to_restore)
-```
-
-Below is an example of such mapping:
-
-```
- conv/batchnorm/gamma/ExponentialMovingAverage: conv/batchnorm/gamma,
- conv_4/conv2d_params/ExponentialMovingAverage: conv_4/conv2d_params,
- global_step: global_step
-```
-
-##### Args:
-
-
-* <b>`moving_avg_variables`</b>: a list of variables for which the moving
-    average variable name should be used when restoring. If None, it defaults
-    to variables.moving_average_variables() + variables.trainable_variables()
-
-##### Returns:
-
- A map from restore_names to variables. The restore_name can be the
-  moving_average version of the variable name if it exists, or the original
- variable name.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.GlobalStepWaiterHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.GlobalStepWaiterHook.md
deleted file mode 100644
index 4710e841d1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.GlobalStepWaiterHook.md
+++ /dev/null
@@ -1,87 +0,0 @@
-Delays execution until the global step reaches `wait_until_step`.
-
-This hook delays execution until the global step reaches `wait_until_step`. It
-is used to gradually start workers in distributed settings. One example usage
-would be setting `wait_until_step=int(K*log(task_id+1))` assuming that
-task_id=0 is the chief.
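-
-For example, a minimal runnable sketch (the global-step setup and the wait
-step of 0 for the chief are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-global_step = tf.contrib.framework.get_or_create_global_step()
-train_op = tf.assign_add(global_step, 1)
-
-# On the chief (task_id=0) the wait step is 0, so it starts immediately;
-# workers with larger task ids would pass a larger wait_until_step.
-hook = tf.train.GlobalStepWaiterHook(wait_until_step=0)
-with tf.train.MonitoredTrainingSession(hooks=[hook]) as sess:
-  for _ in range(3):
-    sess.run(train_op)
-```
-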
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.__init__(wait_until_step)` {#GlobalStepWaiterHook.__init__}
-
-Create a _GlobalStepWaiterHook.
-
-##### Args:
-
-
-* <b>`wait_until_step`</b>: an `int`, the global step to wait for before
-  allowing execution to proceed.
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.after_create_session(session, coord)` {#GlobalStepWaiterHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.after_run(run_context, run_values)` {#GlobalStepWaiterHook.after_run}
-
-Called after each call to run().
-
-The `run_values` argument contains the results of the ops/tensors that were
-requested by `before_run()`.
-
-The `run_context` argument is the same one sent to the `before_run` call.
-`run_context.request_stop()` can be called to stop the iteration.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-* <b>`run_values`</b>: A SessionRunValues object.
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.before_run(run_context)` {#GlobalStepWaiterHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.begin()` {#GlobalStepWaiterHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.end(session)` {#GlobalStepWaiterHook.end}
-
-Called at the end of session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.SessionRunArgs.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.SessionRunArgs.__new__.md
deleted file mode 100644
index 2dd4d8c8b3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.SessionRunArgs.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.train.SessionRunArgs.__new__(cls, fetches, feed_dict=None, options=None)` {#SessionRunArgs.__new__}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.SessionRunHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.SessionRunHook.md
deleted file mode 100644
index 00bd190dbf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.SessionRunHook.md
+++ /dev/null
@@ -1,97 +0,0 @@
-Hook to extend calls to MonitoredSession.run().
-- - -
-
-#### `tf.train.SessionRunHook.after_create_session(session, coord)` {#SessionRunHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.SessionRunHook.after_run(run_context, run_values)` {#SessionRunHook.after_run}
-
-Called after each call to run().
-
-The `run_values` argument contains the results of the ops/tensors that were
-requested by `before_run()`.
-
-The `run_context` argument is the same one sent to the `before_run` call.
-`run_context.request_stop()` can be called to stop the iteration.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-* <b>`run_values`</b>: A SessionRunValues object.
-
-
-- - -
-
-#### `tf.train.SessionRunHook.before_run(run_context)` {#SessionRunHook.before_run}
-
-Called before each call to run().
-
-You can return from this call a `SessionRunArgs` object indicating ops or
-tensors to add to the upcoming `run()` call. These ops/tensors will be run
-together with the ops/tensors originally passed to the `run()` call.
-The run args you return can also contain feeds to be added to the run()
-call.
-
-The `run_context` argument is a `SessionRunContext` that provides
-information about the upcoming `run()` call: the originally requested
-op/tensors, the TensorFlow Session.
-
-At this point the graph is finalized and you cannot add ops.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-
-##### Returns:
-
- None or a `SessionRunArgs` object.
-
-
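-For example, a minimal sketch of a custom hook that uses `before_run` to fetch
-an extra tensor on every `run()` call (the hook class and its `loss_op`
-argument are illustrative, not part of the API):
-
-```python
-import tensorflow as tf
-
-class LossLoggerHook(tf.train.SessionRunHook):
-
-  def __init__(self, loss_op):
-    self._loss_op = loss_op
-
-  def before_run(self, run_context):
-    # Ask the upcoming run() call to also fetch the loss tensor.
-    return tf.train.SessionRunArgs(fetches=self._loss_op)
-
-  def after_run(self, run_context, run_values):
-    # run_values.results holds the value of the fetches requested above.
-    print('loss: %s' % run_values.results)
-```
-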
-- - -
-
-#### `tf.train.SessionRunHook.begin()` {#SessionRunHook.begin}
-
-Called once before using the session.
-
-When called, the default graph is the one that will be launched in the
-session. The hook can modify the graph by adding new operations to it.
-After the `begin()` call the graph will be finalized and the other callbacks
-cannot modify the graph anymore. A second call of `begin()` on the same
-graph should not change the graph.
-
-
-- - -
-
-#### `tf.train.SessionRunHook.end(session)` {#SessionRunHook.end}
-
-Called at the end of session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.add_queue_runner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.add_queue_runner.md
deleted file mode 100644
index f5b9549ad8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.add_queue_runner.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.train.add_queue_runner(qr, collection='queue_runners')` {#add_queue_runner}
-
-Adds a `QueueRunner` to a collection in the graph.
-
-When building a complex model that uses many queues it is often difficult to
-gather all the queue runners that need to be run. This convenience function
-allows you to add a queue runner to a well-known collection in the graph.
-
-The companion method `start_queue_runners()` can be used to start threads for
-all the collected queue runners.
-
-##### Args:
-
-
-* <b>`qr`</b>: A `QueueRunner`.
-* <b>`collection`</b>: A `GraphKey` specifying the graph collection to add
- the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`.
-
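-For example, a minimal sketch of the typical pattern (the queue and its
-enqueue op are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-queue = tf.FIFOQueue(capacity=32, dtypes=[tf.float32])
-enqueue_op = queue.enqueue([tf.random_normal([])])
-qr = tf.train.QueueRunner(queue, [enqueue_op] * 4)
-tf.train.add_queue_runner(qr)  # added to GraphKeys.QUEUE_RUNNERS
-
-with tf.Session() as sess:
-  coord = tf.train.Coordinator()
-  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
-  print(sess.run(queue.dequeue()))
-  coord.request_stop()
-  coord.join(threads)
-```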
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.limit_epochs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.limit_epochs.md
deleted file mode 100644
index bcd9d32c30..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.train.limit_epochs.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.train.limit_epochs(tensor, num_epochs=None, name=None)` {#limit_epochs}
-
-Returns tensor `num_epochs` times and then raises an `OutOfRange` error.
-
-Note: creates local counter `epochs`. Use `local_variables_initializer()` to
-initialize local variables.
-
-##### Args:
-
-
-* <b>`tensor`</b>: Any `Tensor`.
-* <b>`num_epochs`</b>: A positive integer (optional). If specified, limits the number
- of steps the output tensor may be evaluated.
-* <b>`name`</b>: A name for the operations (optional).
-
-##### Returns:
-
- tensor or `OutOfRange`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `num_epochs` is invalid.
-
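-For example, a minimal sketch of the behavior (the values are illustrative):
-
-```python
-import tensorflow as tf
-
-limited = tf.train.limit_epochs(tf.constant([1, 2, 3]), num_epochs=2)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())  # initializes the `epochs` counter
-  print(sess.run(limited))  # [1 2 3]  (epoch 1)
-  print(sess.run(limited))  # [1 2 3]  (epoch 2)
-  try:
-    sess.run(limited)       # third evaluation raises
-  except tf.errors.OutOfRangeError:
-    print('done')
-```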
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.tuple.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.tuple.md
deleted file mode 100644
index 503a98d625..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.tuple.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.tuple(tensors, name=None, control_inputs=None)` {#tuple}
-
-Group tensors together.
-
-This creates a tuple of tensors with the same values as the `tensors`
-argument, except that the value of each tensor is only returned after the
-values of all tensors have been computed.
-
-`control_inputs` contains additional ops that have to finish before this op
-finishes, but whose outputs are not returned.
-
-This can be used as a "join" mechanism for parallel computations: all the
-argument tensors can be computed in parallel, but the values of any tensor
-returned by `tuple` are only available after all the parallel computations
-are done.
-
-See also `group` and `with_dependencies`.
-
-##### Args:
-
-
-* <b>`tensors`</b>: A list of `Tensor`s or `IndexedSlices`, some entries can be `None`.
-* <b>`name`</b>: (optional) A name to use as a `name_scope` for the operation.
-* <b>`control_inputs`</b>: List of additional ops to finish before returning.
-
-##### Returns:
-
- Same as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `tensors` does not contain any `Tensor` or `IndexedSlices`.
-* <b>`TypeError`</b>: If `control_inputs` is not a list of `Operation` or `Tensor`
- objects.
-
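-For example, a minimal sketch of the join behavior (the constants are
-illustrative):
-
-```python
-import tensorflow as tf
-
-a = tf.constant(1.0)
-b = tf.constant(2.0)
-
-# joined_a and joined_b have the same values as a and b, but neither is
-# returned until both a and b have been computed.
-joined_a, joined_b = tf.tuple([a, b])
-
-with tf.Session() as sess:
-  print(sess.run([joined_a, joined_b]))  # [1.0, 2.0]
-```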
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.zeros_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.zeros_initializer.md
deleted file mode 100644
index 0bfa37f6cf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf.zeros_initializer.md
+++ /dev/null
@@ -1,15 +0,0 @@
-Initializer that generates tensors initialized to 0.
-- - -
-
-#### `tf.zeros_initializer.__call__(shape, dtype=None, partition_info=None)` {#zeros_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.zeros_initializer.__init__(dtype=tf.float32)` {#zeros_initializer.__init__}
-
-
-
-
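-For example, a minimal usage sketch (the variable name and shape are
-illustrative):
-
-```python
-import tensorflow as tf
-
-v = tf.get_variable('v', shape=[2, 3], initializer=tf.zeros_initializer())
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(v))  # [[ 0.  0.  0.], [ 0.  0.  0.]]
-```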
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf_debug.DumpingDebugWrapperSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf_debug.DumpingDebugWrapperSession.md
deleted file mode 100644
index f86f63d7d9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf_debug.DumpingDebugWrapperSession.md
+++ /dev/null
@@ -1,140 +0,0 @@
-Debug Session wrapper that dumps debug data to filesystem.
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.__enter__()` {#DumpingDebugWrapperSession.__enter__}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.__exit__(exec_type, exec_value, exec_tb)` {#DumpingDebugWrapperSession.__exit__}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.__init__(sess, session_root, watch_fn=None, log_usage=True)` {#DumpingDebugWrapperSession.__init__}
-
-Constructor of DumpingDebugWrapperSession.
-
-##### Args:
-
-
-* <b>`sess`</b>: The TensorFlow `Session` object being wrapped.
-* <b>`session_root`</b>: (`str`) Path to the session root directory. Must be a
- directory that does not exist or an empty directory. If the directory
- does not exist, it will be created by the debugger core during debug
- [`Session.run()`](../../../g3doc/api_docs/python/client.md#session.run)
- calls.
- As the `run()` calls occur, subdirectories will be added to
-  `session_root`. The subdirectories' names have the following pattern:
- run_<epoch_time_stamp>_<uuid>
- E.g., run_1480734393835964_ad4c953a85444900ae79fc1b652fb324
-* <b>`watch_fn`</b>: (`Callable`) A Callable that can be used to define per-run
- debug ops and watched tensors. See the doc of
- `NonInteractiveDebugWrapperSession.__init__()` for details.
-* <b>`log_usage`</b>: (`bool`) whether the usage of this class is to be logged.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `session_root` is an existing and non-empty directory or
- if `session_root` is a file.
-
-
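-For example, a minimal usage sketch (the graph and the dump root
-`/tmp/tfdbg_dumps` are illustrative assumptions; the directory must not yet
-exist or must be empty):
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-x = tf.constant([1.0, 2.0])
-y = tf.reduce_sum(x)
-
-sess = tf.Session()
-sess = tf_debug.DumpingDebugWrapperSession(sess, '/tmp/tfdbg_dumps')
-sess.run(y)  # dumps debug data under /tmp/tfdbg_dumps/run_<...>_<...>
-```
-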
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.close()` {#DumpingDebugWrapperSession.close}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.graph` {#DumpingDebugWrapperSession.graph}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.invoke_node_stepper(node_stepper, restore_variable_values_on_exit=True)` {#DumpingDebugWrapperSession.invoke_node_stepper}
-
-See doc of BaseDebugWrapperSession.invoke_node_stepper.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.on_run_end(request)` {#DumpingDebugWrapperSession.on_run_end}
-
-See doc of BaseDebugWrapperSession.on_run_end.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.on_run_start(request)` {#DumpingDebugWrapperSession.on_run_start}
-
-See doc of BaseDebugWrapperSession.on_run_start.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.on_session_init(request)` {#DumpingDebugWrapperSession.on_session_init}
-
-See doc of BaseDebugWrapperSession.on_session_init.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.partial_run(handle, fetches, feed_dict=None)` {#DumpingDebugWrapperSession.partial_run}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.partial_run_setup(fetches, feeds=None)` {#DumpingDebugWrapperSession.partial_run_setup}
-
-Sets up the feeds and fetches for partial runs in the session.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#DumpingDebugWrapperSession.run}
-
-Wrapper around Session.run() that inserts tensor watch options.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as the `fetches` arg to regular `Session.run()`.
-* <b>`feed_dict`</b>: Same as the `feed_dict` arg to regular `Session.run()`.
-* <b>`options`</b>: Same as the `options` arg to regular `Session.run()`.
-* <b>`run_metadata`</b>: Same as the `run_metadata` arg to regular `Session.run()`.
-
-##### Returns:
-
- Simply forwards the output of the wrapped `Session.run()` call.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: On invalid `OnRunStartAction` value.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.sess_str` {#DumpingDebugWrapperSession.sess_str}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.session` {#DumpingDebugWrapperSession.session}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf_debug.watch_graph_with_blacklists.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf_debug.watch_graph_with_blacklists.md
deleted file mode 100644
index 72af627344..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard0/tf_debug.watch_graph_with_blacklists.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf_debug.watch_graph_with_blacklists(run_options, graph, debug_ops='DebugIdentity', debug_urls=None, node_name_regex_blacklist=None, op_type_regex_blacklist=None, global_step=-1)` {#watch_graph_with_blacklists}
-
-Add debug tensor watches, blacklisting nodes and op types.
-
-This is similar to `watch_graph()`, but the node names and op types are
-blacklisted, instead of whitelisted.
-
-N.B.: Under certain circumstances, not all specified `Tensor`s will actually
-  be watched (e.g., nodes that are constant-folded during runtime will
- not be watched).
-
-##### Args:
-
-
-* <b>`run_options`</b>: An instance of `config_pb2.RunOptions` to be modified.
-* <b>`graph`</b>: An instance of `ops.Graph`.
-* <b>`debug_ops`</b>: (`str` or `list` of `str`) name(s) of the debug op(s) to use.
-* <b>`debug_urls`</b>: URL(s) to send debug values to, e.g.,
- `file:///tmp/tfdbg_dump_1`, `grpc://localhost:12345`.
-* <b>`node_name_regex_blacklist`</b>: Regular-expression blacklist for node_name.
- This should be a string, e.g., `"(weight_[0-9]+|bias_.*)"`.
-* <b>`op_type_regex_blacklist`</b>: Regular-expression blacklist for the op type of
- nodes, e.g., `"(Variable|Add)"`.
- If both node_name_regex_blacklist and op_type_regex_blacklist
- are set, the two filtering operations will occur in a logical `OR`
- relation. In other words, a node will be excluded if it hits either of
- the two blacklists; a node will be included if and only if it hits
- neither of the blacklists.
-* <b>`global_step`</b>: (`int`) Optional global_step count for this debug tensor
- watch.
-
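-For example, a minimal sketch (the graph, the regexes, and the debug URL are
-illustrative assumptions):
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-a = tf.Variable(1.0, name='a')
-b = tf.add(a, a, name='b')
-
-run_options = tf.RunOptions()
-# Watch everything except Variable ops and nodes under the name scope "a".
-tf_debug.watch_graph_with_blacklists(
-    run_options,
-    tf.get_default_graph(),
-    node_name_regex_blacklist='a/.*',
-    op_type_regex_blacklist='Variable.*',
-    debug_urls='file:///tmp/tfdbg_dump_1')
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  sess.run(b, options=run_options)  # debug tensors are sent to the file URL
-```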
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.AggregationMethod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.AggregationMethod.md
deleted file mode 100644
index ee655fbd25..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.AggregationMethod.md
+++ /dev/null
@@ -1,10 +0,0 @@
-A class listing aggregation methods used to combine gradients.
-
-Computing partial derivatives can require aggregating gradient
-contributions. This class lists the various methods that can
-be used to combine gradients in the graph:
-
-* `ADD_N`: All of the gradient terms are summed as part of one
- operation using the "AddN" op. It has the property that all
- gradients must be ready before any aggregation is performed.
-* `DEFAULT`: The system-chosen default aggregation method.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.ConditionalAccumulatorBase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.ConditionalAccumulatorBase.md
deleted file mode 100644
index f41d77e7db..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.ConditionalAccumulatorBase.md
+++ /dev/null
@@ -1,79 +0,0 @@
-A conditional accumulator for aggregating gradients.
-
-Up-to-date gradients (i.e., time step at which gradient was computed is
-equal to the accumulator's time step) are added to the accumulator.
-
-Extraction of the average gradient is blocked until the required number of
-gradients has been accumulated.
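-
-A minimal sketch using the concrete `tf.ConditionalAccumulator` subclass (the
-values are illustrative):
-
-```python
-import tensorflow as tf
-
-acc = tf.ConditionalAccumulator(dtype=tf.float32, shape=())
-apply_op = acc.apply_grad(tf.constant(2.0), local_step=0)
-take_op = acc.take_grad(num_required=1)  # blocks until 1 gradient is in
-
-with tf.Session() as sess:
-  sess.run(apply_op)
-  print(sess.run(take_op))  # 2.0, the average of the accumulated gradients
-```
-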
-- - -
-
-#### `tf.ConditionalAccumulatorBase.__init__(dtype, shape, accumulator_ref)` {#ConditionalAccumulatorBase.__init__}
-
-Creates a new ConditionalAccumulator.
-
-##### Args:
-
-
-* <b>`dtype`</b>: Datatype of the accumulated gradients.
-* <b>`shape`</b>: Shape of the accumulated gradients.
-* <b>`accumulator_ref`</b>: A handle to the conditional accumulator, created by
-  subclasses.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.accumulator_ref` {#ConditionalAccumulatorBase.accumulator_ref}
-
-The underlying accumulator reference.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.dtype` {#ConditionalAccumulatorBase.dtype}
-
-The datatype of the gradients accumulated by this accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.name` {#ConditionalAccumulatorBase.name}
-
-The name of the underlying accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.num_accumulated(name=None)` {#ConditionalAccumulatorBase.num_accumulated}
-
-Number of gradients that have currently been aggregated in accumulator.
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Number of accumulated gradients currently in accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.set_global_step(new_global_step, name=None)` {#ConditionalAccumulatorBase.set_global_step}
-
-Sets the global time step of the accumulator.
-
-The operation logs a warning if we attempt to set to a time step that is
-lower than the accumulator's own time step.
-
-##### Args:
-
-
-* <b>`new_global_step`</b>: Value of the new time step. Can be a variable or a
-  constant.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Operation that sets the accumulator's time step.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md
deleted file mode 100644
index 1d271e6eab..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.FIFOQueue.md
+++ /dev/null
@@ -1,299 +0,0 @@
-A queue implementation that dequeues elements in first-in first-out order.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-- - -
-
-#### `tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')` {#FIFOQueue.__init__}
-
-Creates a queue that dequeues elements in a first-in first-out order.
-
-A `FIFOQueue` has bounded capacity; supports multiple concurrent
-producers and consumers; and provides exactly-once delivery.
-
-A `FIFOQueue` holds a list of up to `capacity` elements. Each
-element is a fixed-length tuple of tensors whose dtypes are
-described by `dtypes`, and whose shapes are optionally described
-by the `shapes` argument.
-
-If the `shapes` argument is specified, each component of a queue
-element must have the respective fixed shape. If it is
-unspecified, different queue elements may have different shapes,
-but the use of `dequeue_many` is disallowed.
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
- the number of tensors in each queue element.
-* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects
- with the same length as `dtypes`, or `None`.
-* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
- with the same length as `dtypes`, or `None`. If specified the dequeue
- methods return a dictionary with the names as keys.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
-
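-For example, a minimal usage sketch (the capacity and values are
-illustrative):
-
-```python
-import tensorflow as tf
-
-q = tf.FIFOQueue(capacity=3, dtypes=[tf.int32], shapes=[[]])
-enqueue = q.enqueue_many([[10, 20, 30]])
-dequeue = q.dequeue()
-
-with tf.Session() as sess:
-  sess.run(enqueue)
-  print(sess.run(dequeue))  # 10
-  print(sess.run(dequeue))  # 20
-```
-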
-- - -
-
-#### `tf.FIFOQueue.close(cancel_pending_enqueues=False, name=None)` {#FIFOQueue.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.FIFOQueue.dequeue(name=None)` {#FIFOQueue.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.FIFOQueue.dequeue_many(n, name=None)` {#FIFOQueue.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.FIFOQueue.dequeue_up_to(n, name=None)` {#FIFOQueue.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.FIFOQueue.dtypes` {#FIFOQueue.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.FIFOQueue.enqueue(vals, name=None)` {#FIFOQueue.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.FIFOQueue.enqueue_many(vals, name=None)` {#FIFOQueue.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.FIFOQueue.from_list(index, queues)` {#FIFOQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.FIFOQueue.name` {#FIFOQueue.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.FIFOQueue.names` {#FIFOQueue.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.FIFOQueue.queue_ref` {#FIFOQueue.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.FIFOQueue.shapes` {#FIFOQueue.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.FIFOQueue.size(name=None)` {#FIFOQueue.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.IdentityReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.IdentityReader.md
deleted file mode 100644
index 03f6211303..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.IdentityReader.md
+++ /dev/null
@@ -1,175 +0,0 @@
-A Reader that outputs the queued work as both the key and value.
-
-To use, enqueue strings in a Queue. Read will take the front
-work string and output (work, work).
-
-See ReaderBase for supported methods.
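-
-A minimal usage sketch (the queued strings are illustrative):
-
-```python
-import tensorflow as tf
-
-queue = tf.FIFOQueue(capacity=10, dtypes=[tf.string])
-enqueue = queue.enqueue_many([['file1', 'file2']])
-reader = tf.IdentityReader()
-key, value = reader.read(queue)
-
-with tf.Session() as sess:
-  sess.run(enqueue)
-  print(sess.run([key, value]))  # ['file1', 'file1']: key equals value
-```
-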
-- - -
-
-#### `tf.IdentityReader.__init__(name=None)` {#IdentityReader.__init__}
-
-Create a IdentityReader.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.IdentityReader.num_records_produced(name=None)` {#IdentityReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.num_work_units_completed(name=None)` {#IdentityReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.read(queue, name=None)` {#IdentityReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.read_up_to(queue, num_records, name=None)` {#IdentityReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than num_records even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.reader_ref` {#IdentityReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.IdentityReader.reset(name=None)` {#IdentityReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.IdentityReader.restore_state(state, name=None)` {#IdentityReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.IdentityReader.serialize_state(name=None)` {#IdentityReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.supports_serialize` {#IdentityReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.NoGradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.NoGradient.md
deleted file mode 100644
index 7181713d26..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.NoGradient.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.NoGradient(op_type)` {#NoGradient}
-
-Specifies that ops of type `op_type` are not differentiable.
-
-This function should *not* be used for operations that have a
-well-defined gradient that is not yet implemented.
-
-This function is only used when defining a new op type. It may be
-used for ops such as `tf.size()` that are not differentiable. For
-example:
-
-```python
-tf.NotDifferentiable("Size")
-```
-
-The gradient computed for 'op_type' will then propagate zeros.
-
-For ops that have a well-defined gradient but are not yet implemented,
-no declaration should be made, and an error *must* be thrown if
-an attempt to request its gradient is made.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The string type of an operation. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_type` is not a string.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Print.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Print.md
deleted file mode 100644
index b1ec7c1af0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Print.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None)` {#Print}
-
-Prints a list of tensors.
-
-This is an identity op with the side effect of printing `data` when
-evaluating.
-
-##### Args:
-
-
-* <b>`input_`</b>: A tensor passed through this op.
-* <b>`data`</b>: A list of tensors to print out when op is evaluated.
-* <b>`message`</b>: A string, prefix of the error message.
-* <b>`first_n`</b>: Only log the first `first_n` evaluations. Negative numbers
-  always log; this is the default.
-* <b>`summarize`</b>: Only print this many entries of each tensor. If None, then a
- maximum of 3 elements are printed per input tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same tensor as `input_`.
-
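-For example, a minimal usage sketch (the message text and values are
-illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1.0, 2.0, 3.0])
-# The print happens only when the returned tensor is evaluated.
-x = tf.Print(x, [x, tf.reduce_sum(x)], message='x and its sum: ')
-
-with tf.Session() as sess:
-  sess.run(x)  # prints to standard error
-```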
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md
deleted file mode 100644
index abab577434..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Tensor.md
+++ /dev/null
@@ -1,940 +0,0 @@
-Represents one of the outputs of an `Operation`.
-
-A `Tensor` is a symbolic handle to one of the outputs of an
-`Operation`. It does not hold the values of that operation's output,
-but instead provides a means of computing those values in a
-TensorFlow [`Session`](../../api_docs/python/client.md#Session).
-
-This class has two primary purposes:
-
-1. A `Tensor` can be passed as an input to another `Operation`.
- This builds a dataflow connection between operations, which
- enables TensorFlow to execute an entire `Graph` that represents a
- large, multi-step computation.
-
-2. After the graph has been launched in a session, the value of the
- `Tensor` can be computed by passing it to
- [`Session.run()`](../../api_docs/python/client.md#Session.run).
- `t.eval()` is a shortcut for calling
- `tf.get_default_session().run(t)`.
-
-In the following example, `c`, `d`, and `e` are symbolic `Tensor`
-objects, whereas `result` is a numpy array that stores a concrete
-value:
-
-```python
-# Build a dataflow graph.
-c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
-d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
-e = tf.matmul(c, d)
-
-# Construct a `Session` to execute the graph.
-sess = tf.Session()
-
-# Execute the graph and store the value that `e` represents in `result`.
-result = sess.run(e)
-```
-- - -
-
-#### `tf.Tensor.__abs__(x, name=None)` {#Tensor.__abs__}
-
-Computes the absolute value of a tensor.
-
-Given a tensor of real numbers `x`, this operation returns a tensor
-containing the absolute value of each element in `x`. For example, if x is
-an input element and y is an output element, this operation computes
-\\(y = |x|\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor` of type `float32`, `float64`, `int32`, or
- `int64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` the same size and type as `x` with absolute
- values.
-
-
-- - -
-
-#### `tf.Tensor.__add__(x, y)` {#Tensor.__add__}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__and__(x, y)` {#Tensor.__and__}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__bool__()` {#Tensor.__bool__}
-
-Dummy method to prevent a tensor from being used as a Python `bool`.
-
-This overload raises a `TypeError` when the user inadvertently
-treats a `Tensor` as a boolean (e.g. in an `if` statement). For
-example:
-
-```python
-if tf.constant(True): # Will raise.
- # ...
-
-if tf.constant(5) < tf.constant(7): # Will raise.
- # ...
-```
-
-This disallows ambiguities between testing the Python value vs testing the
-dynamic condition of the `Tensor`.
-
-##### Raises:
-
- `TypeError`.
-
-
-- - -
-
-#### `tf.Tensor.__div__(x, y)` {#Tensor.__div__}
-
-Divide two values using Python 2 semantics. Used for Tensor.__div__.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-#### `tf.Tensor.__eq__(other)` {#Tensor.__eq__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__floordiv__(x, y)` {#Tensor.__floordiv__}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
-
-- - -
-
-#### `tf.Tensor.__ge__(x, y, name=None)` {#Tensor.__ge__}
-
-Returns the truth value of (x >= y) element-wise.
-
-*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__getitem__(tensor, slice_spec, var=None)` {#Tensor.__getitem__}
-
-Overload for Tensor.__getitem__.
-
-This operation extracts the specified region from the tensor.
-The notation is similar to NumPy with the restriction that
-currently only basic indexing is supported. That means that
-using a tensor as input is not currently allowed.
-
-Some useful examples:
-
-```python
-# strip leading and trailing 2 elements
-foo = tf.constant([1,2,3,4,5,6])
-print(foo[2:-2].eval()) # => [3,4]
-
-# take every other row and reverse every column
-foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
-print(foo[::2,::-1].eval()) # => [[3,2,1], [9,8,7]]
-
-# Insert another dimension
-foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
-print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
-print(foo[:, tf.newaxis, :].eval()) # => [[[1,2,3]], [[4,5,6]], [[7,8,9]]]
-print(foo[:, :, tf.newaxis].eval()) # => [[[1],[2],[3]], [[4],[5],[6]], [[7],[8],[9]]]
-
-# Ellipses (3 equivalent operations)
-print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
-print(foo[tf.newaxis, ...].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
-print(foo[tf.newaxis].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
-```
-
-##### Notes:
-
- - `tf.newaxis` is `None` as in NumPy.
- - An implicit ellipsis is placed at the end of the `slice_spec`
- - NumPy advanced indexing is currently not supported.
-
-##### Args:
-
-
-* <b>`tensor`</b>: An ops.Tensor object.
-* <b>`slice_spec`</b>: The arguments to Tensor.__getitem__.
-* <b>`var`</b>: In the case of variable slice assignment, the Variable
- object to slice (i.e. tensor is the read-only view of this
- variable).
-
-##### Returns:
-
- The appropriate slice of "tensor", based on "slice_spec".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If a slice range is negative size.
-* <b>`TypeError`</b>: If the slice indices aren't int, slice, or Ellipsis.
-
-
-- - -
-
-#### `tf.Tensor.__gt__(x, y, name=None)` {#Tensor.__gt__}
-
-Returns the truth value of (x > y) element-wise.
-
-*NOTE*: `Greater` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__hash__()` {#Tensor.__hash__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__init__(op, value_index, dtype)` {#Tensor.__init__}
-
-Creates a new `Tensor`.
-
-##### Args:
-
-
-* <b>`op`</b>: An `Operation`. `Operation` that computes this tensor.
-* <b>`value_index`</b>: An `int`. Index of the operation's endpoint that produces
- this tensor.
-* <b>`dtype`</b>: A `DType`. Type of elements stored in this tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the op is not an `Operation`.
-
-
-- - -
-
-#### `tf.Tensor.__invert__(x, name=None)` {#Tensor.__invert__}
-
-Returns the truth value of NOT x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__iter__()` {#Tensor.__iter__}
-
-Dummy method to prevent iteration. Do not call.
-
-NOTE(mrry): If we register __getitem__ as an overloaded operator,
-Python will valiantly attempt to iterate over the Tensor from 0 to
-infinity. Declaring this method prevents this unintended
-behavior.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: when invoked.
-
-
-- - -
-
-#### `tf.Tensor.__le__(x, y, name=None)` {#Tensor.__le__}
-
-Returns the truth value of (x <= y) element-wise.
-
-*NOTE*: `LessEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__lt__(x, y, name=None)` {#Tensor.__lt__}
-
-Returns the truth value of (x < y) element-wise.
-
-*NOTE*: `Less` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__mod__(x, y)` {#Tensor.__mod__}
-
-Returns element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__mul__(x, y)` {#Tensor.__mul__}
-
-Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
-
-
-- - -
-
-#### `tf.Tensor.__neg__(x, name=None)` {#Tensor.__neg__}
-
-Computes numerical negative value element-wise.
-
-I.e., \\(y = -x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__nonzero__()` {#Tensor.__nonzero__}
-
-Dummy method to prevent a tensor from being used as a Python `bool`.
-
-This is the Python 2.x counterpart to `__bool__()` above.
-
-##### Raises:
-
- `TypeError`.
-
-
-- - -
-
-#### `tf.Tensor.__or__(x, y)` {#Tensor.__or__}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__pow__(x, y)` {#Tensor.__pow__}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Tensor.__radd__(y, x)` {#Tensor.__radd__}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__rand__(y, x)` {#Tensor.__rand__}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__rdiv__(y, x)` {#Tensor.__rdiv__}
-
-Divide two values using Python 2 semantics. Used for `Tensor.__div__`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-#### `tf.Tensor.__repr__()` {#Tensor.__repr__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__rfloordiv__(y, x)` {#Tensor.__rfloordiv__}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
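-
-A minimal sketch with non-negative values (the comment shows what the
-expression would evaluate to):
-
-```python
-a = tf.constant([7., 9.])
-b = tf.constant([2., 2.])
-a // b  # ==> [3., 4.], integer values represented as floating point
-```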
-
-
-- - -
-
-#### `tf.Tensor.__rmod__(y, x)` {#Tensor.__rmod__}
-
-Returns element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__rmul__(y, x)` {#Tensor.__rmul__}
-
-Dispatches element-wise (cwise) multiplication for "Dense*Dense" and
-"Dense*Sparse" operands.
-
-
-- - -
-
-#### `tf.Tensor.__ror__(y, x)` {#Tensor.__ror__}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Tensor.__rpow__(y, x)` {#Tensor.__rpow__}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Tensor.__rsub__(y, x)` {#Tensor.__rsub__}
-
-Returns x - y element-wise.
-
-*NOTE*: `Sub` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__rtruediv__(y, x)` {#Tensor.__rtruediv__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__rxor__(y, x)` {#Tensor.__rxor__}
-
-x ^ y = (x | y) & ~(x & y).
-
-
-- - -
-
-#### `tf.Tensor.__str__()` {#Tensor.__str__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__sub__(x, y)` {#Tensor.__sub__}
-
-Returns x - y element-wise.
-
-*NOTE*: `Sub` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Tensor.__truediv__(x, y)` {#Tensor.__truediv__}
-
-
-
-
-- - -
-
-#### `tf.Tensor.__xor__(x, y)` {#Tensor.__xor__}
-
-x ^ y = (x | y) & ~(x & y).
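-
-For example (operands must be `bool` tensors; the comment shows what the
-expression would evaluate to):
-
-```python
-a = tf.constant([True, True, False])
-b = tf.constant([True, False, False])
-a ^ b  # ==> [False, True, False]
-```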
-
-
-- - -
-
-#### `tf.Tensor.consumers()` {#Tensor.consumers}
-
-Returns a list of `Operation`s that consume this tensor.
-
-##### Returns:
-
- A list of `Operation`s.
-
-
-- - -
-
-#### `tf.Tensor.device` {#Tensor.device}
-
-The name of the device on which this tensor will be produced, or None.
-
-
-- - -
-
-#### `tf.Tensor.dtype` {#Tensor.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.Tensor.eval(feed_dict=None, session=None)` {#Tensor.eval}
-
-Evaluates this tensor in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for the operation that produces this
-tensor.
-
-*N.B.* Before invoking `Tensor.eval()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
- description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this tensor. If
- none, the default session will be used.
-
-##### Returns:
-
- A numpy array corresponding to the value of this tensor.
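-
-A minimal usage sketch:
-
-```python
-c = tf.constant([1.0, 2.0])
-d = c * 2.0
-with tf.Session():   # installs a default session
-  print(d.eval())    # ==> [ 2.  4.], a numpy array
-```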
-
-
-- - -
-
-#### `tf.Tensor.get_shape()` {#Tensor.get_shape}
-
-Alias of `Tensor.shape`.
-
-
-- - -
-
-#### `tf.Tensor.graph` {#Tensor.graph}
-
-The `Graph` that contains this tensor.
-
-
-- - -
-
-#### `tf.Tensor.name` {#Tensor.name}
-
-The string name of this tensor.
-
-
-- - -
-
-#### `tf.Tensor.op` {#Tensor.op}
-
-The `Operation` that produces this tensor as an output.
-
-
-- - -
-
-#### `tf.Tensor.set_shape(shape)` {#Tensor.set_shape}
-
-Updates the shape of this tensor.
-
-This method can be called multiple times, and will merge the given
-`shape` with the current shape of this tensor. It can be used to
-provide additional information about the shape of this tensor that
-cannot be inferred from the graph alone. For example, this can be used
-to provide additional information about the shapes of images:
-
-```python
-_, image_data = tf.TFRecordReader(...).read(...)
-image = tf.image.decode_png(image_data, channels=3)
-
-# The height and width dimensions of `image` are data dependent, and
-# cannot be computed without executing the op.
-print(image.shape)
-==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])
-
-# We know that each image in this dataset is 28 x 28 pixels.
-image.set_shape([28, 28, 3])
-print(image.shape)
-==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
-```
-
-##### Args:
-
-
-* <b>`shape`</b>: A `TensorShape` representing the shape of this tensor.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `shape` is not compatible with the current shape of
- this tensor.
-
-
-- - -
-
-#### `tf.Tensor.shape` {#Tensor.shape}
-
-Returns the `TensorShape` that represents the shape of this tensor.
-
-The shape is computed using shape inference functions that are
-registered in the Op for each `Operation`. See
-[`TensorShape`](../../api_docs/python/framework.md#TensorShape)
-for more details of what a shape represents.
-
-The inferred shape of a tensor is used to provide shape
-information without having to launch the graph in a session. This
-can be used for debugging, and providing early error messages. For
-example:
-
-```python
-c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
-
-print(c.shape)
-==> TensorShape([Dimension(2), Dimension(3)])
-
-d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
-
-print(d.shape)
-==> TensorShape([Dimension(4), Dimension(2)])
-
-# Raises a ValueError, because `c` and `d` do not have compatible
-# inner dimensions.
-e = tf.matmul(c, d)
-
-f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
-
-print(f.shape)
-==> TensorShape([Dimension(3), Dimension(4)])
-```
-
-In some cases, the inferred shape may have unknown dimensions. If
-the caller has additional information about the values of these
-dimensions, `Tensor.set_shape()` can be used to augment the
-inferred shape.
-
-##### Returns:
-
- A `TensorShape` representing the shape of this tensor.
-
-
-- - -
-
-#### `tf.Tensor.value_index` {#Tensor.value_index}
-
-The index of this tensor in the outputs of its `Operation`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Variable.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Variable.from_proto.md
deleted file mode 100644
index e4ab071c59..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.Variable.from_proto.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.Variable.from_proto(variable_def, import_scope=None)` {#Variable.from_proto}
-
-Returns a `Variable` object created from `variable_def`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.accumulate_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.accumulate_n.md
deleted file mode 100644
index 7b558a4868..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.accumulate_n.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)` {#accumulate_n}
-
-Returns the element-wise sum of a list of tensors.
-
-Optionally, pass `shape` and `tensor_dtype` for shape and type checking,
-otherwise, these are inferred.
-
-NOTE: This operation is not differentiable and cannot be used if inputs depend
-on trainable variables. Please use `tf.add_n` for such cases.
-
-For example:
-
-```python
-# tensor 'a' is [[1, 2], [3, 4]]
-# tensor `b` is [[5, 0], [0, 6]]
-tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]
-
-# Explicitly pass shape and type
-tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
- ==> [[7, 4], [6, 14]]
-```
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of `Tensor` objects, each with same shape and type.
-* <b>`shape`</b>: Shape of elements of `inputs`.
-* <b>`tensor_dtype`</b>: The type of `inputs`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of same shape and type as the elements of `inputs`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `inputs` don't all have same shape and dtype or the shape
- cannot be inferred.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.all_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.all_variables.md
deleted file mode 100644
index 1badf0e5c5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.all_variables.md
+++ /dev/null
@@ -1,8 +0,0 @@
-### `tf.all_variables(*args, **kwargs)` {#all_variables}
-
-See `tf.global_variables`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Please use tf.global_variables instead.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_less_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_less_equal.md
deleted file mode 100644
index 3671bc94df..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_less_equal.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.assert_less_equal(x, y, data=None, summarize=None, message=None, name=None)` {#assert_less_equal}
-
-Assert the condition `x <= y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_less_equal(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] <= y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_less_equal"
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x <= y` is False.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_rank_at_least.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_rank_at_least.md
deleted file mode 100644
index 380ab0af74..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assert_rank_at_least.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.assert_rank_at_least(x, rank, data=None, summarize=None, message=None, name=None)` {#assert_rank_at_least}
-
-Assert `x` has rank equal to `rank` or higher.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_rank_at_least(x, 2)]):
- output = tf.reduce_sum(x)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`rank`</b>: Scalar `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional).
- Defaults to "assert_rank_at_least".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` has specified rank or higher.
- If static checks determine `x` has correct rank, a `no_op` is returned.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If static checks determine `x` has wrong rank.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assign.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assign.md
deleted file mode 100644
index f72385be60..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.assign.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.assign(ref, value, validate_shape=None, use_locking=None, name=None)` {#assign}
-
-Update 'ref' by assigning 'value' to it.
-
-This operation outputs "ref" after the assignment is done.
-This makes it easier to chain operations that need to use the reset value.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`.
- Should be from a `Variable` node. May be uninitialized.
-* <b>`value`</b>: A `Tensor`. Must have the same type as `ref`.
- The value to be assigned to the variable.
-* <b>`validate_shape`</b>: An optional `bool`. Defaults to `True`.
- If true, the operation will validate that the shape
- of 'value' matches the shape of the Tensor being assigned to. If false,
- 'ref' will take on the shape of 'value'.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `True`.
- If True, the assignment will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as "ref". Returned as a convenience for operations that want
- to use the new value after the variable has been reset.
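-
-A minimal sketch of chaining on the returned value (the comment shows what
-the run would produce):
-
-```python
-v = tf.Variable(0)
-assign_op = tf.assign(v, 10)
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(assign_op))  # ==> 10, the value of `v` after the update
-```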
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_to_space.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_to_space.md
deleted file mode 100644
index 3c3f85f869..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.batch_to_space.md
+++ /dev/null
@@ -1,100 +0,0 @@
-### `tf.batch_to_space(input, crops, block_size, name=None)` {#batch_to_space}
-
-BatchToSpace for 4-D tensors of type T.
-
-This is a legacy version of the more general BatchToSpaceND.
-
-Rearranges (permutes) data from batch into blocks of spatial data, followed by
-cropping. This is the reverse transformation of SpaceToBatch. More specifically,
-this op outputs a copy of the input tensor where values from the `batch`
-dimension are moved in spatial blocks to the `height` and `width` dimensions,
-followed by cropping along the `height` and `width` dimensions.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. 4-D tensor with shape
- `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size,
- depth]`. Note that the batch size of the input tensor must be divisible by
- `block_size * block_size`.
-* <b>`crops`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies
- how many elements to crop from the intermediate result across the spatial
- dimensions as follows:
-
- crops = [[crop_top, crop_bottom], [crop_left, crop_right]]
-
-* <b>`block_size`</b>: An `int` that is `>= 2`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- 4-D with shape `[batch, height, width, depth]`, where:
-
- height = height_pad - crop_top - crop_bottom
- width = width_pad - crop_left - crop_right
-
- The attr `block_size` must be greater than one. It indicates the block size.
-
- Some examples:
-
- (1) For the following input of shape `[4, 1, 1, 1]` and block_size of 2:
-
- ```prettyprint
- [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
- ```
-
- The output tensor has shape `[1, 2, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [2]], [[3], [4]]]]
- ```
-
- (2) For the following input of shape `[4, 1, 1, 3]` and block_size of 2:
-
- ```prettyprint
- [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
- ```
-
- The output tensor has shape `[1, 2, 2, 3]` and value:
-
- ```prettyprint
- x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
- ```
-
- (3) For the following input of shape `[4, 2, 2, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [3]], [[9], [11]]],
- [[[2], [4]], [[10], [12]]],
- [[[5], [7]], [[13], [15]]],
- [[[6], [8]], [[14], [16]]]]
- ```
-
- The output tensor has shape `[1, 4, 4, 1]` and value:
-
- ```prettyprint
- x = [[[1], [2], [3], [4]],
- [[5], [6], [7], [8]],
- [[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]
- ```
-
- (4) For the following input of shape `[8, 1, 2, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
- [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
- ```
-
- The output tensor has shape `[2, 2, 4, 1]` and value:
-
-  ```prettyprint
-  x = [[[[1], [2], [3], [4]],
-        [[5], [6], [7], [8]]],
-       [[[9], [10], [11], [12]],
-        [[13], [14], [15], [16]]]]
-  ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant_initializer.md
deleted file mode 100644
index fa95f791bf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.constant_initializer.md
+++ /dev/null
@@ -1,86 +0,0 @@
-Initializer that generates tensors with constant values.
-
-The resulting tensor is populated with values of type `dtype`, as
-specified by the argument `value`, following the desired `shape` of the
-new tensor (see examples below).
-
-The argument `value` can be a constant value, or a list of values of type
-`dtype`. If `value` is a list, then the length of the list must be less
-than or equal to the number of elements implied by the desired shape of the
-tensor. In the case where the total number of elements in `value` is less
-than the number of elements required by the tensor shape, the last element
-in `value` will be used to fill the remaining entries. If the total number of
-elements in `value` is greater than the number of elements required by the
-tensor shape, the initializer will raise a `ValueError`.
-
-Args:
- value: A Python scalar, list of values, or a N-dimensional numpy array. All
- elements of the initialized variable will be set to the corresponding
- value in the `value` argument.
- dtype: The data type.
- verify_shape: Boolean that enables verification of the shape of `value`. If
- `True`, the initializer will throw an error if the shape of `value` is not
- compatible with the shape of the initialized tensor.
-
-Examples:
-  The following example can be rewritten using a `numpy.ndarray` instead
-  of the `value` list (optionally reshaped), as shown in the two commented
-  lines below the `value` list initialization.
-
-```python
- >>> import numpy as np
- >>> import tensorflow as tf
-
- >>> value = [0, 1, 2, 3, 4, 5, 6, 7]
- >>> # value = np.array(value)
- >>> # value = value.reshape([2, 4])
- >>> init = tf.constant_initializer(value)
-
- >>> print('fitting shape:')
- >>> with tf.Session():
- >>> x = tf.get_variable('x', shape=[2, 4], initializer=init)
- >>> x.initializer.run()
- >>> print(x.eval())
-
- fitting shape:
- [[ 0. 1. 2. 3.]
- [ 4. 5. 6. 7.]]
-
- >>> print('larger shape:')
- >>> with tf.Session():
- >>> x = tf.get_variable('x', shape=[3, 4], initializer=init)
- >>> x.initializer.run()
- >>> print(x.eval())
-
- larger shape:
- [[ 0. 1. 2. 3.]
- [ 4. 5. 6. 7.]
- [ 7. 7. 7. 7.]]
-
- >>> print('smaller shape:')
- >>> with tf.Session():
- >>> x = tf.get_variable('x', shape=[2, 3], initializer=init)
-
- ValueError: Too many elements provided. Needed at most 6, but received 8
-
- >>> print('shape verification:')
- >>> init_verify = tf.constant_initializer(value, verify_shape=True)
- >>> with tf.Session():
- >>> x = tf.get_variable('x', shape=[3, 4], initializer=init_verify)
-
- TypeError: Expected Tensor's shape: (3, 4), got (8,).
-```
-- - -
-
-#### `tf.constant_initializer.__call__(shape, dtype=None, partition_info=None)` {#constant_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.constant_initializer.__init__(value=0, dtype=tf.float32, verify_shape=False)` {#constant_initializer.__init__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.md
deleted file mode 100644
index 0bee637f4d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.md
+++ /dev/null
@@ -1,58 +0,0 @@
-Base Class for Tensor-like objects that emit stochastic values.
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.__init__()` {#BaseStochasticTensor.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.dtype` {#BaseStochasticTensor.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.graph` {#BaseStochasticTensor.graph}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.loss(sample_loss)` {#BaseStochasticTensor.loss}
-
-Returns the term to add to the surrogate loss.
-
-This method is called by `surrogate_loss`. The input `sample_loss` should
-have already had `stop_gradient` applied to it. This is because the
-surrogate_loss usually provides a Monte Carlo sample term of the form
-`differentiable_surrogate * sample_loss` where `sample_loss` is considered
-constant with respect to the input for purposes of the gradient.
-
-##### Args:
-
-
-* <b>`sample_loss`</b>: `Tensor`, sample loss downstream of this `StochasticTensor`.
-
-##### Returns:
-
- Either `None` or a `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.name` {#BaseStochasticTensor.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.BaseStochasticTensor.value(name=None)` {#BaseStochasticTensor.value}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.bayesflow.variational_inference.elbo.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.bayesflow.variational_inference.elbo.md
deleted file mode 100644
index b4fbbf8aad..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.bayesflow.variational_inference.elbo.md
+++ /dev/null
@@ -1,71 +0,0 @@
-### `tf.contrib.bayesflow.variational_inference.elbo(log_likelihood, variational_with_prior=None, keep_batch_dim=True, form=None, name='ELBO')` {#elbo}
-
-Evidence Lower BOund. `log p(x) >= ELBO`.
-
-Optimization objective for inference of hidden variables by variational
-inference.
-
-This function is meant to be used in conjunction with `StochasticTensor`.
-The user should build out the inference network, using `StochasticTensor`s
-as latent variables, and the generative network. `elbo` at minimum needs
-`p(x|Z)` and assumes that all `StochasticTensor`s upstream of `p(x|Z)` are
-the variational distributions. Use `register_prior` to register `Distribution`
-priors for each `StochasticTensor`. Alternatively, pass in
-`variational_with_prior` specifying all variational distributions and their
-priors.
-
-Mathematical details:
-
-```
-log p(x) = log \int p(x, Z) dZ
- = log \int \frac {q(Z)p(x, Z)}{q(Z)} dZ
- = log E_q[\frac {p(x, Z)}{q(Z)}]
- >= E_q[log \frac {p(x, Z)}{q(Z)}] = L[q; p, x] # ELBO
-
-L[q; p, x] = E_q[log p(x|Z)p(Z)] - E_q[log q(Z)]
- = E_q[log p(x|Z)p(Z)] + H[q] (1)
- = E_q[log p(x|Z)] - KL(q || p) (2)
-
-H - Entropy
-KL - Kullback-Leibler divergence
-```
-
-See section 2.2 of Stochastic Variational Inference by Hoffman et al. for
-more, including the ELBO's equivalence to minimizing `KL(q(Z)||p(Z|x))`
-in the fully Bayesian setting. https://arxiv.org/pdf/1206.7051.pdf.
-
-`form` specifies which form of the ELBO is used. `form=ELBOForms.default`
-tries, in order of preference: analytic KL, analytic entropy, sampling.
-
-Multiple entries in the `variational_with_prior` dict imply a factorization,
-e.g. `q(Z) = q(z1)q(z2)q(z3)`.
-
-##### Args:
-
-
-* <b>`log_likelihood`</b>: `Tensor` log p(x|Z).
-* <b>`variational_with_prior`</b>: dict from `StochasticTensor` q(Z) to
- `Distribution` p(Z). If `None`, defaults to all `StochasticTensor`
- objects upstream of `log_likelihood` with priors registered with
- `register_prior`.
-* <b>`keep_batch_dim`</b>: bool. Whether to keep the batch dimension when summing
- entropy/KL term. When the sample is per data point, this should be True;
- otherwise (e.g. in a Bayesian NN), this should be False.
-* <b>`form`</b>: ELBOForms constant. Controls how the ELBO is computed. Defaults to
- ELBOForms.default.
-* <b>`name`</b>: name to prefix ops with.
-
-##### Returns:
-
- `Tensor` ELBO of the same type and shape as `log_likelihood`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if variationals in `variational_with_prior` are not
- `StochasticTensor`s or if priors are not `Distribution`s.
-* <b>`TypeError`</b>: if form is not a valid ELBOForms constant.
-* <b>`ValueError`</b>: if `variational_with_prior` is None and there are no
- `StochasticTensor`s upstream of `log_likelihood`.
-* <b>`ValueError`</b>: if any variational does not have a prior passed or registered.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.crf.crf_log_likelihood.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.crf.crf_log_likelihood.md
deleted file mode 100644
index a1f7bd2033..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.crf.crf_log_likelihood.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.contrib.crf.crf_log_likelihood(inputs, tag_indices, sequence_lengths, transition_params=None)` {#crf_log_likelihood}
-
-Computes the log-likelihood of tag sequences in a CRF.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A [batch_size, max_seq_len, num_tags] tensor of unary potentials
- to use as input to the CRF layer.
-* <b>`tag_indices`</b>: A [batch_size, max_seq_len] matrix of tag indices for which we
- compute the log-likelihood.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`transition_params`</b>: A [num_tags, num_tags] transition matrix, if available.
-
-##### Returns:
-
-
-* <b>`log_likelihood`</b>: A scalar containing the log-likelihood of the given sequence
- of tag indices.
-* <b>`transition_params`</b>: A [num_tags, num_tags] transition matrix. This is either
- provided by the caller or created in this function.
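-
-A minimal training sketch (the shapes here are illustrative assumptions:
-`batch_size=2`, `max_seq_len=3`, `num_tags=4`):
-
-```python
-unary_scores = tf.random_normal([2, 3, 4])
-tags = tf.constant([[0, 1, 2], [3, 2, 1]])
-lengths = tf.constant([3, 3])
-log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
-    unary_scores, tags, lengths)
-loss = tf.reduce_mean(-log_likelihood)  # minimize negative log-likelihood
-```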
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormalDiag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormalDiag.md
deleted file mode 100644
index c0aec27082..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormalDiag.md
+++ /dev/null
@@ -1,771 +0,0 @@
-The multivariate normal distribution on `R^k`.
-
-The Multivariate Normal distribution is defined over `R^k` and parameterized
-by a (batch of) length-`k` `loc` vector (aka "mu") and a (batch of) `k x k`
-`scale` matrix; `covariance = scale @ scale.T` where `@` denotes
-matrix-multiplication.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; loc, scale) = exp(-0.5 ||y||**2) / Z,
-y = inv(scale) @ (x - loc),
-Z = (2 pi)**(0.5 k) |det(scale)|,
-```
-
-where:
-
-* `loc` is a vector in `R^k`,
-* `scale` is a linear operator in `R^{k x k}`, `cov = scale @ scale.T`,
-* `Z` denotes the normalization constant, and,
-* `||y||**2` denotes the squared Euclidean norm of `y`.
-
-A (non-batch) `scale` matrix is:
-
-```none
-scale = diag(scale_diag + scale_identity_multiplier * ones(k))
-```
-
-where:
-
-* `scale_diag.shape = [k]`, and,
-* `scale_identity_multiplier.shape = []`.
-
-Additional leading dimensions (if any) will index batches.
-
-If both `scale_diag` and `scale_identity_multiplier` are `None`, then
-`scale` is the Identity matrix.
-
-The MultivariateNormal distribution is a member of the [location-scale
-family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ MultivariateNormal(loc=0, scale=1) # Identity scale, zero shift.
-Y = scale @ X + loc
-```
-
-#### Examples
-
-```python
-ds = tf.contrib.distributions
-
-# Initialize a single 2-variate Gaussian.
-mvn = ds.MultivariateNormalDiag(
- loc=[1., -1],
- scale_diag=[1, 2.])
-
-mvn.mean().eval()
-# ==> [1., -1]
-
-mvn.stddev().eval()
-# ==> [1., 2]
-
-# Evaluate this on an observation in `R^2`, returning a scalar.
-mvn.prob([-1., 0]).eval() # shape: []
-
-# Initialize a 3-batch, 2-variate scaled-identity Gaussian.
-mvn = ds.MultivariateNormalDiag(
- loc=[1., -1],
- scale_identity_multiplier=[1, 2., 3])
-
-mvn.mean().eval() # shape: [3, 2]
-# ==> [[1., -1]
-# [1, -1],
-# [1, -1]]
-
-mvn.stddev().eval() # shape: [3, 2]
-# ==> [[1., 1],
-# [2, 2],
-# [3, 3]]
-
-# Evaluate this on an observation in `R^2`, returning a length-3 vector.
-mvn.prob([-1., 0]).eval() # shape: [3]
-
-# Initialize a 2-batch of 3-variate Gaussians.
-mvn = ds.MultivariateNormalDiag(
-    loc=[[1., 2, 3],
-         [11, 22, 33]],          # shape: [2, 3]
-    scale_diag=[[1., 2, 3],
-                [0.5, 1, 1.5]])  # shape: [2, 3]
-
-# Evaluate this on two observations, each in `R^3`, returning a length-2
-# vector.
-x = [[-1., 0, 1],
- [-11, 0, 11.]] # shape: [2, 3].
-mvn.prob(x).eval() # shape: [2]
-```
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.__init__(loc=None, scale_diag=None, scale_identity_multiplier=None, validate_args=False, allow_nan_stats=True, name='MultivariateNormalDiag')` {#MultivariateNormalDiag.__init__}
-
-Construct Multivariate Normal distribution on `R^k`.
-
-The `batch_shape` is the broadcast shape between `loc` and `scale`
-arguments.
-
-The `event_shape` is given by the last dimension of `loc` or the last
-dimension of the matrix implied by `scale`.
-
-Recall that `covariance = scale @ scale.T`. A (non-batch) `scale` matrix is:
-
-```none
-scale = diag(scale_diag + scale_identity_multiplier * ones(k))
-```
-
-where:
-
-* `scale_diag.shape = [k]`, and,
-* `scale_identity_multiplier.shape = []`.
-
-Additional leading dimensions (if any) will index batches.
-
-If both `scale_diag` and `scale_identity_multiplier` are `None`, then
-`scale` is the Identity matrix.
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating-point `Tensor`. If this is set to `None`, `loc` is
- implicitly `0`. When specified, may have shape `[B1, ..., Bb, k]` where
- `b >= 0` and `k` is the event size.
-* <b>`scale_diag`</b>: Non-zero, floating-point `Tensor` representing a diagonal
- matrix added to `scale`. May have shape `[B1, ..., Bb, k]`, `b >= 0`,
- and characterizes `b`-batches of `k x k` diagonal matrices added to
- `scale`. When both `scale_identity_multiplier` and `scale_diag` are
- `None` then `scale` is the `Identity`.
-* <b>`scale_identity_multiplier`</b>: Non-zero, floating-point `Tensor` representing
- a scaled-identity-matrix added to `scale`. May have shape
- `[B1, ..., Bb]`, `b >= 0`, and characterizes `b`-batches of scaled
- `k x k` identity matrices added to `scale`. When both
- `scale_identity_multiplier` and `scale_diag` are `None` then `scale` is
- the `Identity`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if at most `scale_identity_multiplier` is specified.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.allow_nan_stats` {#MultivariateNormalDiag.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.batch_shape` {#MultivariateNormalDiag.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.batch_shape_tensor(name='batch_shape_tensor')` {#MultivariateNormalDiag.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.bijector` {#MultivariateNormalDiag.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.cdf(value, name='cdf')` {#MultivariateNormalDiag.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.copy(**override_parameters_kwargs)` {#MultivariateNormalDiag.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.covariance(name='covariance')` {#MultivariateNormalDiag.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
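-
-For this diagonal parameterization, a small sketch of what `covariance()`
-would return, since `covariance = scale @ scale.T` (with
-`ds = tf.contrib.distributions` as in the class example):
-
-```python
-mvn = ds.MultivariateNormalDiag(loc=[0., 0], scale_diag=[1., 2])
-mvn.covariance()  # ==> [[1., 0.], [0., 4.]]
-```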
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.det_covariance(name='det_covariance')` {#MultivariateNormalDiag.det_covariance}
-
-Determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.distribution` {#MultivariateNormalDiag.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.dtype` {#MultivariateNormalDiag.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.entropy(name='entropy')` {#MultivariateNormalDiag.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.event_shape` {#MultivariateNormalDiag.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.event_shape_tensor(name='event_shape_tensor')` {#MultivariateNormalDiag.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.is_continuous` {#MultivariateNormalDiag.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.is_scalar_batch(name='is_scalar_batch')` {#MultivariateNormalDiag.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.is_scalar_event(name='is_scalar_event')` {#MultivariateNormalDiag.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.loc` {#MultivariateNormalDiag.loc}
-
-The `loc` `Tensor` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.log_cdf(value, name='log_cdf')` {#MultivariateNormalDiag.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.log_det_covariance(name='log_det_covariance')` {#MultivariateNormalDiag.log_det_covariance}
-
-Log of determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.log_prob(value, name='log_prob')` {#MultivariateNormalDiag.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.log_survival_function(value, name='log_survival_function')` {#MultivariateNormalDiag.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.mean(name='mean')` {#MultivariateNormalDiag.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.mode(name='mode')` {#MultivariateNormalDiag.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.name` {#MultivariateNormalDiag.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#MultivariateNormalDiag.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.param_static_shapes(cls, sample_shape)` {#MultivariateNormalDiag.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.parameters` {#MultivariateNormalDiag.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.prob(value, name='prob')` {#MultivariateNormalDiag.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.reparameterization_type` {#MultivariateNormalDiag.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.sample(sample_shape=(), seed=None, name='sample')` {#MultivariateNormalDiag.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
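-
-For example, a sketch of the resulting shape (with `ds` as in the class
-example):
-
-```python
-mvn = ds.MultivariateNormalDiag(loc=[1., -1], scale_diag=[1., 2])
-s = mvn.sample([5])  # shape [5, 2]: sample_shape + batch_shape + event_shape
-```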
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.scale` {#MultivariateNormalDiag.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.stddev(name='stddev')` {#MultivariateNormalDiag.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.survival_function(value, name='survival_function')` {#MultivariateNormalDiag.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.validate_args` {#MultivariateNormalDiag.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiag.variance(name='variance')` {#MultivariateNormalDiag.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.md
deleted file mode 100644
index 0353b095bf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.md
+++ /dev/null
@@ -1,619 +0,0 @@
-MultivariateNormalDiag with `diag_stddev = softplus(diag_stddev)`.
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.__init__(loc, scale_diag, validate_args=False, allow_nan_stats=True, name='MultivariateNormalDiagWithSoftplusScale')` {#MultivariateNormalDiagWithSoftplusScale.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.allow_nan_stats` {#MultivariateNormalDiagWithSoftplusScale.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.batch_shape` {#MultivariateNormalDiagWithSoftplusScale.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.batch_shape_tensor(name='batch_shape_tensor')` {#MultivariateNormalDiagWithSoftplusScale.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.bijector` {#MultivariateNormalDiagWithSoftplusScale.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.cdf(value, name='cdf')` {#MultivariateNormalDiagWithSoftplusScale.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.copy(**override_parameters_kwargs)` {#MultivariateNormalDiagWithSoftplusScale.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.covariance(name='covariance')` {#MultivariateNormalDiagWithSoftplusScale.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.det_covariance(name='det_covariance')` {#MultivariateNormalDiagWithSoftplusScale.det_covariance}
-
-Determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.distribution` {#MultivariateNormalDiagWithSoftplusScale.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.dtype` {#MultivariateNormalDiagWithSoftplusScale.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.entropy(name='entropy')` {#MultivariateNormalDiagWithSoftplusScale.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.event_shape` {#MultivariateNormalDiagWithSoftplusScale.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.event_shape_tensor(name='event_shape_tensor')` {#MultivariateNormalDiagWithSoftplusScale.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.is_continuous` {#MultivariateNormalDiagWithSoftplusScale.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.is_scalar_batch(name='is_scalar_batch')` {#MultivariateNormalDiagWithSoftplusScale.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.is_scalar_event(name='is_scalar_event')` {#MultivariateNormalDiagWithSoftplusScale.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.loc` {#MultivariateNormalDiagWithSoftplusScale.loc}
-
-The `loc` `Tensor` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.log_cdf(value, name='log_cdf')` {#MultivariateNormalDiagWithSoftplusScale.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
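-
-A hedged illustration of the tail behavior, using a scalar `Normal`
-(`loc`/`scale` argument names assumed) rather than this class:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-normal = ds.Normal(loc=0., scale=1.)
-# Deep in the left tail, cdf(-40.) underflows to 0., so the naive
-# log(cdf) is -inf, while log_cdf can remain finite.
-naive = tf.log(normal.cdf(-40.))
-stable = normal.log_cdf(-40.)
-```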
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.log_det_covariance(name='log_det_covariance')` {#MultivariateNormalDiagWithSoftplusScale.log_det_covariance}
-
-Log of determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.log_prob(value, name='log_prob')` {#MultivariateNormalDiagWithSoftplusScale.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
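-
-A shape sketch of the two broadcast cases (the `scale_diag` argument name
-is an assumption):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# batch_shape [2], event_shape [3].
-mvn = ds.MultivariateNormalDiagWithSoftplusScale(
-    loc=[[0., 0., 0.], [1., 1., 1.]],
-    scale_diag=[[1., 1., 1.], [1., 1., 1.]])
-# Case 1: one event broadcast against the batch -> shape [2].
-lp1 = mvn.log_prob([0., 0., 0.])
-# Case 2: [M1] + batch + event input -> shape [5, 2].
-lp2 = mvn.log_prob(tf.zeros([5, 2, 3]))
-```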
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.log_survival_function(value, name='log_survival_function')` {#MultivariateNormalDiagWithSoftplusScale.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.mean(name='mean')` {#MultivariateNormalDiagWithSoftplusScale.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.mode(name='mode')` {#MultivariateNormalDiagWithSoftplusScale.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.name` {#MultivariateNormalDiagWithSoftplusScale.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#MultivariateNormalDiagWithSoftplusScale.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
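-
-A hedged sketch with the scalar `Normal` class (its `loc`/`scale`
-parameter names are assumptions):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# Shapes each parameter must have so that sample() is [100]-shaped,
-# e.g. {'loc': <[100]>, 'scale': <[100]>}.
-shapes = ds.Normal.param_shapes([100])
-```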
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.param_static_shapes(cls, sample_shape)` {#MultivariateNormalDiagWithSoftplusScale.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.parameters` {#MultivariateNormalDiagWithSoftplusScale.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.prob(value, name='prob')` {#MultivariateNormalDiagWithSoftplusScale.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.reparameterization_type` {#MultivariateNormalDiagWithSoftplusScale.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.sample(sample_shape=(), seed=None, name='sample')` {#MultivariateNormalDiagWithSoftplusScale.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
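-
-A shape sketch (the `scale_diag` argument name is an assumption):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# batch_shape [2], event_shape [3].
-mvn = ds.MultivariateNormalDiagWithSoftplusScale(
-    loc=[[0., 0., 0.], [1., 1., 1.]],
-    scale_diag=[[1., 1., 1.], [1., 1., 1.]])
-# sample_shape + batch_shape + event_shape = [5, 2, 3].
-samples = mvn.sample(5)
-```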
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.scale` {#MultivariateNormalDiagWithSoftplusScale.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.stddev(name='stddev')` {#MultivariateNormalDiagWithSoftplusScale.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.survival_function(value, name='survival_function')` {#MultivariateNormalDiagWithSoftplusScale.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.validate_args` {#MultivariateNormalDiagWithSoftplusScale.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagWithSoftplusScale.variance(name='variance')` {#MultivariateNormalDiagWithSoftplusScale.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.QuantizedDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.QuantizedDistribution.md
deleted file mode 100644
index 42bdb1eb24..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.QuantizedDistribution.md
+++ /dev/null
@@ -1,740 +0,0 @@
-Distribution representing the quantization `Y = ceiling(X)`.
-
-#### Definition in terms of sampling.
-
-```
-1. Draw X
-2. Set Y <-- ceiling(X)
-3. If Y < low, reset Y <-- low
-4. If Y > high, reset Y <-- high
-5. Return Y
-```
-
-#### Definition in terms of the probability mass function.
-
-Given scalar random variable `X`, we define a discrete random variable `Y`
-supported on the integers as follows:
-
-```
-P[Y = j] := P[X <= low], if j == low,
-            := P[X > high - 1], if j == high,
- := 0, if j < low or j > high,
- := P[j - 1 < X <= j], all other j.
-```
-
-Conceptually, without cutoffs, the quantization process partitions the real
-line `R` into half open intervals, and identifies an integer `j` with the
-right endpoints:
-
-```
-R = ... (-2, -1](-1, 0](0, 1](1, 2](2, 3](3, 4] ...
-j = ... -1 0 1 2 3 4 ...
-```
-
-`P[Y = j]` is the mass of `X` within the `jth` interval.
-If `low = 0` and `high = 2`, then the intervals are redrawn
-and `j` is re-assigned:
-
-```
-R = (-infty, 0](0, 1](1, infty)
-j = 0 1 2
-```
-
-`P[Y = j]` is still the mass of `X` within the `jth` interval.
-
-#### Caveats
-
-Since evaluation of each `P[Y = j]` involves a cdf evaluation (rather than
-a closed form function such as for a Poisson), computations such as mean and
-entropy are better done with samples or approximations, and are not
-implemented by this class.
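-
-A minimal construction sketch using the signature documented below (the
-base `Normal` class and its `loc`/`scale` argument names are assumptions):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# Y = ceiling(X) for X ~ Normal(0, 1), clipped to {-3, ..., 3}.
-quantized = ds.QuantizedDistribution(
-    distribution=ds.Normal(loc=0., scale=1.),
-    low=-3., high=3.)
-pmf = quantized.prob([-3., 0., 3.])  # mass only at whole numbers
-```
-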
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.__init__(distribution, low=None, high=None, validate_args=False, name='QuantizedDistribution')` {#QuantizedDistribution.__init__}
-
-Construct a Quantized Distribution representing `Y = ceiling(X)`.
-
-Some properties are inherited from the distribution defining `X`. For
-example, `allow_nan_stats` is determined for this `QuantizedDistribution`
-by reading the base `distribution`.
-
-##### Args:
-
-
-* <b>`distribution`</b>: The base distribution class to transform. Typically an
- instance of `Distribution`.
-* <b>`low`</b>: `Tensor` with same `dtype` as this distribution and shape
- able to be added to samples. Should be a whole number. Default `None`.
- If provided, base distribution's `prob` should be defined at
- `low`.
-* <b>`high`</b>: `Tensor` with same `dtype` as this distribution and shape
- able to be added to samples. Should be a whole number. Default `None`.
- If provided, base distribution's `prob` should be defined at
- `high - 1`.
- `high` must be strictly greater than `low`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `distribution` is not an instance of `Distribution`,
-    or is not continuous.
-* <b>`NotImplementedError`</b>: If the base distribution does not implement `cdf`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.allow_nan_stats` {#QuantizedDistribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g., the mean of
-Student's T for `df = 1` is undefined (there is no clear way to say it is
-either + or - infinity), so the variance `E[(X - mean)**2]` is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.batch_shape` {#QuantizedDistribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.batch_shape_tensor(name='batch_shape_tensor')` {#QuantizedDistribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.cdf(value, name='cdf')` {#QuantizedDistribution.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-cdf(y) := P[Y <= y]
- = 1, if y >= high,
- = 0, if y < low,
- = P[X <= y], otherwise.
-```
-
-Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
-This dictates that fractional `y` are first floored to a whole number, and
-then the above definition applies.
-
-The base distribution's `cdf` method must be defined on `y - 1`.
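-
-A small sketch of the flooring rule (same assumed quantized-normal setup
-as in the class example above):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-quantized = ds.QuantizedDistribution(
-    distribution=ds.Normal(loc=0., scale=1.), low=-3., high=3.)
-# cdf is a step function: fractional arguments are floored first,
-# so these two evaluations should agree.
-a = quantized.cdf(1.7)
-b = quantized.cdf(1.0)
-```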
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.copy(**override_parameters_kwargs)` {#QuantizedDistribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.covariance(name='covariance')` {#QuantizedDistribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.distribution` {#QuantizedDistribution.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.dtype` {#QuantizedDistribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.entropy(name='entropy')` {#QuantizedDistribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.event_shape` {#QuantizedDistribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.event_shape_tensor(name='event_shape_tensor')` {#QuantizedDistribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.is_continuous` {#QuantizedDistribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.is_scalar_batch(name='is_scalar_batch')` {#QuantizedDistribution.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.is_scalar_event(name='is_scalar_event')` {#QuantizedDistribution.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.log_cdf(value, name='log_cdf')` {#QuantizedDistribution.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-cdf(y) := P[Y <= y]
- = 1, if y >= high,
- = 0, if y < low,
- = P[X <= y], otherwise.
-```
-
-Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
-This dictates that fractional `y` are first floored to a whole number, and
-then the above definition applies.
-
-The base distribution's `log_cdf` method must be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.log_prob(value, name='log_prob')` {#QuantizedDistribution.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-P[Y = y] := P[X <= low], if y == low,
-         := P[X > high - 1], if y == high,
-         := 0, if y < low or y > high,
-         := P[y - 1 < X <= y], all other y.
-```
-
-
-The base distribution's `log_cdf` method must be defined on `y - 1`. If the
-base distribution has a `log_survival_function` method, results will be more
-accurate for large values of `y`, and in this case the `log_survival_function`
-must also be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.log_survival_function(value, name='log_survival_function')` {#QuantizedDistribution.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-survival_function(y) := P[Y > y]
- = 0, if y >= high,
- = 1, if y < low,
- = P[X <= y], otherwise.
-```
-
-Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
-This dictates that fractional `y` are first floored to a whole number, and
-then the above definition applies.
-
-The base distribution's `log_cdf` method must be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.mean(name='mean')` {#QuantizedDistribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.mode(name='mode')` {#QuantizedDistribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.name` {#QuantizedDistribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#QuantizedDistribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.param_static_shapes(cls, sample_shape)` {#QuantizedDistribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.parameters` {#QuantizedDistribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.prob(value, name='prob')` {#QuantizedDistribution.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-P[Y = y] := P[X <= low], if y == low,
-         := P[X > high - 1], if y == high,
-         := 0, if y < low or y > high,
-         := P[y - 1 < X <= y], all other y.
-```
-
-
-The base distribution's `cdf` method must be defined on `y - 1`. If the
-base distribution has a `survival_function` method, results will be more
-accurate for large values of `y`, and in this case the `survival_function` must
-also be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.reparameterization_type` {#QuantizedDistribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.sample(sample_shape=(), seed=None, name='sample')` {#QuantizedDistribution.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.stddev(name='stddev')` {#QuantizedDistribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.survival_function(value, name='survival_function')` {#QuantizedDistribution.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-
-Additional documentation from `QuantizedDistribution`:
-
-For whole numbers `y`,
-
-```
-survival_function(y) := P[Y > y]
- = 0, if y >= high,
- = 1, if y < low,
- = P[X <= y], otherwise.
-```
-
-Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
-This dictates that fractional `y` are first floored to a whole number, and
-then the above definition applies.
-
-The base distribution's `cdf` method must be defined on `y - 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.validate_args` {#QuantizedDistribution.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.QuantizedDistribution.variance(name='variance')` {#QuantizedDistribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.StudentT.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.StudentT.md
deleted file mode 100644
index 569e5caab3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.StudentT.md
+++ /dev/null
@@ -1,683 +0,0 @@
-Student's t-distribution with degrees of freedom `df`, location `loc`, and `scale` parameters.
-
-#### Mathematical details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; df, mu, sigma) = (1 + y**2 / df)**(-0.5 (df + 1)) / Z
-where,
-y = (x - mu) / sigma
-Z = abs(sigma) sqrt(df pi) Gamma(0.5 df) / Gamma(0.5 (df + 1))
-```
-
-where:
-* `loc = mu`,
-* `scale = sigma`,
-* `Z` is the normalization constant, and
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The StudentT distribution is a member of the [location-scale family](
-https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ StudentT(df, loc=0, scale=1)
-Y = loc + scale * X
-```
-
-Notice that `scale` has semantics more similar to standard deviation than
-variance. However, it is not actually the standard deviation; the Student's
-t-distribution standard deviation is `scale sqrt(df / (df - 2))` when `df > 2`.
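-
-A hedged sketch of the equivalence (sampling-based, not from the original
-file):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# Direct parameterization.
-dist = ds.StudentT(df=5., loc=2., scale=3.)
-
-# Location-scale construction: X ~ StudentT(df), Y = loc + scale * X.
-standard = ds.StudentT(df=5., loc=0., scale=1.)
-y = 2. + 3. * standard.sample(1000)  # distributed like dist.sample(1000)
-```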
-
-#### Examples
-
-Examples of initialization of one or a batch of distributions.
-
-```python
-# Define a single scalar Student t distribution.
-single_dist = tf.contrib.distributions.StudentT(df=3)
-
-# Evaluate the pdf at 1, returning a scalar Tensor.
-single_dist.prob(1.)
-
-# Define a batch of two scalar valued Student t's.
-# The first has degrees of freedom 2, mean 1, and scale 11.
-# The second has degrees of freedom 3, mean 2, and scale 22.
-multi_dist = tf.contrib.distributions.StudentT(df=[2, 3],
- loc=[1, 2.],
- scale=[11, 22.])
-
-# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
-# returning a length two tensor.
-multi_dist.prob([0, 1.5])
-
-# Get 3 samples, returning a 3 x 2 tensor.
-multi_dist.sample(3)
-```
-
-Arguments are broadcast when possible.
-
-```python
-# Define a batch of two Student's t distributions.
-# Both have df 2 and mean 1, but different scales.
-dist = tf.contrib.distributions.StudentT(df=2, loc=1, scale=[11, 22.])
-
-# Evaluate the pdf of both distributions on the same point, 3.0,
-# returning a length 2 tensor.
-dist.prob(3.0)
-```
-- - -
-
-#### `tf.contrib.distributions.StudentT.__init__(df, loc, scale, validate_args=False, allow_nan_stats=True, name='StudentT')` {#StudentT.__init__}
-
-Construct Student's t distributions.
-
-The distributions have degree of freedom `df`, mean `loc`, and scale
-`scale`.
-
-The parameters `df`, `loc`, and `scale` must be shaped in a way that
-supports broadcasting (e.g. `df + loc + scale` is a valid operation).
-
-##### Args:
-
-
-* <b>`df`</b>: Floating-point `Tensor`. The degrees of freedom of the
- distribution(s). `df` must contain only positive values.
-* <b>`loc`</b>: Floating-point `Tensor`. The mean(s) of the distribution(s).
-* <b>`scale`</b>: Floating-point `Tensor`. The scaling factor(s) for the
- distribution(s). Note that `scale` is not technically the standard
- deviation of this distribution but has semantics more similar to
- standard deviation than variance.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `loc` and `scale` have different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.allow_nan_stats` {#StudentT.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g., the mean of
-Student's T for `df = 1` is undefined (there is no clear way to say it is
-either + or - infinity), so the variance `E[(X - mean)**2]` is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.batch_shape` {#StudentT.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.batch_shape_tensor(name='batch_shape_tensor')` {#StudentT.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.cdf(value, name='cdf')` {#StudentT.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.copy(**override_parameters_kwargs)` {#StudentT.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.covariance(name='covariance')` {#StudentT.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.df` {#StudentT.df}
-
-Degrees of freedom in these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.dtype` {#StudentT.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.entropy(name='entropy')` {#StudentT.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.event_shape` {#StudentT.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.event_shape_tensor(name='event_shape_tensor')` {#StudentT.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.is_continuous` {#StudentT.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.is_scalar_batch(name='is_scalar_batch')` {#StudentT.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.is_scalar_event(name='is_scalar_event')` {#StudentT.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.loc` {#StudentT.loc}
-
-Locations of these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.log_cdf(value, name='log_cdf')` {#StudentT.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.log_prob(value, name='log_prob')` {#StudentT.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.log_survival_function(value, name='log_survival_function')` {#StudentT.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.mean(name='mean')` {#StudentT.mean}
-
-Mean.
-
-Additional documentation from `StudentT`:
-
-The mean of Student's T equals `loc` if `df > 1`, otherwise it is
-`NaN`. If `self.allow_nan_stats=False`, then an exception will be raised
-rather than returning `NaN`.
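-
-A hedged sketch of this interaction:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# df <= 1: the mean is undefined.
-dist = ds.StudentT(df=1., loc=0., scale=1., allow_nan_stats=True)
-mean = dist.mean()  # evaluates to NaN
-# With allow_nan_stats=False, evaluating mean() raises instead.
-```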
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.mode(name='mode')` {#StudentT.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.name` {#StudentT.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#StudentT.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.param_static_shapes(cls, sample_shape)` {#StudentT.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.parameters` {#StudentT.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.prob(value, name='prob')` {#StudentT.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.reparameterization_type` {#StudentT.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.sample(sample_shape=(), seed=None, name='sample')` {#StudentT.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0-D or 1-D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.scale` {#StudentT.scale}
-
-Scaling factors of these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.stddev(name='stddev')` {#StudentT.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.survival_function(value, name='survival_function')` {#StudentT.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.validate_args` {#StudentT.validate_args}
-
-Python `bool` indicating whether possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentT.variance(name='variance')` {#StudentT.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-
-Additional documentation from `StudentT`:
-
-The variance for Student's T equals
-
-```
-scale**2 * df / (df - 2), when df > 2
-infinity,                 when 1 < df <= 2
-NaN,                      when df <= 1
-```
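-
-A hedged numeric check of the `df > 2` branch:
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-dist = ds.StudentT(df=5., loc=0., scale=2.)
-# Expected: 2.**2 * 5. / (5. - 2.) ~= 6.67 (assuming the scale**2 factor).
-variance = dist.variance()
-```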
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.md
deleted file mode 100644
index e21e0cca75..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.md
+++ /dev/null
@@ -1,583 +0,0 @@
-StudentT with `df = floor(abs(df))` and `scale = softplus(scale)`.
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.__init__(df, loc, scale, validate_args=False, allow_nan_stats=True, name='StudentTWithAbsDfSoftplusScale')` {#StudentTWithAbsDfSoftplusScale.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.allow_nan_stats` {#StudentTWithAbsDfSoftplusScale.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g., the mean of
-Student's T for `df = 1` is undefined (there is no clear way to say it is
-either + or - infinity), so the variance `E[(X - mean)**2]` is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.batch_shape` {#StudentTWithAbsDfSoftplusScale.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.batch_shape_tensor(name='batch_shape_tensor')` {#StudentTWithAbsDfSoftplusScale.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.cdf(value, name='cdf')` {#StudentTWithAbsDfSoftplusScale.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.copy(**override_parameters_kwargs)` {#StudentTWithAbsDfSoftplusScale.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.covariance(name='covariance')` {#StudentTWithAbsDfSoftplusScale.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.df` {#StudentTWithAbsDfSoftplusScale.df}
-
-Degrees of freedom in these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.dtype` {#StudentTWithAbsDfSoftplusScale.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.entropy(name='entropy')` {#StudentTWithAbsDfSoftplusScale.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.event_shape` {#StudentTWithAbsDfSoftplusScale.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.event_shape_tensor(name='event_shape_tensor')` {#StudentTWithAbsDfSoftplusScale.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.is_continuous` {#StudentTWithAbsDfSoftplusScale.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.is_scalar_batch(name='is_scalar_batch')` {#StudentTWithAbsDfSoftplusScale.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.is_scalar_event(name='is_scalar_event')` {#StudentTWithAbsDfSoftplusScale.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.loc` {#StudentTWithAbsDfSoftplusScale.loc}
-
-Locations of these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.log_cdf(value, name='log_cdf')` {#StudentTWithAbsDfSoftplusScale.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.log_prob(value, name='log_prob')` {#StudentTWithAbsDfSoftplusScale.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.log_survival_function(value, name='log_survival_function')` {#StudentTWithAbsDfSoftplusScale.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.mean(name='mean')` {#StudentTWithAbsDfSoftplusScale.mean}
-
-Mean.
-
-Additional documentation from `StudentT`:
-
-The mean of Student's T equals `loc` if `df > 1`, otherwise it is
-`NaN`. If `self.allow_nan_stats=False`, then an exception will be raised
-rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.mode(name='mode')` {#StudentTWithAbsDfSoftplusScale.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.name` {#StudentTWithAbsDfSoftplusScale.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#StudentTWithAbsDfSoftplusScale.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.param_static_shapes(cls, sample_shape)` {#StudentTWithAbsDfSoftplusScale.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.parameters` {#StudentTWithAbsDfSoftplusScale.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.prob(value, name='prob')` {#StudentTWithAbsDfSoftplusScale.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.reparameterization_type` {#StudentTWithAbsDfSoftplusScale.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.sample(sample_shape=(), seed=None, name='sample')` {#StudentTWithAbsDfSoftplusScale.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.scale` {#StudentTWithAbsDfSoftplusScale.scale}
-
-Scaling factors of these Student's t distribution(s).
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.stddev(name='stddev')` {#StudentTWithAbsDfSoftplusScale.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.survival_function(value, name='survival_function')` {#StudentTWithAbsDfSoftplusScale.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.validate_args` {#StudentTWithAbsDfSoftplusScale.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.StudentTWithAbsDfSoftplusScale.variance(name='variance')` {#StudentTWithAbsDfSoftplusScale.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-
-Additional documentation from `StudentT`:
-
-The variance for Student's T equals
-
-```
-df / (df - 2), when df > 2
-infinity, when 1 < df <= 2
-NaN, when df <= 1
-```
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
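-- - -
-
-As a quick, hypothetical illustration of this class (the parameter values
-below are made up; as the class name suggests, `df` is passed through
-`tf.abs` and `scale` through `tf.nn.softplus`, so negative inputs are valid):
-
-```python
-ds = tf.contrib.distributions
-dist = ds.StudentTWithAbsDfSoftplusScale(df=-3., loc=0., scale=-1.)
-samples = dist.sample(5)            # Tensor of shape [5].
-log_probs = dist.log_prob(samples)  # Tensor of shape [5].
-```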
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.TransformedDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.TransformedDistribution.md
deleted file mode 100644
index 4c3d0fb0b2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.distributions.TransformedDistribution.md
+++ /dev/null
@@ -1,710 +0,0 @@
-A Transformed Distribution.
-
-A `TransformedDistribution` models `p(y)` given a base distribution `p(x)`,
-and a deterministic, invertible, differentiable transform, `Y = g(X)`. The
-transform is typically an instance of the `Bijector` class and the base
-distribution is typically an instance of the `Distribution` class.
-
-A `Bijector` is expected to implement the following functions:
-- `forward`,
-- `inverse`,
-- `inverse_log_det_jacobian`.
-The semantics of these functions are outlined in the `Bijector` documentation.
-
-We now describe how a `TransformedDistribution` alters the input/outputs of a
-`Distribution` associated with a random variable (rv) `X`.
-
-Write `cdf(Y=y)` for an absolutely continuous cumulative distribution function
-of random variable `Y`; write the probability density function `pdf(Y=y) :=
-d^k / (dy_1,...,dy_k) cdf(Y=y)` for its derivative with respect to `Y` evaluated at
-`y`. Assume that `Y = g(X)` where `g` is a deterministic diffeomorphism,
-i.e., a non-random, continuous, differentiable, and invertible function.
-Write the inverse of `g` as `X = g^{-1}(Y)` and `(J o g)(x)` for the Jacobian
-of `g` evaluated at `x`.
-
-A `TransformedDistribution` implements the following operations:
-
- * `sample`:
-
- Mathematically:
-
- ```none
- Y = g(X)
- ```
-
- Programmatically:
-
- ```python
- return bijector.forward(distribution.sample(...))
- ```
-
- * `log_prob`:
-
- Mathematically:
-
- ```none
- (log o pdf)(Y=y) = (log o pdf o g^{-1})(y) +
- (log o abs o det o J o g^{-1})(y)
- ```
-
- Programmatically:
-
- ```python
- return (distribution.log_prob(bijector.inverse(y)) +
- bijector.inverse_log_det_jacobian(y))
- ```
-
- * `log_cdf`:
-
- Mathematically:
-
- ```none
- (log o cdf)(Y=y) = (log o cdf o g^{-1})(y)
- ```
-
- Programmatically:
-
- ```python
- return distribution.log_cdf(bijector.inverse(y))
- ```
-
- * and similarly for: `cdf`, `prob`, `log_survival_function`,
- `survival_function`.
-
-A simple example constructing a Log-Normal distribution from a Normal
-distribution:
-
-```python
-ds = tf.contrib.distributions
-log_normal = ds.TransformedDistribution(
- distribution=ds.Normal(loc=mu, scale=sigma),
- bijector=ds.bijector.Exp(),
- name="LogNormalTransformedDistribution")
-```
-
-A `LogNormal` made from callables:
-
-```python
-ds = tf.contrib.distributions
-log_normal = ds.TransformedDistribution(
- distribution=ds.Normal(loc=mu, scale=sigma),
- bijector=ds.bijector.Inline(
- forward_fn=tf.exp,
- inverse_fn=tf.log,
- inverse_log_det_jacobian_fn=(
- lambda y: -tf.reduce_sum(tf.log(y), axis=-1))),
- name="LogNormalTransformedDistribution")
-```
-
-Another example constructing a Normal from a StandardNormal:
-
-```python
-ds = tf.contrib.distributions
-normal = ds.TransformedDistribution(
- distribution=ds.Normal(loc=0, scale=1),
- bijector=ds.bijector.ScaleAndShift(loc=mu, scale=sigma, event_ndims=0),
- name="NormalTransformedDistribution")
-```
-
-A `TransformedDistribution`'s batch- and event-shape are implied by the base
-distribution unless explicitly overridden by `batch_shape` or `event_shape`
-arguments. Specifying an overriding `batch_shape` (`event_shape`) is
-permitted only if the base distribution has scalar batch-shape (event-shape).
-The bijector is applied to the distribution as if the distribution possessed
-the overridden shape(s). The following example demonstrates how to construct a
-multivariate Normal as a `TransformedDistribution`.
-
-```python
-bs = tf.contrib.distributions.bijector
-ds = tf.contrib.distributions
-# We will create two MVNs with batch_shape = event_shape = 2.
-mean = [[-1., 0], # batch:0
- [0., 1]] # batch:1
-chol_cov = [[[1., 0],
- [0, 1]], # batch:0
- [[1, 0],
- [2, 2]]] # batch:1
-mvn1 = ds.TransformedDistribution(
- distribution=ds.Normal(loc=0., scale=1.),
- bijector=bs.Affine(shift=mean, tril=chol_cov),
- batch_shape=[2], # Valid because base_distribution.batch_shape == [].
- event_shape=[2]) # Valid because base_distribution.event_shape == [].
-mvn2 = ds.MultivariateNormalTriL(loc=mean, scale_tril=chol_cov)
-# mvn1.log_prob(x) == mvn2.log_prob(x)
-```
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.__init__(distribution, bijector=None, batch_shape=None, event_shape=None, validate_args=False, name=None)` {#TransformedDistribution.__init__}
-
-Construct a Transformed Distribution.
-
-##### Args:
-
-
-* <b>`distribution`</b>: The base distribution instance to transform. Typically an
- instance of `Distribution`.
-* <b>`bijector`</b>: The object responsible for calculating the transformation.
- Typically an instance of `Bijector`. `None` means `Identity()`.
-* <b>`batch_shape`</b>: `integer` vector `Tensor` which overrides `distribution`
- `batch_shape`; valid only if `distribution.is_scalar_batch()`.
-* <b>`event_shape`</b>: `integer` vector `Tensor` which overrides `distribution`
- `event_shape`; valid only if `distribution.is_scalar_event()`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class. Default:
- `bijector.name + distribution.name`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.allow_nan_stats` {#TransformedDistribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.batch_shape` {#TransformedDistribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.batch_shape_tensor(name='batch_shape_tensor')` {#TransformedDistribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.bijector` {#TransformedDistribution.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.cdf(value, name='cdf')` {#TransformedDistribution.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.copy(**override_parameters_kwargs)` {#TransformedDistribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
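-For example, a hypothetical sketch of overriding a single constructor
-argument while keeping the rest (the keys come from the original
-`__init__` call):
-
-```python
-ds = tf.contrib.distributions
-log_normal = ds.TransformedDistribution(
-    distribution=ds.Normal(loc=0., scale=1.),
-    bijector=ds.bijector.Exp())
-# The bijector and any other parameters carry over unchanged.
-wider = log_normal.copy(distribution=ds.Normal(loc=0., scale=2.))
-```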
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.covariance(name='covariance')` {#TransformedDistribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.distribution` {#TransformedDistribution.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.dtype` {#TransformedDistribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.entropy(name='entropy')` {#TransformedDistribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.event_shape` {#TransformedDistribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.event_shape_tensor(name='event_shape_tensor')` {#TransformedDistribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.is_continuous` {#TransformedDistribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.is_scalar_batch(name='is_scalar_batch')` {#TransformedDistribution.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.is_scalar_event(name='is_scalar_event')` {#TransformedDistribution.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.log_cdf(value, name='log_cdf')` {#TransformedDistribution.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.log_prob(value, name='log_prob')` {#TransformedDistribution.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.log_survival_function(value, name='log_survival_function')` {#TransformedDistribution.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.mean(name='mean')` {#TransformedDistribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.mode(name='mode')` {#TransformedDistribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.name` {#TransformedDistribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#TransformedDistribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.param_static_shapes(cls, sample_shape)` {#TransformedDistribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.parameters` {#TransformedDistribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.prob(value, name='prob')` {#TransformedDistribution.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.reparameterization_type` {#TransformedDistribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.sample(sample_shape=(), seed=None, name='sample')` {#TransformedDistribution.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.stddev(name='stddev')` {#TransformedDistribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.survival_function(value, name='survival_function')` {#TransformedDistribution.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.validate_args` {#TransformedDistribution.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.TransformedDistribution.variance(name='variance')` {#TransformedDistribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_graph_from_inputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_graph_from_inputs.md
deleted file mode 100644
index d7f0016029..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_graph_from_inputs.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.contrib.framework.get_graph_from_inputs(op_input_list, graph=None)` {#get_graph_from_inputs}
-
-Returns the appropriate graph to use for the given inputs.
-
-1. If `graph` is provided, we validate that all inputs in `op_input_list` are
- from the same graph.
-2. Otherwise, we attempt to select a graph from the first Operation- or
- Tensor-valued input in `op_input_list`, and validate that all other
- such inputs are in the same graph.
-3. If the graph was not specified and it could not be inferred from
- `op_input_list`, we attempt to use the default graph.
-
-##### Args:
-
-
-* <b>`op_input_list`</b>: A list of inputs to an operation, which may include `Tensor`,
- `Operation`, and other objects that may be converted to a graph element.
-* <b>`graph`</b>: (Optional) The explicit graph to use.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_input_list` is not a list or tuple, or if graph is not a
- Graph.
-* <b>`ValueError`</b>: If a graph is explicitly passed and not all inputs are from it,
- or if the inputs are from multiple graphs, or we could not find a graph
- and there was no default graph.
-
-##### Returns:
-
- The appropriate graph to use for the given inputs.
-
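-A minimal sketch of case 2 above (hypothetical tensors; TF 1.x graph mode):
-
-```python
-import tensorflow as tf
-
-g = tf.Graph()
-with g.as_default():
-    a = tf.constant(1.0)
-    b = tf.constant(2.0)
-
-# No explicit graph passed; both inputs live in `g`, so `g` is returned.
-assert tf.contrib.framework.get_graph_from_inputs([a, b]) is g
-```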
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_local_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_local_variables.md
deleted file mode 100644
index 94c9fc96d9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_local_variables.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.framework.get_local_variables(scope=None, suffix=None)` {#get_local_variables}
-
-Gets the list of local variables, filtered by scope and/or suffix.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the variables to return.
-* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
-
-##### Returns:
-
- a list of variables in collection with scope and suffix.
-
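-A hypothetical usage sketch (assumes `tf.contrib.framework.local_variable`,
-which creates a variable in the local-variables collection):
-
-```python
-import tensorflow as tf
-
-with tf.variable_scope('input_queue'):
-    counter = tf.contrib.framework.local_variable(0, name='counter')
-
-# Expected to return [counter]; an unrelated scope filter would return [].
-tf.contrib.framework.get_local_variables(scope='input_queue')
-```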
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_variables_by_name.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_variables_by_name.md
deleted file mode 100644
index a76f564a1d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.get_variables_by_name.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.framework.get_variables_by_name(given_name, scope=None)` {#get_variables_by_name}
-
-Gets the list of variables that were given that name.
-
-##### Args:
-
-
-* <b>`given_name`</b>: name given to the variable without any scope.
-* <b>`scope`</b>: an optional scope for filtering the variables to return.
-
-##### Returns:
-
- a copied list of variables with the given name and scope.
-
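-For illustration, a small hypothetical sketch:
-
-```python
-import tensorflow as tf
-
-with tf.variable_scope('layer1'):
-    w1 = tf.get_variable('weights', shape=[10])
-with tf.variable_scope('layer2'):
-    w2 = tf.get_variable('weights', shape=[10])
-
-tf.contrib.framework.get_variables_by_name('weights')                 # [w1, w2]
-tf.contrib.framework.get_variables_by_name('weights', scope='layer1') # [w1]
-```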
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.init_from_checkpoint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.init_from_checkpoint.md
deleted file mode 100644
index 9d3ee1d24a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.framework.init_from_checkpoint.md
+++ /dev/null
@@ -1,72 +0,0 @@
-### `tf.contrib.framework.init_from_checkpoint(checkpoint_dir, assignment_map)` {#init_from_checkpoint}
-
-Initializes current variables with loaded tensors, using the assignment map.
-
-Note: This overrides default initialization ops of specified variables and
-redefines dtype.
-
-##### Assignment map supports the following syntax:
-
- `'checkpoint_scope_name/': 'scope_name/'` - will load all variables in
- current `scope_name` from `checkpoint_scope_name` with matching variable
- names.
- `'checkpoint_scope_name/some_other_variable': 'scope_name/variable_name'` -
- will initialize `scope_name/variable_name` variable
- from `checkpoint_scope_name/some_other_variable`.
- `'scope_variable_name': variable` - will initialize given `tf.Variable`
- object with variable from the checkpoint.
- `'scope_variable_name': list(variable)` - will initialize list of
- partitioned variables with variable from the checkpoint.
- `'/': 'scope_name/'` - will load all variables in current `scope_name` from
- checkpoint's root (e.g. no scope).
-
-Supports loading into partitioned variables, which are represented as
-'<variable>/part_<part #>'.
-
-
-Example:
-```python
- # Create variables.
- with tf.variable_scope('test'):
- m = tf.get_variable('my_var')
- with tf.variable_scope('test2'):
- var2 = tf.get_variable('my_var')
- var3 = tf.get_variable(name="my1", shape=[100, 100],
- partitioner=lambda shape, dtype: [5, 1])
- ...
- # Specify which variables to initialize from checkpoint.
- init_from_checkpoint(checkpoint_dir, {
- 'some_var': 'test/my_var',
- 'some_scope/': 'test2/'})
- ...
- # Or use `Variable` objects to identify what to initialize.
- init_from_checkpoint(checkpoint_dir, {
- 'some_scope/var2': var2,
- })
- # Initialize partitioned variables
- init_from_checkpoint(checkpoint_dir, {
- 'some_var_from_ckpt': 'part_var',
- })
- # Or specifying the list of `Variable` objects.
- init_from_checkpoint(checkpoint_dir, {
- 'some_var_from_ckpt': var3._get_variable_list(),
- })
- ...
- # Initialize variables as usual.
- session.run(tf.global_variables_initializer())
-```
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory containing checkpoint files, or path to a
- specific checkpoint.
-* <b>`assignment_map`</b>: Dict, where keys are names of the variables in the
- checkpoint and values are current variables or names of current variables
- (in default graph).
-
-##### Raises:
-
-* <b>`tf.errors.OpError`</b>: If missing checkpoints or tensors in checkpoints.
-* <b>`ValueError`</b>: If missing variables in current graph.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.compute_boundary_ts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.compute_boundary_ts.md
deleted file mode 100644
index 27ad95be99..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.compute_boundary_ts.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.contrib.graph_editor.compute_boundary_ts(ops)` {#compute_boundary_ts}
-
-Compute the tensors at the boundary of a set of ops.
-
-This function looks at all the tensors connected to the given ops (in/out)
-and classifies them into three categories:
-1) input tensors: tensors whose generating operation is not in `ops`.
-2) output tensors: tensors with at least one consumer operation not in `ops`.
-3) inside tensors: tensors which are neither input nor output tensors.
-
-Note that a tensor can be both an inside tensor and an output tensor if it is
-consumed by operations both outside and inside of `ops`.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of tf.Operation.
-
-##### Returns:
-
- A tuple `(outside_input_ts, outside_output_ts, inside_ts)` where:
- `outside_input_ts` is a Python list of input tensors;
- `outside_output_ts` is a python list of output tensors;
- `inside_ts` is a python list of inside tensors.
- Since a tensor can be both an inside tensor and an output tensor,
- `outside_output_ts` and `inside_ts` might intersect.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of tf.Operation.
-
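-A small hypothetical sketch of the three categories:
-
-```python
-import tensorflow as tf
-ge = tf.contrib.graph_editor
-
-a = tf.constant(1., name='a')
-b = tf.add(a, 1., name='b')
-c = tf.square(b, name='c')
-
-# With ops = {b}: `a`'s output feeds into the set (an input tensor), and
-# `b`'s output is consumed by `c`, which is outside (an output tensor).
-inputs, outputs, inside = ge.compute_boundary_ts([b.op])
-```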
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.get_name_scope_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.get_name_scope_ops.md
deleted file mode 100644
index 462ae97e17..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.get_name_scope_ops.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.graph_editor.get_name_scope_ops(ops, scope)` {#get_name_scope_ops}
-
-Get all the operations under the given scope path.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of tf.Operation.
-* <b>`scope`</b>: a scope path.
-
-##### Returns:
-
- A list of tf.Operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of tf.Operation.
-
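-A hypothetical sketch (assumes scope matching is by op-name prefix):
-
-```python
-import tensorflow as tf
-
-g = tf.Graph()
-with g.as_default():
-    with tf.name_scope('block1'):
-        x = tf.constant(1., name='x')
-        y = tf.square(x, name='y')
-
-# Expected to return the two ops under 'block1/'.
-ops = tf.contrib.graph_editor.get_name_scope_ops(g.get_operations(), 'block1')
-```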
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.make_list_of_t.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.make_list_of_t.md
deleted file mode 100644
index c67586bcf8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.make_list_of_t.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.contrib.graph_editor.make_list_of_t(ts, check_graph=True, allow_graph=True, ignore_ops=False)` {#make_list_of_t}
-
-Convert ts to a list of `tf.Tensor`.
-
-##### Args:
-
-
-* <b>`ts`</b>: can be an iterable of `tf.Tensor`, a `tf.Graph` or a single tensor.
-* <b>`check_graph`</b>: if `True` check if all the tensors belong to the same graph.
-* <b>`allow_graph`</b>: if `False` a `tf.Graph` cannot be converted.
-* <b>`ignore_ops`</b>: if `True`, silently ignore `tf.Operation`.
-
-##### Returns:
-
- A newly created list of `tf.Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `ts` cannot be converted to a list of `tf.Tensor` or,
- if `check_graph` is `True`, if the tensors do not all belong to the same graph.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.reroute_ios.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.reroute_ios.md
deleted file mode 100644
index 0979bf0e0f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.reroute_ios.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.graph_editor.reroute_ios(sgv0, sgv1)` {#reroute_ios}
-
-Re-route the inputs and outputs of sgv0 to sgv1 (see _reroute).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.transform_op_if_inside_handler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.transform_op_if_inside_handler.md
deleted file mode 100644
index 176ed58f08..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.graph_editor.transform_op_if_inside_handler.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.graph_editor.transform_op_if_inside_handler(info, op, keep_if_possible=True)` {#transform_op_if_inside_handler}
-
-Transform an optional op only if it is inside the subgraph.
-
-This handler is typically used to handle original ops: it is fine to keep
-them if they are inside the subgraph; otherwise they are just ignored.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`op`</b>: the optional op to transform (or ignore).
-* <b>`keep_if_possible`</b>: re-attach to the original op if possible, that is,
- if the source graph and the destination graph are the same.
-
-##### Returns:
-
- The transformed op or None.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.integrate.odeint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.integrate.odeint.md
deleted file mode 100644
index 25b2709be8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.integrate.odeint.md
+++ /dev/null
@@ -1,90 +0,0 @@
-### `tf.contrib.integrate.odeint(func, y0, t, rtol=1e-06, atol=1e-12, method=None, options=None, full_output=False, name=None)` {#odeint}
-
-Integrate a system of ordinary differential equations.
-
-Solves the initial value problem for a non-stiff system of first-order ODEs:
-
- ```
- dy/dt = func(y, t), y(t[0]) = y0
- ```
-
-where y is a Tensor of any shape.
-
-For example:
-
- ```
- # solve `dy/dt = -y`, corresponding to exponential decay
- tf.contrib.integrate.odeint(lambda y, _: -y, 1.0, [0, 1, 2])
- => [1, exp(-1), exp(-2)]
- ```
-
-Output dtypes and numerical precision are based on the dtypes of the inputs
-`y0` and `t`.
-
-Currently, this implements 5th-order Runge-Kutta with adaptive step size control
-and dense output, using the Dormand-Prince method. Similar to the 'dopri5'
-method of `scipy.integrate.ode` and MATLAB's `ode45`.
-
-Based on: Shampine, Lawrence F. (1986), "Some Practical Runge-Kutta Formulas",
-Mathematics of Computation, American Mathematical Society, 46 (173): 135-150,
-doi:10.2307/2008219
-
-##### Args:
-
-
-* <b>`func`</b>: Function that maps a Tensor holding the state `y` and a scalar Tensor
- `t` into a Tensor of state derivatives with respect to time.
-* <b>`y0`</b>: N-D Tensor giving starting value of `y` at time point `t[0]`. May
- have any floating point or complex dtype.
-* <b>`t`</b>: 1-D Tensor holding a sequence of time points for which to solve for
- `y`. The initial time point should be the first element of this sequence,
- and each time must be larger than the previous time. May have any floating
- point dtype. If not provided as a Tensor, converted to a Tensor with
- float64 dtype.
-* <b>`rtol`</b>: optional float64 Tensor specifying an upper bound on relative error,
- per element of `y`.
-* <b>`atol`</b>: optional float64 Tensor specifying an upper bound on absolute error,
- per element of `y`.
-* <b>`method`</b>: optional string indicating the integration method to use. Currently,
- the only valid option is `'dopri5'`.
-* <b>`options`</b>: optional dict of configuring options for the indicated integration
- method. Can only be provided if a `method` is explicitly set. For
- `'dopri5'`, valid options include:
- * first_step: an initial guess for the size of the first integration step
- (current default: 1.0, but may later be changed to use heuristics based
- on the gradient).
- * safety: safety factor for adaptive step control, generally a constant
- in the range 0.8-1 (default: 0.9).
- * ifactor: maximum factor by which the adaptive step may be increased
- (default: 10.0).
- * dfactor: maximum factor by which the adaptive step may be decreased
- (default: 0.2).
- * max_num_steps: integer maximum number of integration steps between time
- points in `t` (default: 1000).
-* <b>`full_output`</b>: optional boolean. If True, `odeint` returns a tuple
- `(y, info_dict)` describing the integration process.
-* <b>`name`</b>: Optional name for this operation.
-
-##### Returns:
-
-
-* <b>`y`</b>: (N+1)-D tensor, where the first dimension corresponds to different
- time points. Contains the solved value of y for each desired time point in
- `t`, with the initial value `y0` being the first element along the first
- dimension.
-* <b>`info_dict`</b>: only if `full_output == True`. A dict with the following values:
- * num_func_evals: integer Tensor counting the number of function
- evaluations.
- * integrate_points: 1D float64 Tensor with the upper bound of each
- integration time step.
- * error_ratio: 1D float Tensor with the estimated ratio of the integration
- error to the error tolerance at each integration step. A ratio greater
- than 1 corresponds to rejected steps.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an invalid `method` is provided.
-* <b>`TypeError`</b>: if `options` is supplied without `method`, or if `t` or `y0` has
- an invalid dtype.
-
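-A short sketch using `full_output` (hypothetical values; TF 1.x graph mode):
-
-```python
-import numpy as np
-import tensorflow as tf
-
-t = np.linspace(0., 2., num=5)
-# Exponential decay, dy/dt = -y, with diagnostics requested.
-y, info = tf.contrib.integrate.odeint(
-    lambda y, _: -y, 1.0, t, full_output=True)
-
-with tf.Session() as sess:
-    y_val, info_val = sess.run([y, info])
-# y_val ~ exp(-t); info_val['num_func_evals'] counts RHS evaluations.
-```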
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.embedding_column.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.embedding_column.md
deleted file mode 100644
index 30c543c631..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.embedding_column.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.layers.embedding_column(sparse_id_column, dimension, combiner='mean', initializer=None, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None)` {#embedding_column}
-
-Creates an `_EmbeddingColumn` for feeding sparse data into a DNN.
-
-##### Args:
-
-
-* <b>`sparse_id_column`</b>: A `_SparseColumn` which is created by, for example,
- `sparse_column_with_*` or `crossed_column` functions. Note that `combiner`
- defined in `sparse_id_column` is ignored.
-* <b>`dimension`</b>: An integer specifying dimension of the embedding.
-* <b>`combiner`</b>: A string specifying how to reduce if there are multiple entries
- in a single row. Currently "mean", "sqrtn" and "sum" are supported, with
- "mean" the default. "sqrtn" often achieves good accuracy, in particular
- with bag-of-words columns. Each of these can be thought of as an
- example-level normalization of the column:
- * "sum": do not normalize
- * "mean": do l1 normalization
- * "sqrtn": do l2 normalization
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`initializer`</b>: A variable initializer function to be used in embedding
- variable initialization. If not specified, defaults to
- `tf.truncated_normal_initializer` with mean 0.0 and standard deviation
- 1/sqrt(sparse_id_column.length).
-* <b>`ckpt_to_load_from`</b>: (Optional). String representing checkpoint name/pattern
- to restore the column weights. Required if `tensor_name_in_ckpt` is not
- None.
-* <b>`tensor_name_in_ckpt`</b>: (Optional). Name of the `Tensor` in the provided
- checkpoint from which to restore the column weights. Required if
- `ckpt_to_load_from` is not None.
-* <b>`max_norm`</b>: (Optional). If not None, embedding values are l2-normalized to
- the value of max_norm.
-
-##### Returns:
-
- An `_EmbeddingColumn`.
-
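-A brief hypothetical sketch feeding a hashed sparse column into a DNN-based
-estimator (the column name and sizes below are made up):
-
-```python
-import tensorflow as tf
-layers = tf.contrib.layers
-
-words = layers.sparse_column_with_hash_bucket('words', hash_bucket_size=10000)
-word_emb = layers.embedding_column(words, dimension=16, combiner='sqrtn')
-
-# The embedding column is then usable wherever feature columns are accepted.
-estimator = tf.contrib.learn.DNNClassifier(
-    feature_columns=[word_emb], hidden_units=[64, 32])
-```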
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.flatten.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.flatten.md
deleted file mode 100644
index e7de4571b0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.flatten.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.contrib.layers.flatten(*args, **kwargs)` {#flatten}
-
-Flattens the input while maintaining the batch_size.
-
-Assumes that the first dimension represents the batch.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor of size [batch_size, ...].
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`scope`</b>: Optional scope for name_scope.
-
-##### Returns:
-
- A flattened tensor with shape [batch_size, k].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If inputs rank is unknown or less than 2.
-
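-A minimal shape illustration (hypothetical placeholder):
-
-```python
-import tensorflow as tf
-
-images = tf.placeholder(tf.float32, [None, 28, 28, 3])
-flat = tf.contrib.layers.flatten(images)
-# flat has static shape (?, 2352), i.e. [batch_size, 28 * 28 * 3].
-```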
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.make_place_holder_tensors_for_base_features.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.make_place_holder_tensors_for_base_features.md
deleted file mode 100644
index bc6cc5ccc3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.make_place_holder_tensors_for_base_features.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.contrib.layers.make_place_holder_tensors_for_base_features(feature_columns)` {#make_place_holder_tensors_for_base_features}
-
-Returns placeholder tensors for inference.
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable containing all the feature columns. All items
- should be instances of classes derived from _FeatureColumn.
-
-##### Returns:
-
- A dict mapping feature keys to SparseTensors (sparse columns) or
- placeholder Tensors (dense columns).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_tensors.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_tensors.md
deleted file mode 100644
index 608999b437..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.layers.summarize_tensors.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.layers.summarize_tensors(tensors, summarizer=summarize_tensor)` {#summarize_tensors}
-
-Summarize a set of tensors.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.LinearClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.LinearClassifier.md
deleted file mode 100644
index 5b70ba0493..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.LinearClassifier.md
+++ /dev/null
@@ -1,467 +0,0 @@
-Linear classifier model.
-
-Train a linear model to classify instances into one of multiple possible
-classes. When the number of possible classes is 2, this is binary classification.
-
-Example:
-
-```python
-sparse_column_a = sparse_column_with_hash_bucket(...)
-sparse_column_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_x_sparse_feature_b = crossed_column(...)
-
-# Estimator using the default optimizer.
-estimator = LinearClassifier(
- feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b])
-
-# Or estimator using the FTRL optimizer with regularization.
-estimator = LinearClassifier(
- feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b],
- optimizer=tf.train.FtrlOptimizer(
- learning_rate=0.1,
- l1_regularization_strength=0.001
- ))
-
-# Or estimator using the SDCAOptimizer.
-estimator = LinearClassifier(
- feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b],
- optimizer=tf.contrib.linear_optimizer.SDCAOptimizer(
- example_id_column='example_id',
- num_loss_partitions=...,
- symmetric_l2_regularization=2.0
- ))
-
-# Input builders
-def input_fn_train():  # returns x, y (where y represents label's class index).
-  ...
-def input_fn_eval():  # returns x, y (where y represents label's class index).
-  ...
-estimator.fit(input_fn=input_fn_train)
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x) # returns predicted labels (i.e. label's class index).
-```
-
-Input of `fit` and `evaluate` should have following features,
- otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`, a feature with
- `key=weight_column_name` whose value is a `Tensor`.
-* for each `column` in `feature_columns`:
- - if `column` is a `SparseColumn`, a feature with `key=column.name`
- whose `value` is a `SparseTensor`.
- - if `column` is a `WeightedSparseColumn`, two features: the first with
- `key` the id column name, the second with `key` the weight column name.
- Both features' `value` must be a `SparseTensor`.
- - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
- whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.__init__(feature_columns, model_dir=None, n_classes=2, weight_column_name=None, optimizer=None, gradient_clip_norm=None, enable_centered_bias=False, _joint_weight=False, config=None, feature_engineering_fn=None)` {#LinearClassifier.__init__}
-
-Construct a `LinearClassifier` estimator object.
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable containing all the feature columns used by
- the model. All items in the set should be instances of classes derived
- from `FeatureColumn`.
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
- also be used to load checkpoints from the directory into an estimator
- to continue training a previously saved model.
-* <b>`n_classes`</b>: number of label classes. Default is binary classification.
- Note that class labels are integers representing the class index (i.e.
- values from 0 to n_classes-1). For arbitrary label values (e.g. string
- labels), convert to class indices first.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`optimizer`</b>: The optimizer used to train the model. If specified, it should
- be either an instance of `tf.Optimizer` or the SDCAOptimizer. If `None`,
- the Ftrl optimizer will be used.
-* <b>`gradient_clip_norm`</b>: A `float` > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- `tf.clip_by_global_norm` for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
- _joint_weight: If True, the weights for all columns will be stored in a
- single (possibly partitioned) variable. It's more efficient, but it's
- incompatible with SDCAOptimizer, and requires that all feature columns be
- sparse and use the 'sum' combiner.
-
-* <b>`config`</b>: `RunConfig` object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-
-##### Returns:
-
- A `LinearClassifier` estimator.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if n_classes < 2.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.__repr__()` {#LinearClassifier.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.bias_` {#LinearClassifier.bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.config` {#LinearClassifier.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.evaluate(*args, **kwargs)` {#LinearClassifier.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
- `input_fn` or `feed_fn` is provided.
- Or if `metrics` is not `None` or `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#LinearClassifier.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#LinearClassifier.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.fit(*args, **kwargs)` {#LinearClassifier.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.get_params(deep=True)` {#LinearClassifier.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.get_variable_names()` {#LinearClassifier.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.get_variable_value(name)` {#LinearClassifier.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.model_dir` {#LinearClassifier.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.partial_fit(*args, **kwargs)` {#LinearClassifier.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement either
-iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to fit in
-memory at the same time, or when the model is taking a long time to
-converge and you want to split up training into subparts.
-
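-A sketch of chunked training (assuming `classifier` is a `LinearClassifier`
-built with a `real_valued_column('')` feature column, and `data_chunks` is
-an iterable of `(features, labels)` numpy array pairs):
-
-```python
-import tensorflow as tf
-
-for chunk_x, chunk_y in data_chunks:
-  # Default arguments bind the current chunk to this input_fn.
-  def input_fn(x=chunk_x, y=chunk_y):
-    return {'': tf.constant(x)}, tf.constant(y)
-  classifier.partial_fit(input_fn=input_fn, steps=10)
-```
-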
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
- iterator that returns arrays of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.predict(*args, **kwargs)` {#LinearClassifier.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_classes, or set `outputs` argument.
-
-By default, returns predicted classes, but this default will be dropped
-soon. Users should either pass `outputs` or call the `predict_classes` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns classes.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
- If `outputs` is set, returns a dict of predictions.
-
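-For example, streaming predictions one example at a time (assuming
-`classifier` was built with a `real_valued_column('')` feature column; the
-`input_fn` below returns a single finite batch, so the iterable terminates):
-
-```python
-import tensorflow as tf
-
-def predict_input_fn():
-  # Two examples with a two-dimensional feature keyed by ''.
-  return {'': tf.constant([[1.0, 2.0], [3.0, 4.0]])}
-
-for class_idx in classifier.predict(input_fn=predict_input_fn,
-                                    as_iterable=True):
-  print(class_idx)
-```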
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.predict_classes(*args, **kwargs)` {#LinearClassifier.predict_classes}
-
-Returns predicted classes for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.predict_proba(*args, **kwargs)` {#LinearClassifier.predict_proba}
-
-Returns predicted probabilities for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x and y must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted probabilities with shape [batch_size, n_classes]
- (or an iterable of predicted probabilities if as_iterable is True).
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.set_params(**params)` {#LinearClassifier.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The latter have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-- - -
-
-#### `tf.contrib.learn.LinearClassifier.weights_` {#LinearClassifier.weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.PredictionKey.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.PredictionKey.md
deleted file mode 100644
index 8b13789179..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.PredictionKey.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_data.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_data.md
deleted file mode 100644
index 6b1956884d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.extract_pandas_data.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.contrib.learn.extract_pandas_data(data)` {#extract_pandas_data}
-
-Extract data from pandas.DataFrame for predictors.
-
-Given a DataFrame, will extract the values and cast them to float. The
-DataFrame is expected to contain values of type int, float or bool.
-
-##### Args:
-
-
-* <b>`data`</b>: `pandas.DataFrame` containing the data to be extracted.
-
-##### Returns:
-
- A numpy `ndarray` of the DataFrame's values as floats.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if data contains types other than int, float or bool.
-
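-A usage sketch with a small frame of mixed int and bool columns:
-
-```python
-import pandas as pd
-import tensorflow as tf
-
-df = pd.DataFrame({'age': [21, 35], 'clicked': [True, False]})
-data = tf.contrib.learn.extract_pandas_data(df)
-print(data)  # numpy ndarray with all values cast to float
-```
-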
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.infer_real_valued_columns_from_input_fn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.infer_real_valued_columns_from_input_fn.md
deleted file mode 100644
index e1ac197953..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.infer_real_valued_columns_from_input_fn.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.learn.infer_real_valued_columns_from_input_fn(input_fn)` {#infer_real_valued_columns_from_input_fn}
-
-Creates `FeatureColumn` objects for inputs defined by `input_fn`.
-
-This interprets all inputs as dense, fixed-length float values. It builds a
-local graph, calls `input_fn` within it to construct the tensors, and then
-discards the graph.
-
-##### Args:
-
-
-* <b>`input_fn`</b>: Input function returning a tuple of:
- features - Dictionary mapping string feature names to `Tensor`s, or a
- single `Tensor`.
- labels - `Tensor` of label values.
-
-##### Returns:
-
- List of `FeatureColumn` objects.
-
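-A usage sketch (the feature names and values are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-def input_fn():
-  features = {'a': tf.constant([[1.0], [2.0]]),
-              'b': tf.constant([[3.0], [4.0]])}
-  labels = tf.constant([0, 1])
-  return features, labels
-
-feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input_fn(
-    input_fn)
-```
-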
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.monitors.SummaryWriterCache.get.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.monitors.SummaryWriterCache.get.md
deleted file mode 100644
index 35b49b99cf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.monitors.SummaryWriterCache.get.md
+++ /dev/null
@@ -1,13 +0,0 @@
-#### `tf.contrib.learn.monitors.SummaryWriterCache.get(logdir)` {#SummaryWriterCache.get}
-
-Returns the FileWriter for the specified directory.
-
-##### Args:
-
-
-* <b>`logdir`</b>: str, name of the directory.
-
-##### Returns:
-
- A `FileWriter`.
-
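-A usage sketch (the log directory is an illustrative assumption):
-
-```python
-import tensorflow as tf
-
-# Writers are cached per directory: repeated calls with the same logdir
-# return the same FileWriter instance.
-writer = tf.contrib.learn.monitors.SummaryWriterCache.get('/tmp/logs')
-```
-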
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.monitors.ValidationMonitor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.monitors.ValidationMonitor.md
deleted file mode 100644
index b24a86f1e1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.monitors.ValidationMonitor.md
+++ /dev/null
@@ -1,242 +0,0 @@
-Runs evaluation of a given estimator, at most every N steps.
-
-Note that the evaluation is done based on the saved checkpoint, which will
-usually be older than the current step.
-
-Can do early stopping on validation metrics if `early_stopping_rounds` is
-provided.
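-
-A construction sketch with early stopping (the estimator, input functions,
-and thresholds are illustrative assumptions):
-
-```python
-import tensorflow as tf
-
-validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(
-    input_fn=eval_input_fn,       # assumed evaluation input_fn
-    every_n_steps=100,            # check for new checkpoints every 100 steps
-    early_stopping_rounds=500,    # stop if no improvement for 500 steps
-    early_stopping_metric='loss',
-    early_stopping_metric_minimize=True)
-
-estimator.fit(input_fn=train_input_fn, max_steps=10000,
-              monitors=[validation_monitor])
-```
-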
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.__init__(x=None, y=None, input_fn=None, batch_size=None, eval_steps=None, every_n_steps=100, metrics=None, hooks=None, early_stopping_rounds=None, early_stopping_metric='loss', early_stopping_metric_minimize=True, name=None)` {#ValidationMonitor.__init__}
-
-Initializes a ValidationMonitor.
-
-##### Args:
-
-
-* <b>`x`</b>: See `BaseEstimator.evaluate`.
-* <b>`y`</b>: See `BaseEstimator.evaluate`.
-* <b>`input_fn`</b>: See `BaseEstimator.evaluate`.
-* <b>`batch_size`</b>: See `BaseEstimator.evaluate`.
-* <b>`eval_steps`</b>: See `BaseEstimator.evaluate`.
-* <b>`every_n_steps`</b>: Check for new checkpoints to evaluate every N steps. If a
- new checkpoint is found, it is evaluated. See `EveryN`.
-* <b>`metrics`</b>: See `BaseEstimator.evaluate`.
-* <b>`hooks`</b>: A list of `SessionRunHook` hooks to pass to the
- `Estimator`'s `evaluate` function.
-* <b>`early_stopping_rounds`</b>: `int`. If the metric indicated by
- `early_stopping_metric` does not change according to
- `early_stopping_metric_minimize` for this many steps, then training
- will be stopped.
-* <b>`early_stopping_metric`</b>: `string`, name of the metric to check for early
- stopping.
-* <b>`early_stopping_metric_minimize`</b>: `bool`, True if `early_stopping_metric` is
- expected to decrease (thus early stopping occurs when this metric
- stops decreasing), False if `early_stopping_metric` is expected to
- increase. Typically, `early_stopping_metric_minimize` is True for
- loss metrics like mean squared error, and False for performance
- metrics like accuracy.
-* <b>`name`</b>: See `BaseEstimator.evaluate`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both x and input_fn are provided.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.begin(max_steps=None)` {#ValidationMonitor.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.best_step` {#ValidationMonitor.best_step}
-
-Returns the step at which the best early stopping metric was found.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.best_value` {#ValidationMonitor.best_value}
-
-Returns the best early stopping metric value found so far.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.early_stopped` {#ValidationMonitor.early_stopped}
-
-Returns True if this monitor caused an early stop.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.end(session=None)` {#ValidationMonitor.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.epoch_begin(epoch)` {#ValidationMonitor.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.epoch_end(epoch)` {#ValidationMonitor.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.every_n_post_step(step, session)` {#ValidationMonitor.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.every_n_step_begin(step)` {#ValidationMonitor.every_n_step_begin}
-
-Callback before every n'th step begins.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list` of tensors that will be evaluated at this step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.every_n_step_end(step, outputs)` {#ValidationMonitor.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.post_step(step, session)` {#ValidationMonitor.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.run_on_all_workers` {#ValidationMonitor.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.set_estimator(estimator)` {#ValidationMonitor.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.step_begin(step)` {#ValidationMonitor.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ValidationMonitor.step_end(step, output)` {#ValidationMonitor.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
- the value resulted from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.train.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.train.md
deleted file mode 100644
index 4158479faf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.learn.train.md
+++ /dev/null
@@ -1,75 +0,0 @@
-### `tf.contrib.learn.train(*args, **kwargs)` {#train}
-
-Train a model. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-Given `graph`, a directory to write outputs to (`output_dir`), and some ops,
-run a training loop. The given `train_op` performs one step of training on the
-model. The `loss_op` represents the objective function of the training. The
-`train_op` is expected to increment the `global_step_tensor`, a scalar integer
-tensor counting training steps. This function uses `Supervisor` to initialize the
-graph (from a checkpoint if one is available in `output_dir`), write summaries
-defined in the graph, and write regular checkpoints as defined by
-`supervisor_save_model_secs`.
-
-Training continues until `global_step_tensor` evaluates to `max_steps`, or, if
-`fail_on_nan_loss`, until `loss_op` evaluates to `NaN`. In the latter case the
-program is terminated with exit code 1.
-
-##### Args:
-
-
-* <b>`graph`</b>: A graph to train. It is expected that this graph is not in use
- elsewhere.
-* <b>`output_dir`</b>: A directory to write outputs to.
-* <b>`train_op`</b>: An op that performs one training step when run.
-* <b>`loss_op`</b>: A scalar loss tensor.
-* <b>`global_step_tensor`</b>: A tensor representing the global step. If none is given,
- one is extracted from the graph using the same logic as in `Supervisor`.
-* <b>`init_op`</b>: An op that initializes the graph. If `None`, use `Supervisor`'s
- default.
-* <b>`init_feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- This feed dictionary will be used when `init_op` is evaluated.
-* <b>`init_fn`</b>: Optional callable passed to Supervisor to initialize the model.
-* <b>`log_every_steps`</b>: Output logs regularly. The logs contain timing data and the
- current loss.
-* <b>`supervisor_is_chief`</b>: Whether the current process is the chief supervisor in
- charge of restoring the model and running standard services.
-* <b>`supervisor_master`</b>: The master string to use when preparing the session.
-* <b>`supervisor_save_model_secs`</b>: Save a checkpoint every
- `supervisor_save_model_secs` seconds when training.
-* <b>`keep_checkpoint_max`</b>: The maximum number of recent checkpoint files to
- keep. As new files are created, older files are deleted. If None or 0,
- all checkpoint files are kept. This is simply passed as the max_to_keep
- arg to tf.Saver constructor.
-* <b>`supervisor_save_summaries_steps`</b>: Save summaries every
- `supervisor_save_summaries_steps` steps when training.
-* <b>`feed_fn`</b>: A function that is called every iteration to produce a `feed_dict`
- passed to `session.run` calls. Optional.
-* <b>`steps`</b>: Trains for this many steps (e.g. current global step + `steps`).
-* <b>`fail_on_nan_loss`</b>: If true, raise `NanLossDuringTrainingError` if `loss_op`
- evaluates to `NaN`. If false, continue training as if nothing happened.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-* <b>`max_steps`</b>: Number of total steps for which to train model. If `None`,
- train forever. Two calls of fit(steps=100) mean 200 training iterations,
- whereas with two calls of fit(max_steps=100), the second call performs no
- iterations because the first call already did all 100 steps.
-
-##### Returns:
-
- The final loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `output_dir`, `train_op`, `loss_op`, or `global_step_tensor`
- is not provided. See `tf.contrib.framework.get_global_step` for how we
- look up the latter if not provided explicitly.
-* <b>`NanLossDuringTrainingError`</b>: If `fail_on_nan_loss` is `True`, and loss ever
- evaluates to `NaN`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
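-A minimal toy usage sketch (new code should prefer the `tf.train.*`
-utilities, as the deprecation notice above says):
-
-```python
-import tensorflow as tf
-
-graph = tf.Graph()
-with graph.as_default():
-  # Toy objective: minimize (w - 3)^2.
-  w = tf.Variable(0.0)
-  loss = tf.square(w - 3.0)
-  global_step = tf.contrib.framework.get_or_create_global_step()
-  train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
-      loss, global_step=global_step)
-
-final_loss = tf.contrib.learn.train(
-    graph=graph, output_dir='/tmp/toy_model', train_op=train_op,
-    loss_op=loss, max_steps=100)
-```
-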
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.linalg.LinearOperatorComposition.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.linalg.LinearOperatorComposition.md
deleted file mode 100644
index a6654ab014..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.linalg.LinearOperatorComposition.md
+++ /dev/null
@@ -1,536 +0,0 @@
-Composes one or more `LinearOperators`.
-
-This operator composes one or more linear operators `[op1,...,opJ]`,
-building a new `LinearOperator` with action defined by:
-
-```
-op_composed(x) := op1(op2(...(opJ(x))...))
-```
-
-If `opj` acts like [batch] matrix `Aj`, then `op_composed` acts like the
-[batch] matrix formed with the multiplication `A1 A2...AJ`.
-
-If `opj` has shape `batch_shape_j + [M_j, N_j]`, then we must have
-`N_j = M_{j+1}`, in which case the composed operator has shape equal to
-`broadcast_batch_shape + [M_1, N_J]`, where `broadcast_batch_shape` is the
-mutual broadcast of `batch_shape_j`, `j = 1,...,J`, assuming the intermediate
-batch shapes broadcast. Even if the composed shape is well defined, the
-composed operator's methods may fail due to lack of broadcasting ability in
-the defining operators' methods.
-
-```python
-# Create a 2 x 2 linear operator composed of two 2 x 2 operators.
-operator_1 = LinearOperatorMatrix([[1., 2.], [3., 4.]])
-operator_2 = LinearOperatorMatrix([[1., 0.], [0., 1.]])
-operator = LinearOperatorComposition([operator_1, operator_2])
-
-operator.to_dense()
-==> [[1., 2.]
- [3., 4.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_determinant()
-==> scalar Tensor
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor
-
-# Create a [2, 3] batch of 4 x 5 linear operators.
-matrix_45 = tf.random_normal(shape=[2, 3, 4, 5])
-operator_45 = LinearOperatorMatrix(matrix_45)
-
-# Create a [2, 3] batch of 5 x 6 linear operators.
-matrix_56 = tf.random_normal(shape=[2, 3, 5, 6])
-operator_56 = LinearOperatorMatrix(matrix_56)
-
-# Compose to create a [2, 3] batch of 4 x 6 operators.
-operator_46 = LinearOperatorComposition([operator_45, operator_56])
-
-# Create a shape [2, 3, 6, 2] Tensor.
-x = tf.random_normal(shape=[2, 3, 6, 2])
-operator_46.apply(x)
-==> Shape [2, 3, 4, 2] Tensor
-```
-
-#### Performance
-
-The cost of any operation on a `LinearOperatorComposition` is the sum of
-the costs of that operation on the individual operators.
-
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.__init__(operators, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, name=None)` {#LinearOperatorComposition.__init__}
-
-Initialize a `LinearOperatorComposition`.
-
-`LinearOperatorComposition` is initialized with a list of operators
-`[op_1,...,op_J]`. For the `apply` method to be well defined, the
-composition `op_i.apply(op_{i+1}(x))` must be defined. Other methods have
-similar constraints.
-
-##### Args:
-
-
-* <b>`operators`</b>: Iterable of `LinearOperator` objects, each with
- the same `dtype` and composible shape.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
- meaning the real part of all eigenvalues is positive. We do not require
- the operator to be self-adjoint to be positive-definite. See:
- https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`name`</b>: A name for this `LinearOperator`. Default is the individual
- operators names joined with `_o_`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If all operators do not have the same `dtype`.
-* <b>`ValueError`</b>: If `operators` is empty.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorComposition.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.apply(x, adjoint=False, name='apply')` {#LinearOperatorComposition.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.assert_non_singular(name='assert_non_singular')` {#LinearOperatorComposition.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorComposition.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorComposition.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.batch_shape` {#LinearOperatorComposition.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorComposition.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.determinant(name='det')` {#LinearOperatorComposition.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.diag_part(name='diag_part')` {#LinearOperatorComposition.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.domain_dimension` {#LinearOperatorComposition.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorComposition.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.dtype` {#LinearOperatorComposition.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.graph_parents` {#LinearOperatorComposition.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.is_non_singular` {#LinearOperatorComposition.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.is_positive_definite` {#LinearOperatorComposition.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.is_self_adjoint` {#LinearOperatorComposition.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.is_square` {#LinearOperatorComposition.is_square}
-
-Returns `True`/`False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.log_abs_determinant(name='log_abs_det')` {#LinearOperatorComposition.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.name` {#LinearOperatorComposition.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.operators` {#LinearOperatorComposition.operators}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.range_dimension` {#LinearOperatorComposition.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorComposition.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.shape` {#LinearOperatorComposition.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.shape_tensor(name='shape_tensor')` {#LinearOperatorComposition.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorComposition.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.tensor_rank` {#LinearOperatorComposition.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorComposition.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorComposition.to_dense(name='to_dense')` {#LinearOperatorComposition.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.linalg.LinearOperatorIdentity.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.linalg.LinearOperatorIdentity.md
deleted file mode 100644
index 80f6b13b73..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.linalg.LinearOperatorIdentity.md
+++ /dev/null
@@ -1,562 +0,0 @@
-`LinearOperator` acting like a [batch] square identity matrix.
-
-This operator acts like a [batch] identity matrix `A` with shape
-`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-an `N x N` matrix. This matrix `A` is not materialized, but for
-purposes of broadcasting this shape will be relevant.
-
-`LinearOperatorIdentity` is initialized with `num_rows`, and optionally
-`batch_shape` and `dtype` arguments. If `batch_shape` is `None`, this
-operator efficiently passes through all arguments. If `batch_shape` is
-provided, broadcasting may occur, which will require making copies.
-
-```python
-# Create a 2 x 2 identity matrix.
-operator = LinearOperatorIdentity(num_rows=2, dtype=tf.float32)
-
-operator.to_dense()
-==> [[1., 0.]
- [0., 1.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_determinant()
-==> 0.
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor, same as x.
-
-y = tf.random_normal(shape=[3, 2, 4])
-# Note that y.shape is compatible with operator.shape because operator.shape
-# is broadcast to [3, 2, 2].
-# This broadcast does NOT require copying data, since we can infer that y
-# will be passed through without changing shape. We are always able to infer
-# this if the operator has no batch_shape.
-x = operator.solve(y)
-==> Shape [3, 2, 4] Tensor, same as y.
-
-# Create a 2-batch of 2x2 identity matrices
-operator = LinearOperatorIdentity(num_rows=2, batch_shape=[2])
-operator.to_dense()
-==> [[[1., 0.]
- [0., 1.]],
- [[1., 0.]
- [0., 1.]]]
-
-# Here, even though the operator has a batch shape, the input is the same as
-# the output, so x can be passed through without a copy. The operator is able
-# to detect that no broadcast is necessary because both x and the operator
-# have statically defined shape.
-x = ... Shape [2, 2, 3]
-operator.apply(x)
-==> Shape [2, 2, 3] Tensor, same as x
-
-# Here the operator and x have different batch_shape, and are broadcast.
-# This requires a copy, since the output is a different size than the input.
-x = ... Shape [1, 2, 3]
-operator.apply(x)
-==> Shape [2, 2, 3] Tensor, equal to [x, x]
-```
-
-### Shape compatibility
-
-This operator acts on a [batch] matrix with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [N, N], with b >= 0
-x.shape = [C1,...,Cc] + [N, R],
-and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
-```
-
-### Performance
-
-If `batch_shape` initialization arg is `None`:
-
-* `operator.apply(x)` is `O(1)`
-* `operator.solve(x)` is `O(1)`
-* `operator.determinant()` is `O(1)`
-
-If `batch_shape` initialization arg is provided, and static checks cannot
-rule out the need to broadcast:
-
-* `operator.apply(x)` is `O(D1*...*Dd*N*R)`
-* `operator.solve(x)` is `O(D1*...*Dd*N*R)`
-* `operator.determinant()` is `O(B1*...*Bb)`
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.__init__(num_rows, batch_shape=None, dtype=None, is_non_singular=True, is_self_adjoint=True, is_positive_definite=True, assert_proper_shapes=False, name='LinearOperatorIdentity')` {#LinearOperatorIdentity.__init__}
-
-Initialize a `LinearOperatorIdentity`.
-
-The `LinearOperatorIdentity` is initialized with arguments defining `dtype`
-and shape.
-
-This operator is able to broadcast the leading (batch) dimensions, which
-sometimes requires copying data. If `batch_shape` is `None`, the operator
-can take arguments of any batch shape without copying. See examples.
-
-##### Args:
-
-
-* <b>`num_rows`</b>: Scalar non-negative integer `Tensor`. Number of rows in the
- corresponding identity matrix.
-* <b>`batch_shape`</b>: Optional `1-D` integer `Tensor`. The shape of the leading
- dimensions. If `None`, this operator has no leading dimensions.
-* <b>`dtype`</b>: Data type of the matrix that this operator represents.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite.
-* <b>`assert_proper_shapes`</b>: Python `bool`. If `False`, only perform static
- checks that initialization and method arguments have proper shape.
- If `True`, and static checks are inconclusive, add asserts to the graph.
-* <b>`name`</b>: A name for this `LinearOperator`
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `num_rows` is determined statically to be non-scalar, or
- negative.
-* <b>`ValueError`</b>: If `batch_shape` is determined statically to not be 1-D, or
- negative.
-* <b>`ValueError`</b>: If any of the following is not `True`:
- `{is_self_adjoint, is_non_singular, is_positive_definite}`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.add_to_tensor(mat, name='add_to_tensor')` {#LinearOperatorIdentity.add_to_tensor}
-
-Add matrix represented by this operator to `mat`. Equiv to `I + mat`.
-
-##### Args:
-
-
-* <b>`mat`</b>: `Tensor` with same `dtype` and shape broadcastable to `self`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.apply(x, adjoint=False, name='apply')` {#LinearOperatorIdentity.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.assert_non_singular(name='assert_non_singular')` {#LinearOperatorIdentity.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorIdentity.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorIdentity.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.batch_shape` {#LinearOperatorIdentity.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorIdentity.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.determinant(name='det')` {#LinearOperatorIdentity.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.diag_part(name='diag_part')` {#LinearOperatorIdentity.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.domain_dimension` {#LinearOperatorIdentity.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorIdentity.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.dtype` {#LinearOperatorIdentity.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.graph_parents` {#LinearOperatorIdentity.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.is_non_singular` {#LinearOperatorIdentity.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.is_positive_definite` {#LinearOperatorIdentity.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.is_self_adjoint` {#LinearOperatorIdentity.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.is_square` {#LinearOperatorIdentity.is_square}
-
-Returns `True`/`False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.log_abs_determinant(name='log_abs_det')` {#LinearOperatorIdentity.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.name` {#LinearOperatorIdentity.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.range_dimension` {#LinearOperatorIdentity.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorIdentity.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.shape` {#LinearOperatorIdentity.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.shape_tensor(name='shape_tensor')` {#LinearOperatorIdentity.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorIdentity.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.tensor_rank` {#LinearOperatorIdentity.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorIdentity.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorIdentity.to_dense(name='to_dense')` {#LinearOperatorIdentity.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.absolute_difference.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.absolute_difference.md
deleted file mode 100644
index 1f900a6ffc..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.absolute_difference.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.contrib.losses.absolute_difference(*args, **kwargs)` {#absolute_difference}
-
-Adds an Absolute Difference loss to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.absolute_difference instead.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided, then
-the loss is simply scaled by the given value. If `weights` is a tensor of size
-[batch_size], then the total loss for each sample of the batch is rescaled
-by the corresponding element in the `weights` vector. If the shape of
-`weights` matches the shape of `predictions`, then the loss of each
-measurable element of `predictions` is scaled by the corresponding value of
-`weights`.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted outputs.
-* <b>`labels`</b>: The ground truth output tensor, same dimensions as 'predictions'.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
- [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `labels` or
- if the shape of `weights` is invalid.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.compute_weighted_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.compute_weighted_loss.md
deleted file mode 100644
index 6f7d92f7bb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.compute_weighted_loss.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.contrib.losses.compute_weighted_loss(*args, **kwargs)` {#compute_weighted_loss}
-
-Computes the weighted loss. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.compute_weighted_loss instead.
-
-##### Args:
-
-
-* <b>`losses`</b>: A tensor of size [batch_size, d1, ... dN].
-* <b>`weights`</b>: A tensor of size [1] or [batch_size, d1, ... dK] where K < N.
-* <b>`scope`</b>: the scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` that returns the weighted loss.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is `None` or the shape is not compatible with
- `losses`, or if the number of dimensions (rank) of either `losses` or
- `weights` is missing.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.hinge_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.hinge_loss.md
deleted file mode 100644
index ca98c0cc9f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.losses.hinge_loss.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.contrib.losses.hinge_loss(*args, **kwargs)` {#hinge_loss}
-
-Method that returns the loss tensor for hinge loss. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.hinge_loss instead. Note that the order of the predictions and labels arguments was changed.
-
-##### Args:
-
-
-* <b>`logits`</b>: The logits, a float tensor.
-* <b>`labels`</b>: The ground truth output tensor. Its shape should match the shape of
- logits. The values of the tensor are expected to be 0.0 or 1.0.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A `Tensor` of same shape as `logits` and `labels` representing the loss
- values across the batch.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shapes of `logits` and `labels` don't match.
-
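-An illustrative migration, showing the swapped argument order:
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([[0.8], [-0.5]])
-labels = tf.constant([[1.0], [0.0]])
-
-# Deprecated: tf.contrib.losses.hinge_loss(logits, labels)
-# Replacement; note that labels now comes first:
-loss = tf.losses.hinge_loss(labels=labels, logits=logits)
-```
-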
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_squared_error.md
deleted file mode 100644
index 285b2528e0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_mean_squared_error.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.contrib.metrics.streaming_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_squared_error}
-
-Computes the mean squared error between the labels and predictions.
-
-The `streaming_mean_squared_error` function creates two local variables,
-`total` and `count` that are used to compute the mean squared error.
-This average is weighted by `weights`, and it is ultimately returned as
-`mean_squared_error`: an idempotent operation that simply divides `total` by
-`count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`mean_squared_error`. Internally, a `squared_error` operation computes the
-element-wise square of the difference between `predictions` and `labels`. Then
-`update_op` increments `total` with the reduced sum of the product of
-`weights` and `squared_error`, and it increments `count` with the reduced sum
-of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that
- `mean_squared_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_squared_error`</b>: A `Tensor` representing the current mean, the value of
- `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `mean_squared_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
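-For example, a minimal usage sketch (the tensors here are illustrative):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([1.0, 2.0, 4.0])
-labels = tf.constant([1.0, 2.0, 3.0])
-mse, update_op = tf.contrib.metrics.streaming_mean_squared_error(
-    predictions, labels)
-
-with tf.Session() as sess:
-  # `total` and `count` are local variables and must be initialized.
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)   # accumulates (4 - 3)^2 = 1 over 3 examples
-  print(sess.run(mse))  # ~0.333
-```
-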
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_sparse_recall_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_sparse_recall_at_k.md
deleted file mode 100644
index 19259dd2f3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.metrics.streaming_sparse_recall_at_k.md
+++ /dev/null
@@ -1,74 +0,0 @@
-### `tf.contrib.metrics.streaming_sparse_recall_at_k(predictions, labels, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_recall_at_k}
-
-Computes recall@k of the predictions with respect to sparse labels.
-
-If `class_id` is not specified, we calculate recall as the ratio of true
-  positives (i.e., correct predictions, items in the top `k` highest
-  `predictions` that are found in the corresponding row in `labels`) to
-  actual positives (the full `labels` row).
-If `class_id` is specified, we calculate recall by considering only the rows
-  in the batch for which `class_id` is in `labels`, and computing the
-  fraction of them for which `class_id` is also in the top `k` highest
-  `predictions`.
-
-`streaming_sparse_recall_at_k` creates two local variables,
-`true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute
-the recall_at_k frequency. This frequency is ultimately returned as
-`recall_at_<k>`: an idempotent operation that simply divides
-`true_positive_at_<k>` by total (`true_positive_at_<k>` +
-`false_negative_at_<k>`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor`
-indicating the top `k` `predictions`. Set operations applied to `top_k` and
-`labels` calculate the true positives and false negatives weighted by
-`weights`. Then `update_op` increments `true_positive_at_<k>` and
-`false_negative_at_<k>` using these values.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Float `Tensor` with shape [D1, ... DN, num_classes] where
- N >= 1. Commonly, N=1 and predictions has shape [batch size, num_classes].
- The final dimension contains the logit values for each class. [D1, ... DN]
- must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match `predictions`.
- Values should be in range [0, num_classes), where num_classes is the last
- dimension of `predictions`. Values outside this range always count
- towards `false_negative_at_<k>`.
-* <b>`k`</b>: Integer, k for @k metric.
-* <b>`class_id`</b>: Integer class ID for which we want binary metrics. This should be
- in range [0, num_classes), where num_classes is the last dimension of
- `predictions`. If class_id is outside this range, the method returns NAN.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or n-1, where n is the rank of
- `labels`. If the latter, it must be broadcastable to `labels` (i.e., all
- dimensions must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependent ops.
-
-##### Returns:
-
-
-* <b>`recall`</b>: Scalar `float64` `Tensor` with the value of `true_positives` divided
- by the sum of `true_positives` and `false_negatives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_negatives` variables appropriately, and whose value matches
- `recall`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match
- `predictions`, or if either `metrics_collections` or `updates_collections`
- are not a list or tuple.
-
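-For example, a small sketch with two examples and four classes (values are
-illustrative):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([[0.1, 0.3, 0.2, 0.4],
-                           [0.1, 0.2, 0.3, 0.4]])
-labels = tf.constant([[3], [2]], dtype=tf.int64)
-recall, update_op = tf.contrib.metrics.streaming_sparse_recall_at_k(
-    predictions, labels, k=2)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)
-  print(sess.run(recall))  # 1.0: both labels appear in the top-2 predictions
-```
-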
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.opt.VariableClippingOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.opt.VariableClippingOptimizer.md
deleted file mode 100644
index 7dac3ec66a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.opt.VariableClippingOptimizer.md
+++ /dev/null
@@ -1,66 +0,0 @@
-Wrapper optimizer that clips the norm of specified variables after update.
-
-This optimizer delegates all aspects of gradient calculation and application
-to an underlying optimizer. After applying gradients, this optimizer then
-clips the variable to have a maximum L2 norm along specified dimensions.
-NB: this is quite different from clipping the norm of the gradients.
-
-Multiple instances of `VariableClippingOptimizer` may be chained to specify
-different max norms for different subsets of variables.
-
-This is more efficient at serving-time than using normalization during
-embedding lookup, at the expense of more expensive training and fewer
-guarantees about the norms.
-
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.__init__(opt, vars_to_clip_dims, max_norm, use_locking=False, colocate_clip_ops_with_vars=False, name='VariableClipping')` {#VariableClippingOptimizer.__init__}
-
-Construct a new clip-norm optimizer.
-
-##### Args:
-
-
-* <b>`opt`</b>: The actual optimizer that will be used to compute and apply the
- gradients. Must be one of the Optimizer classes.
-* <b>`vars_to_clip_dims`</b>: A dict with keys as Variables and values as lists
- of dimensions along which to compute the L2-norm. See
- `tf.clip_by_norm` for more details.
-* <b>`max_norm`</b>: The L2-norm to clip to, for all variables specified.
-* <b>`use_locking`</b>: If `True` use locks for clip update operations.
-* <b>`colocate_clip_ops_with_vars`</b>: If `True`, try colocating the clip norm
- ops with the corresponding variable.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "VariableClipping".
-
-
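-For example, a sketch that caps the L2 norm of each row of an embedding
-matrix after every update (`loss` is a hypothetical scalar loss tensor,
-assumed to be defined elsewhere):
-
-```python
-import tensorflow as tf
-
-embedding = tf.get_variable('embedding', shape=[1000, 64])
-base_opt = tf.train.AdagradOptimizer(learning_rate=0.1)
-# Clip each embedding row (norm taken along dimension 1) to L2 norm <= 1.
-opt = tf.contrib.opt.VariableClippingOptimizer(
-    base_opt, vars_to_clip_dims={embedding: [1]}, max_norm=1.0)
-train_op = opt.minimize(loss)  # `loss`: hypothetical, defined elsewhere
-```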
-
-#### Other Methods
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#VariableClippingOptimizer.apply_gradients}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.compute_gradients(*args, **kwargs)` {#VariableClippingOptimizer.compute_gradients}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.get_slot(*args, **kwargs)` {#VariableClippingOptimizer.get_slot}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.VariableClippingOptimizer.get_slot_names(*args, **kwargs)` {#VariableClippingOptimizer.get_slot_names}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.BasicLSTMCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.BasicLSTMCell.md
deleted file mode 100644
index eb4a38a8c3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.BasicLSTMCell.md
+++ /dev/null
@@ -1,72 +0,0 @@
-Basic LSTM recurrent network cell.
-
-The implementation is based on: http://arxiv.org/abs/1409.2329.
-
-We add forget_bias (default: 1) to the biases of the forget gate in order to
-reduce the scale of forgetting at the beginning of training.
-
-It does not allow cell clipping or a projection layer, and it does not
-use peephole connections: it is the basic baseline.
-
-For advanced models, please use the full LSTMCell that follows.
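-
-For example, a minimal sketch of a single step (shapes are illustrative):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=128)
-inputs = tf.placeholder(tf.float32, [32, 64])    # [batch, input_size]
-state = cell.zero_state(batch_size=32, dtype=tf.float32)
-output, new_state = cell(inputs, state)          # state is a (c, m) tuple
-```
-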
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.__call__(inputs, state, scope=None)` {#BasicLSTMCell.__call__}
-
-Long short-term memory cell (LSTM).
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=tanh)` {#BasicLSTMCell.__init__}
-
-Initialize the basic LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
-* <b>`input_size`</b>: Deprecated and unused.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
- the `c_state` and `m_state`. If False, they are concatenated
- along the column axis. The latter behavior will soon be deprecated.
-* <b>`activation`</b>: Activation function of the inner states.
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.output_size` {#BasicLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.state_size` {#BasicLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicLSTMCell.zero_state(batch_size, dtype)` {#BasicLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is an
-  `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.GRUCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.GRUCell.md
deleted file mode 100644
index 4f7cf4402f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.GRUCell.md
+++ /dev/null
@@ -1,51 +0,0 @@
-Gated Recurrent Unit cell (cf. http://arxiv.org/abs/1406.1078).
-- - -
-
-#### `tf.contrib.rnn.GRUCell.__call__(inputs, state, scope=None)` {#GRUCell.__call__}
-
-Gated recurrent unit (GRU) with `num_units` cells.
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUCell.__init__(num_units, input_size=None, activation=tanh)` {#GRUCell.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUCell.output_size` {#GRUCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUCell.state_size` {#GRUCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUCell.zero_state(batch_size, dtype)` {#GRUCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is an
-  `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.GridLSTMCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.GridLSTMCell.md
deleted file mode 100644
index a88d5f8977..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.GridLSTMCell.md
+++ /dev/null
@@ -1,134 +0,0 @@
-Grid Long short-term memory unit (LSTM) recurrent network cell.
-
-The default is based on:
- Nal Kalchbrenner, Ivo Danihelka and Alex Graves
- "Grid Long Short-Term Memory," Proc. ICLR 2016.
- http://arxiv.org/abs/1507.01526
-
-When peephole connections are used, the implementation is based on:
- Tara N. Sainath and Bo Li
- "Modeling Time-Frequency Patterns with LSTM vs. Convolutional Architectures
- for LVCSR Tasks." submitted to INTERSPEECH, 2016.
-
-The code uses optional peephole connections, shared_weights and cell clipping.
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.__call__(inputs, state, scope=None)` {#GridLSTMCell.__call__}
-
-Run one step of LSTM.
-
-##### Args:
-
-
-* <b>`inputs`</b>: input Tensor, 2D, [batch, feature_size].
-* <b>`state`</b>: Tensor or tuple of Tensors, 2D, [batch, state_size], depends on the
- flag self._state_is_tuple.
-* <b>`scope`</b>: (optional) VariableScope for the created subgraph; if None, it
- defaults to "GridLSTMCell".
-
-##### Returns:
-
- A tuple containing:
- - A 2D, [batch, output_dim], Tensor representing the output of the LSTM
- after reading "inputs" when previous state was "state".
- Here output_dim is num_units.
- - A 2D, [batch, state_size], Tensor representing the new state of LSTM
- after reading "inputs" when previous state was "state".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an input_size was specified and the provided inputs have
- a different dimension.
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.__init__(num_units, use_peepholes=False, share_time_frequency_weights=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None, num_frequency_blocks=None, start_freqindex_list=None, end_freqindex_list=None, couple_input_forget_gates=False, state_is_tuple=False)` {#GridLSTMCell.__init__}
-
-Initialize the parameters for an LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell
-* <b>`use_peepholes`</b>: (optional) bool, default False. Set True to enable
- diagonal/peephole connections.
-* <b>`share_time_frequency_weights`</b>: (optional) bool, default False. Set True to
- enable shared cell weights between time and frequency LSTMs.
-* <b>`cell_clip`</b>: (optional) A float value, default None, if provided the cell
- state is clipped by this value prior to the cell output activation.
-* <b>`initializer`</b>: (optional) The initializer to use for the weight and
- projection matrices, default None.
-* <b>`num_unit_shards`</b>: (optional) int, default 1. How to split the weight
-  matrix. If > 1, the weight matrix is stored across num_unit_shards.
-* <b>`forget_bias`</b>: (optional) float, default 1.0, The initial bias of the
-  forget gates, used to reduce the scale of forgetting at the beginning
-  of training.
-* <b>`feature_size`</b>: (optional) int, default None, The size of the input feature
- the LSTM spans over.
-* <b>`frequency_skip`</b>: (optional) int, default None, The amount the LSTM filter
- is shifted by in frequency.
-* <b>`num_frequency_blocks`</b>: [required] A list of frequency blocks needed to
- cover the whole input feature splitting defined by start_freqindex_list
- and end_freqindex_list.
-* <b>`start_freqindex_list`</b>: [optional], list of ints, default None, The
- starting frequency index for each frequency block.
-* <b>`end_freqindex_list`</b>: [optional], list of ints, default None. The ending
- frequency index for each frequency block.
-* <b>`couple_input_forget_gates`</b>: (optional) bool, default False, Whether to
- couple the input and forget gates, i.e. f_gate = 1.0 - i_gate, to reduce
- model parameters and computation cost.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
- the `c_state` and `m_state`. By default (False), they are concatenated
- along the column axis. This default behavior will soon be deprecated.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the num_frequency_blocks list is not specified
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.output_size` {#GridLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.state_size` {#GridLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.state_tuple_type` {#GridLSTMCell.state_tuple_type}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GridLSTMCell.zero_state(batch_size, dtype)` {#GridLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is an
-  `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.TimeReversedFusedRNN.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.TimeReversedFusedRNN.md
deleted file mode 100644
index 0d9eb1ff90..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.TimeReversedFusedRNN.md
+++ /dev/null
@@ -1,25 +0,0 @@
-This is an adaptor to time-reverse a FusedRNNCell.
-
-For example,
-
-```python
-cell = tf.contrib.rnn.BasicRNNCell(10)
-fw_lstm = tf.contrib.rnn.FusedRNNCellAdaptor(cell, use_dynamic_rnn=True)
-bw_lstm = tf.contrib.rnn.TimeReversedFusedRNN(fw_lstm)
-fw_out, fw_state = fw_lstm(inputs)
-bw_out, bw_state = bw_lstm(inputs)
-```
-- - -
-
-#### `tf.contrib.rnn.TimeReversedFusedRNN.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#TimeReversedFusedRNN.__call__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeReversedFusedRNN.__init__(cell)` {#TimeReversedFusedRNN.__init__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.static_state_saving_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.static_state_saving_rnn.md
deleted file mode 100644
index 0f6ef9e409..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.rnn.static_state_saving_rnn.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.contrib.rnn.static_state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None)` {#static_state_saving_rnn}
-
-RNN that accepts a state saver for time-truncated RNN calculation.
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of `RNNCell`.
-* <b>`inputs`</b>: A length T list of inputs, each a `Tensor` of shape
- `[batch_size, input_size]`.
-* <b>`state_saver`</b>: A state saver object with methods `state` and `save_state`.
-* <b>`state_name`</b>: Python string or tuple of strings. The name to use with the
- state_saver. If the cell returns tuples of states (i.e.,
- `cell.state_size` is a tuple) then `state_name` should be a tuple of
- strings having the same length as `cell.state_size`. Otherwise it should
- be a single string.
-* <b>`sequence_length`</b>: (optional) An int32/int64 vector size [batch_size].
- See the documentation for rnn() for more details about sequence_length.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
-
-##### Returns:
-
- A pair (outputs, state) where:
- outputs is a length T list of outputs (one for each input)
-    state is the final state
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
-* <b>`ValueError`</b>: If `inputs` is `None` or an empty list, or if the arity and
- type of `state_name` does not match that of `cell.state_size`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.training.resample_at_rate.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.training.resample_at_rate.md
deleted file mode 100644
index 7142c66c2e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.training.resample_at_rate.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.contrib.training.resample_at_rate(inputs, rates, scope=None, seed=None, back_prop=False)` {#resample_at_rate}
-
-Given `inputs` tensors, stochastically resamples each at a given rate.
-
-For example, if the inputs are `[[a1, a2], [b1, b2]]` and the rates
-tensor contains `[3, 1]`, then the return value may look like `[[a1,
-a2, a1, a1], [b1, b2, b1, b1]]`. However, many other outputs are
-possible, since this is stochastic -- averaged over many repeated
-calls, each set of inputs should appear in the output `rate` times
-the number of invocations.
-
-Uses Knuth's method to generate samples from the Poisson
-distribution (but instead of just incrementing a count, actually
-emits the input); this is described at
-https://en.wikipedia.org/wiki/Poisson_distribution in the section on
-generating Poisson-distributed random variables.
-
-Note that this method is not appropriate for large rate values: with
-float16 it will stop performing correctly for rates above 9.17;
-float32, 87; and float64, 708. (These are the base-e versions of the
-minimum representable exponent for each type.)
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of tensors, each of which has a shape of `[batch_size, ...]`
-* <b>`rates`</b>: A tensor of shape `[batch_size]` containing the resampling rates
- for each input.
-* <b>`scope`</b>: Scope for the op.
-* <b>`seed`</b>: Random seed to use.
-* <b>`back_prop`</b>: Whether to allow back-propagation through this op.
-
-##### Returns:
-
- Selections from the input tensors.
-
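-For example, a short sketch (output varies across runs, since resampling
-is stochastic):
-
-```python
-import tensorflow as tf
-
-inputs = [tf.constant([[1.0, 2.0], [3.0, 4.0]])]
-rates = tf.constant([3.0, 1.0])
-resampled = tf.contrib.training.resample_at_rate(inputs, rates)
-
-with tf.Session() as sess:
-  # On average, row [1., 2.] appears ~3 times and row [3., 4.] ~1 time.
-  print(sess.run(resampled))
-```
-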
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.util.constant_value.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.util.constant_value.md
deleted file mode 100644
index 58ba7b0abb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.contrib.util.constant_value.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.util.constant_value(tensor)` {#constant_value}
-
-Returns the constant value of the given tensor, if efficiently calculable.
-
-This function attempts to partially evaluate the given tensor, and
-returns its value as a numpy ndarray if this succeeds.
-
-TODO(mrry): Consider whether this function should use a registration
-mechanism like gradients and ShapeFunctions, so that it is easily
-extensible.
-
-NOTE: If `constant_value(tensor)` returns a non-`None` result, it will no
-longer be possible to feed a different value for `tensor`. This allows the
-result of this function to influence the graph that is constructed, and
-permits static shape optimizations.
-
-##### Args:
-
-
-* <b>`tensor`</b>: The Tensor to be evaluated.
-
-##### Returns:
-
- A numpy ndarray containing the constant value of the given `tensor`,
- or None if it cannot be calculated.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if tensor is not an ops.Tensor.
-
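-For example, a short sketch:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.util import constant_value
-
-a = tf.constant([1, 2, 3])
-b = tf.placeholder(tf.int32)
-
-print(constant_value(a))  # array([1, 2, 3], dtype=int32)
-print(constant_value(b))  # None: the value cannot be computed statically
-```
-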
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.convert_to_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.convert_to_tensor.md
deleted file mode 100644
index 226e01ead0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.convert_to_tensor.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.convert_to_tensor(value, dtype=None, name=None, preferred_dtype=None)` {#convert_to_tensor}
-
-Converts the given `value` to a `Tensor`.
-
-This function converts Python objects of various types to `Tensor`
-objects. It accepts `Tensor` objects, numpy arrays, Python lists,
-and Python scalars. For example:
-
-```python
-import numpy as np
-
-def my_func(arg):
- arg = tf.convert_to_tensor(arg, dtype=tf.float32)
- return tf.matmul(arg, arg) + arg
-
-# The following calls are equivalent.
-value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
-value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
-value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
-```
-
-This function can be useful when composing a new operation in Python
-(such as `my_func` in the example above). All standard Python op
-constructors apply this function to each of their Tensor-valued
-inputs, which allows those ops to accept numpy arrays, Python lists,
-and scalars in addition to `Tensor` objects.
-
-##### Args:
-
-
-* <b>`value`</b>: An object whose type has a registered `Tensor` conversion function.
-* <b>`dtype`</b>: Optional element type for the returned tensor. If missing, the
- type is inferred from the type of `value`.
-* <b>`name`</b>: Optional name to use if a new `Tensor` is created.
-* <b>`preferred_dtype`</b>: Optional element type for the returned tensor,
- used when dtype is None. In some cases, a caller may not have a
- dtype in mind when converting to a tensor, so preferred_dtype
- can be used as a soft preference. If the conversion to
- `preferred_dtype` is not possible, this argument has no effect.
-
-##### Returns:
-
-  A `Tensor` based on `value`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If no conversion function is registered for `value`.
-* <b>`RuntimeError`</b>: If a registered conversion function returns an invalid value.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.diag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.diag.md
deleted file mode 100644
index 3279122875..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.diag.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.diag(diagonal, name=None)` {#diag}
-
-Returns a diagonal tensor with given diagonal values.
-
-Given a `diagonal`, this operation returns a tensor with the `diagonal` and
-everything else padded with zeros. The diagonal is computed as follows:
-
-Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of
-rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
-
-`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.
-
-For example:
-
-```prettyprint
-# 'diagonal' is [1, 2, 3, 4]
-tf.diag(diagonal) ==> [[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]]
-```
-
-##### Args:
-
-
-* <b>`diagonal`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
- Rank k tensor where k is at most 3.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `diagonal`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.divide.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.divide.md
deleted file mode 100644
index 8db7bf156c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.divide.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.divide(x, y, name=None)` {#divide}
-
-Computes Python-style division of `x` by `y`.
-
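-A short sketch of the semantics (true division, as with Python 3's `/`,
-even for integer inputs):
-
-```python
-import tensorflow as tf
-
-with tf.Session() as sess:
-  print(sess.run(tf.divide(7, 2)))  # 3.5: true division, not floor division
-```
-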
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.einsum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.einsum.md
deleted file mode 100644
index 45597f2056..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.einsum.md
+++ /dev/null
@@ -1,74 +0,0 @@
-### `tf.einsum(equation, *inputs)` {#einsum}
-
-A generalized contraction between tensors of arbitrary dimension.
-
-This function returns a tensor whose elements are defined by `equation`,
-which is written in a shorthand form inspired by the Einstein summation
-convention. As an example, consider multiplying two matrices
-A and B to form a matrix C. The elements of C are given by:
-
-```
- C[i,k] = sum_j A[i,j] * B[j,k]
-```
-
-The corresponding `equation` is:
-
-```
- ij,jk->ik
-```
-
-In general, the `equation` is obtained from the more familiar element-wise
-equation by
- 1. removing variable names, brackets, and commas,
- 2. replacing "*" with ",",
- 3. dropping summation signs, and
- 4. moving the output to the right, and replacing "=" with "->".
-
-Many common operations can be expressed in this way. For example:
-
-```python
-# Matrix multiplication
->>> einsum('ij,jk->ik', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k]
-
-# Dot product
->>> einsum('i,i->', u, v) # output = sum_i u[i]*v[i]
-
-# Outer product
->>> einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j]
-
-# Transpose
->>> einsum('ij->ji', m) # output[j,i] = m[i,j]
-
-# Batch matrix multiplication
->>> einsum('aij,ajk->aik', s, t) # out[a,i,k] = sum_j s[a,i,j] * t[a, j, k]
-```
-
-This function behaves like `numpy.einsum`, but does not support:
-* Ellipses (subscripts like `ij...,jk...->ik...`)
-* Subscripts where an axis appears more than once for a single input
- (e.g. `ijj,k->ik`).
-* Subscripts that are summed across multiple inputs (e.g., `ij,ij,jk->ik`).
-
-##### Args:
-
-
-* <b>`equation`</b>: a `str` describing the contraction, in the same format as
- `numpy.einsum`.
-* <b>`inputs`</b>: the inputs to contract (each one a `Tensor`), whose shapes should
- be consistent with `equation`.
-
-##### Returns:
-
- The contracted `Tensor`, with shape determined by `equation`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If
- - the format of `equation` is incorrect,
- - the number of inputs implied by `equation` does not match `len(inputs)`,
- - an axis appears in the output subscripts but not in any of the inputs,
- - the number of dimensions of an input differs from the number of
- indices in its subscript, or
- - the input shapes are inconsistent along a particular axis.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.erf.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.erf.md
deleted file mode 100644
index 21e0f14be4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.erf.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.erf(x, name=None)` {#erf}
-
-Computes the Gauss error function of `x` element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types:
-  `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.AlreadyExistsError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.AlreadyExistsError.md
deleted file mode 100644
index 85425df298..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.errors.AlreadyExistsError.md
+++ /dev/null
@@ -1,14 +0,0 @@
-Raised when an entity that we attempted to create already exists.
-
-For example, running an operation that saves a file
-(e.g. [`tf.train.Saver.save()`](../../api_docs/python/train.md#Saver.save))
-could potentially raise this exception if an explicit filename for an
-existing file was passed.
-
-- - -
-
-#### `tf.errors.AlreadyExistsError.__init__(node_def, op, message)` {#AlreadyExistsError.__init__}
-
-Creates an `AlreadyExistsError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.fake_quant_with_min_max_args_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.fake_quant_with_min_max_args_gradient.md
deleted file mode 100644
index 5c93c3e046..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.fake_quant_with_min_max_args_gradient.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.fake_quant_with_min_max_args_gradient(gradients, inputs, min=None, max=None, name=None)` {#fake_quant_with_min_max_args_gradient}
-
-Compute gradients for a FakeQuantWithMinMaxArgs operation.
-
-##### Args:
-
-
-* <b>`gradients`</b>: A `Tensor` of type `float32`.
- Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
- Values passed as inputs to the FakeQuantWithMinMaxArgs operation.
-* <b>`min`</b>: An optional `float`. Defaults to `-6`.
-* <b>`max`</b>: An optional `float`. Defaults to `6`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
- Backpropagated gradients below the FakeQuantWithMinMaxArgs operation:
- `gradients * (inputs >= min && inputs <= max)`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.get_default_session.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.get_default_session.md
deleted file mode 100644
index c564366e8b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.get_default_session.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.get_default_session()` {#get_default_session}
-
-Returns the default session for the current thread.
-
-The returned `Session` will be the innermost session on which a
-`Session` or `Session.as_default()` context has been entered.
-
-NOTE: The default session is a property of the current thread. If you
-create a new thread, and wish to use the default session in that
-thread, you must explicitly add a `with sess.as_default():` in that
-thread's function.
-
-##### Returns:
-
- The default `Session` being used in the current thread.
-
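-For example, a short sketch:
-
-```python
-import tensorflow as tf
-
-c = tf.constant(42)
-with tf.Session() as sess:          # entering the block installs `sess`
-  assert tf.get_default_session() is sess
-  print(c.eval())                   # `Tensor.eval()` uses the default session
-```
-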
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.gradients.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.gradients.md
deleted file mode 100644
index ea710b2a15..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.gradients.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#gradients}
-
-Constructs symbolic partial derivatives of the sum of `ys` w.r.t. each `x` in `xs`.
-
-`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys`
-is a list of `Tensor`, holding the gradients received by the
-`ys`. The list must be the same length as `ys`.
-
-`gradients()` adds ops to the graph to output the partial
-derivatives of `ys` with respect to `xs`. It returns a list of
-`Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)`
-for y in `ys`.
-
-`grad_ys` is a list of tensors of the same length as `ys` that holds
-the initial gradients for each y in `ys`. When `grad_ys` is None,
-we fill in a tensor of '1's of the shape of y for each y in `ys`. A
-user can provide their own initial `grad_ys` to compute the
-derivatives using a different initial gradient for each y (e.g., if
-one wanted to weight the gradient differently for each value in
-each y).
-
-##### Args:
-
-
-* <b>`ys`</b>: A `Tensor` or list of tensors to be differentiated.
-* <b>`xs`</b>: A `Tensor` or list of tensors to be used for differentiation.
-* <b>`grad_ys`</b>: Optional. A `Tensor` or list of tensors the same size as
- `ys` and holding the gradients computed for each y in `ys`.
-* <b>`name`</b>: Optional name to use for grouping all the gradient ops together.
-  Defaults to 'gradients'.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`gate_gradients`</b>: If True, add a tuple around the gradients returned
-  for each operation. This avoids some race conditions.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Accepted values are constants defined in the class `AggregationMethod`.
-
-##### Returns:
-
- A list of `sum(dy/dx)` for each x in `xs`.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: if one of the operations between `x` and `y` does not
- have a registered gradient function.
-* <b>`ValueError`</b>: if the arguments are invalid.
-
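-For example, a sketch of how partials are summed over the `ys`:
-
-```python
-import tensorflow as tf
-
-x = tf.constant(3.0)
-y1 = x * x      # dy1/dx = 2x = 6
-y2 = 2.0 * x    # dy2/dx = 2
-grads = tf.gradients([y1, y2], [x])
-
-with tf.Session() as sess:
-  print(sess.run(grads))  # [8.0]: the sum of the two partial derivatives
-```
-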
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.greater_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.greater_equal.md
deleted file mode 100644
index d6ce057c13..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.greater_equal.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.greater_equal(x, y, name=None)` {#greater_equal}
-
-Returns the truth value of (x >= y) element-wise.
-
-*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.igammac.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.igammac.md
deleted file mode 100644
index 2d935bb6e3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.igammac.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.igammac(a, x, name=None)` {#igammac}
-
-Compute the upper regularized incomplete Gamma function `Q(a, x)`.
-
-The upper regularized incomplete Gamma function is defined as:
-
-```
-Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)
-```
-where
-```
-Gamma(a, x) = int_{x}^{\infty} t^{a-1} exp(-t) dt
-```
-is the upper incomplete Gamma function.
-
-Note, above, `P(a, x)` (`Igamma`) is the lower regularized incomplete
-Gamma function.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`x`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.adjust_hue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.adjust_hue.md
deleted file mode 100644
index e334e26184..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.adjust_hue.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.image.adjust_hue(image, delta, name=None)` {#adjust_hue}
-
-Adjust hue of an RGB image.
-
-This is a convenience method that converts an RGB image to float
-representation, converts it to HSV, adds an offset to the hue channel, converts
-back to RGB, and then back to the original data type. If several adjustments
-are chained, it is advisable to minimize the number of redundant conversions.
-
-`image` is an RGB image. The image hue is adjusted by converting the
-image to HSV and rotating the hue channel (H) by
-`delta`. The image is then converted back to RGB.
-
-`delta` must be in the interval `[-1, 1]`.
-
-##### Args:
-
-
-* <b>`image`</b>: RGB image or images. Size of the last dimension must be 3.
-* <b>`delta`</b>: float. How much to add to the hue channel.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- Adjusted image(s), same shape and DType as `image`.
-
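-For example, a short sketch (the placeholder shape is illustrative):
-
-```python
-import tensorflow as tf
-
-image = tf.placeholder(tf.uint8, [None, None, 3])  # RGB image
-# Rotate the hue channel by a tenth of a full cycle.
-adjusted = tf.image.adjust_hue(image, delta=0.1)
-```
-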
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.central_crop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.central_crop.md
deleted file mode 100644
index 4e6b6115f8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.central_crop.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.image.central_crop(image, central_fraction)` {#central_crop}
-
-Crop the central region of the image.
-
-Remove the outer parts of an image but retain the central region of the image
-along each dimension. If we specify central_fraction = 0.5, this function
-returns the region marked with "X" in the below diagram.
-
- --------
- | |
- | XXXX |
- | XXXX |
- | | where "X" is the central 50% of the image.
- --------
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D float Tensor of shape [height, width, depth]
-* <b>`central_fraction`</b>: float (0, 1], fraction of size to crop
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `central_fraction` is not within (0, 1].
-
-##### Returns:
-
- 3-D float Tensor
-
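-For example, a short sketch:
-
-```python
-import tensorflow as tf
-
-image = tf.zeros([100, 200, 3])
-cropped = tf.image.central_crop(image, central_fraction=0.5)
-
-with tf.Session() as sess:
-  # Keeps the central 50% along height and width.
-  print(sess.run(cropped).shape)  # (50, 100, 3)
-```
-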
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_hue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_hue.md
deleted file mode 100644
index 09a4ebc17f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.random_hue.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.image.random_hue(image, max_delta, seed=None)` {#random_hue}
-
-Adjust the hue of an RGB image by a random factor.
-
-Equivalent to `adjust_hue()` but uses a `delta` randomly
-picked in the interval `[-max_delta, max_delta]`.
-
-`max_delta` must be in the interval `[0, 0.5]`.
-
-##### Args:
-
-
-* <b>`image`</b>: RGB image or images. Size of the last dimension must be 3.
-* <b>`max_delta`</b>: float. Maximum value for the random delta.
-* <b>`seed`</b>: An operation-specific seed. It will be used in conjunction
- with the graph-level seed to determine the real seeds that will be
- used in this operation. Please see the documentation of
- set_random_seed for its interaction with the graph-level random seed.
-
-##### Returns:
-
- 3-D float tensor of shape `[height, width, channels]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `max_delta` is invalid.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_bicubic.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_bicubic.md
deleted file mode 100644
index 1805c7423d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.image.resize_bicubic.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.image.resize_bicubic(images, size, align_corners=None, name=None)` {#resize_bicubic}
-
-Resize `images` to `size` using bicubic interpolation.
-
-Input images can be of different types but output images are always float.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
- If true, rescale input by (new_height - 1) / (height - 1), which
- exactly aligns the 4 corners of images and resized images. If false, rescale
-  by new_height / height. The width dimension is treated similarly.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`. 4-D with shape
- `[batch, new_height, new_width, channels]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.invert_permutation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.invert_permutation.md
deleted file mode 100644
index 20cab18208..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.invert_permutation.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.invert_permutation(x, name=None)` {#invert_permutation}
-
-Computes the inverse permutation of a tensor.
-
-This operation computes the inverse of an index permutation. It takes a 1-D
-integer tensor `x`, which represents the indices of a zero-based array, and
-swaps each value with its index position. In other words, for an output tensor
-`y` and an input tensor `x`, this operation computes the following:
-
-`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`
-
-The values must include 0. There can be no duplicate values or negative values.
-
-For example:
-
-```prettyprint
-# tensor `x` is [3, 4, 0, 2, 1]
-invert_permutation(x) ==> [2, 4, 3, 0, 1]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`. 1-D.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.is_strictly_increasing.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.is_strictly_increasing.md
deleted file mode 100644
index bdaedd519e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.is_strictly_increasing.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.is_strictly_increasing(x, name=None)` {#is_strictly_increasing}
-
-Returns `True` if `x` is strictly increasing.
-
-Elements of `x` are compared in row-major order. The tensor `[x[0],...]`
-is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`.
-If `x` has fewer than two elements, it is trivially strictly increasing.
-
-See also: `is_non_decreasing`
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`name`</b>: A name for this operation (optional).
- Defaults to "is_strictly_increasing"
-
-##### Returns:
-
- Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `x` is not a numeric tensor.
-
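-For example, a short sketch:
-
-```python
-import tensorflow as tf
-
-with tf.Session() as sess:
-  print(sess.run(tf.is_strictly_increasing([1, 2, 3])))  # True
-  print(sess.run(tf.is_strictly_increasing([1, 1, 2])))  # False: (1, 1)
-  print(sess.run(tf.is_strictly_increasing([7])))        # True, trivially
-```
-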
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.linspace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.linspace.md
deleted file mode 100644
index 29b8993fe6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.linspace.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.linspace(start, stop, num, name=None)` {#linspace}
-
-Generates values in an interval.
-
-A sequence of `num` evenly-spaced values is generated beginning at `start`.
-If `num > 1`, the values in the sequence increase by `(stop - start) / (num - 1)`,
-so that the last one is exactly `stop`.
-
-For example:
-
-```
-tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0 11.0 12.0]
-```
-
-##### Args:
-
-
-* <b>`start`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- First entry in the range.
-* <b>`stop`</b>: A `Tensor`. Must have the same type as `start`.
- Last entry in the range.
-* <b>`num`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- Number of values to generate.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `start`. 1-D. The generated values.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.log1p.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.log1p.md
deleted file mode 100644
index e861034528..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.log1p.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.log1p(x, name=None)` {#log1p}
-
-Computes natural logarithm of (1 + x) element-wise.
-
-I.e., \\(y = \log_e (1 + x)\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.map_fn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.map_fn.md
deleted file mode 100644
index 1cbc6177de..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.map_fn.md
+++ /dev/null
@@ -1,96 +0,0 @@
-### `tf.map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None)` {#map_fn}
-
-map on the list of tensors unpacked from `elems` on dimension 0.
-
-The simplest version of `map` repeatedly applies the callable `fn` to a
-sequence of elements from first to last. The elements are made of the
-tensors unpacked from `elems`. `dtype` is the data type of the return
-value of `fn`. Users must provide `dtype` if it is different from
-the data type of `elems`.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `[values.shape[0]] + fn(values[0]).shape`.
-
-This method also allows multi-arity `elems` and output of `fn`. If `elems`
-is a (possibly nested) list or tuple of tensors, then each of these tensors
-must have a matching first (unpack) dimension. The signature of `fn` may
-match the structure of `elems`. That is, if `elems` is
-`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is:
-`fn = lambda (t1, [t2, t3, [t4, t5]]):`.
-
-Furthermore, `fn` may emit a different structure than its input. For example,
-`fn` may look like: `fn = lambda t1: (t1 + 1, t1 - 1)`. In this case,
-the `dtype` parameter is not optional: `dtype` must be a type or (possibly
-nested) tuple of types matching the output of `fn`.
-
-To apply a functional operation to the nonzero elements of a SparseTensor
-one of the following methods is recommended. First, if the function is
-expressible as TensorFlow ops, use
-
-```python
- result = SparseTensor(input.indices, fn(input.values), input.dense_shape)
-```
-
-If, however, the function is not expressible as a TensorFlow op, then use
-
-```python
-result = SparseTensor(
- input.indices, map_fn(fn, input.values), input.dense_shape)
-```
-
-instead.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed. It accepts one argument, which will
- have the same (possibly nested) structure as `elems`. Its output
- must have the same structure as `dtype` if one is provided, otherwise
- it must have the same structure as `elems`.
-* <b>`elems`</b>: A tensor or (possibly nested) sequence of tensors, each of which
- will be unpacked along their first dimension. The nested sequence
- of the resulting slices will be applied to `fn`.
-* <b>`dtype`</b>: (optional) The output type(s) of `fn`. If `fn` returns a structure
- of Tensors differing from the structure of `elems`, then `dtype` is not
- optional and must have the same structure as the output of `fn`.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables support for back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`infer_shape`</b>: (optional) False disables tests for consistent output shapes.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor or (possibly nested) sequence of tensors. Each tensor packs the
- results of applying `fn` to tensors unpacked from `elems` along the first
- dimension, from first to last.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable or the structure of the output of
- `fn` and `dtype` do not match, or if elems is a SparseTensor.
-* <b>`ValueError`</b>: if the lengths of the output of `fn` and `dtype` do not match.
-
-##### Examples:
-
- ```python
- elems = np.array([1, 2, 3, 4, 5, 6])
- squares = map_fn(lambda x: x * x, elems)
- # squares == [1, 4, 9, 16, 25, 36]
- ```
-
- ```python
- elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
- alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
- # alternate == [-1, 2, -3]
- ```
-
- ```python
- elems = np.array([1, 2, 3])
- alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
- # alternates[0] == [1, 2, 3]
- # alternates[1] == [-1, -2, -3]
- ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.matching_files.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.matching_files.md
deleted file mode 100644
index 19262f8dd5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.matching_files.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.matching_files(pattern, name=None)` {#matching_files}
-
-Returns the set of files matching one or more glob patterns.
-
-Note that this routine only supports wildcard characters in the
-basename portion of the pattern, not in the directory portion.
-
-##### Args:
-
-
-* <b>`pattern`</b>: A `Tensor` of type `string`.
- Shell wildcard pattern(s). Scalar or vector of type string.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. A vector of matching filenames.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.negative.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.negative.md
deleted file mode 100644
index 80a062f68f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.negative.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.negative(x, name=None)` {#negative}
-
-Computes numerical negative value element-wise.
-
-I.e., \\(y = -x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.compute_accidental_hits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.compute_accidental_hits.md
deleted file mode 100644
index 9d5bb30303..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.compute_accidental_hits.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None)` {#compute_accidental_hits}
-
-Compute the position ids in `sampled_candidates` matching `true_classes`.
-
-In Candidate Sampling, this operation facilitates virtually removing
-sampled classes which happen to match target classes. This is done
-in Sampled Softmax and Sampled Logistic.
-
-See our [Candidate Sampling Algorithms
-Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-
-We presuppose that the `sampled_candidates` are unique.
-
-We call it an 'accidental hit' when one of the target classes
-matches one of the sampled classes. This operation reports
-accidental hits as triples `(index, id, weight)`, where `index`
-represents the row number in `true_classes`, `id` represents the
-position in `sampled_candidates`, and weight is `-FLOAT_MAX`.
-
-The result of this op should be passed through a `sparse_to_dense`
-operation, then added to the logits of the sampled classes. This
-removes the contradictory effect of accidentally sampling the true
-target classes as noise classes for the same example.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled_candidates output of CandidateSampler.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`indices`</b>: A `Tensor` of type `int32` and shape `[num_accidental_hits]`.
- Values indicate rows in `true_classes`.
-* <b>`ids`</b>: A `Tensor` of type `int64` and shape `[num_accidental_hits]`.
- Values indicate positions in `sampled_candidates`.
-* <b>`weights`</b>: A `Tensor` of type `float` and shape `[num_accidental_hits]`.
- Each value is `-FLOAT_MAX`.
-
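-For example, a short sketch (values are illustrative):
-
-```python
-import tensorflow as tf
-
-true_classes = tf.constant([[1, 2]], dtype=tf.int64)  # one example, two targets
-sampled = tf.constant([2, 5, 7], dtype=tf.int64)
-indices, ids, weights = tf.nn.compute_accidental_hits(
-    true_classes, sampled, num_true=2)
-
-with tf.Session() as sess:
-  # Class 2 was accidentally sampled: row 0, position 0, weight -FLOAT_MAX.
-  print(sess.run([indices, ids, weights]))
-```
-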
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.conv1d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.conv1d.md
deleted file mode 100644
index d073ce7fb2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.conv1d.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.nn.conv1d(value, filters, stride, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv1d}
-
-Computes a 1-D convolution given 3-D input and filter tensors.
-
-Given an input tensor of shape
- [batch, in_width, in_channels]
-if data_format is "NHWC", or
- [batch, in_channels, in_width]
-if data_format is "NCHW",
-and a filter / kernel tensor of shape
-[filter_width, in_channels, out_channels], this op reshapes
-the arguments to pass them to conv2d to perform the equivalent
-convolution operation.
-
-Internally, this op reshapes the input tensors and invokes `tf.nn.conv2d`.
-For example, if `data_format` does not start with "NC", a tensor of shape
- [batch, in_width, in_channels]
-is reshaped to
- [batch, 1, in_width, in_channels],
-and the filter is reshaped to
- [1, filter_width, in_channels, out_channels].
-The result is then reshaped back to
- [batch, out_width, out_channels]
-(where out_width is a function of the stride and padding as in conv2d) and
-returned to the caller.
-
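-A sketch of that equivalence for the default `"NHWC"` case (shapes here are
-illustrative):
-
-```python
-value = tf.random_normal([8, 100, 3])    # [batch, in_width, in_channels]
-filters = tf.random_normal([5, 3, 16])   # [filter_width, in_chan, out_chan]
-out1 = tf.nn.conv1d(value, filters, stride=2, padding="SAME")
-
-# The same convolution, spelled out via conv2d on a height-1 image:
-out2 = tf.squeeze(
-    tf.nn.conv2d(tf.expand_dims(value, 1),    # [batch, 1, in_width, in_chan]
-                 tf.expand_dims(filters, 0),  # [1, filter_width, in, out]
-                 strides=[1, 1, 2, 1],
-                 padding="SAME"),
-    axis=[1])                                 # back to [batch, out_width, out]
-```
-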
-##### Args:
-
-
-* <b>`value`</b>: A 3D `Tensor`. Must be of type `float32` or `float64`.
-* <b>`filters`</b>: A 3D `Tensor`. Must have the same type as `value`.
-* <b>`stride`</b>: An `integer`. The number of entries by which
- the filter is moved right at each step.
-* <b>`padding`</b>: 'SAME' or 'VALID'
-* <b>`use_cudnn_on_gpu`</b>: An optional `bool`. Defaults to `True`.
-* <b>`data_format`</b>: An optional `string` from `"NHWC", "NCHW"`. Defaults
- to `"NHWC"`, the data is stored in the order of
- [batch, in_width, in_channels]. The `"NCHW"` format stores
- data as [batch, in_channels, in_width].
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `data_format` is invalid.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.embedding_lookup_sparse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.embedding_lookup_sparse.md
deleted file mode 100644
index 23e0fe4a54..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.embedding_lookup_sparse.md
+++ /dev/null
@@ -1,76 +0,0 @@
-### `tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, partition_strategy='mod', name=None, combiner=None, max_norm=None)` {#embedding_lookup_sparse}
-
-Computes embeddings for the given ids and weights.
-
-This op assumes that there is at least one id for each row in the dense tensor
-represented by sp_ids (i.e. there are no rows with empty features), and that
-all the indices of sp_ids are in canonical row-major order.
-
-It also assumes that all id values lie in the range [0, p0), where p0
-is the sum of the size of params along dimension 0.
-
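-A minimal usage sketch, mirroring the worked example in the Returns section
-below (a single 10x20 embedding matrix and unit weights):
-
-```python
-params = tf.get_variable("embeddings", [10, 20])
-sp_ids = tf.SparseTensor(indices=[[0, 0], [0, 1], [1, 0]],
-                         values=tf.constant([1, 3, 0], dtype=tf.int64),
-                         dense_shape=[2, 2])
-embedded = tf.nn.embedding_lookup_sparse(
-    params, sp_ids, sp_weights=None, combiner="mean")  # shape [2, 20]
-```
-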
-##### Args:
-
-
-* <b>`params`</b>: A single tensor representing the complete embedding tensor,
- or a list of P tensors all of same shape except for the first dimension,
- representing sharded embedding tensors. Alternatively, a
- `PartitionedVariable`, created by partitioning along dimension 0. Each
- element must be appropriately sized for the given `partition_strategy`.
-* <b>`sp_ids`</b>: N x M SparseTensor of int64 ids (typically from FeatureValueToId),
- where N is typically batch size and M is arbitrary.
-* <b>`sp_weights`</b>: either a SparseTensor of float / double weights, or None to
- indicate all weights should be taken to be 1. If specified, sp_weights
- must have exactly the same shape and indices as sp_ids.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
- is `"mod"`. See `tf.nn.embedding_lookup` for more details.
-* <b>`name`</b>: Optional name for the op.
-* <b>`combiner`</b>: A string specifying the reduction op. Currently "mean", "sqrtn"
- and "sum" are supported.
- "sum" computes the weighted sum of the embedding results for each row.
- "mean" is the weighted sum divided by the total weight.
- "sqrtn" is the weighted sum divided by the square root of the sum of the
- squares of the weights.
-* <b>`max_norm`</b>: If not None, each embedding is normalized to have l2 norm equal
- to max_norm before combining.
-
-##### Returns:
-
- A dense tensor representing the combined embeddings for the
- sparse ids. For each row in the dense tensor represented by sp_ids, the op
- looks up the embeddings for all ids in that row, multiplies them by the
- corresponding weight, and combines these embeddings as specified.
-
- In other words, if
-
- shape(combined params) = [p0, p1, ..., pm]
-
- and
-
- shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]
-
- then
-
- shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].
-
- For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are
-
- [0, 0]: id 1, weight 2.0
- [0, 1]: id 3, weight 0.5
- [1, 0]: id 0, weight 1.0
- [2, 3]: id 1, weight 3.0
-
- with `combiner`="mean", then the output will be a 3x20 matrix where
-
- output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
- output[1, :] = params[0, :] * 1.0
- output[2, :] = params[1, :] * 3.0
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If sp_ids is not a SparseTensor, or if sp_weights is neither
- None nor SparseTensor.
-* <b>`ValueError`</b>: If combiner is not one of {"mean", "sqrtn", "sum"}.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.erosion2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.erosion2d.md
deleted file mode 100644
index a6fc19fb6d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.erosion2d.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.nn.erosion2d(value, kernel, strides, rates, padding, name=None)` {#erosion2d}
-
-Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors.
-
-The `value` tensor has shape `[batch, in_height, in_width, depth]` and the
-`kernel` tensor has shape `[kernel_height, kernel_width, depth]`, i.e.,
-each input channel is processed independently of the others with its own
-structuring function. The `output` tensor has shape
-`[batch, out_height, out_width, depth]`. The spatial dimensions of the
-output tensor depend on the `padding` algorithm. We currently only support the
-default "NHWC" `data_format`.
-
-In detail, the grayscale morphological 2-D erosion is given by:
-
- output[b, y, x, c] =
- min_{dy, dx} value[b,
- strides[1] * y - rates[1] * dy,
- strides[2] * x - rates[2] * dx,
- c] -
- kernel[dy, dx, c]
-
-Duality: The erosion of `value` by the `kernel` is equal to the negation of
-the dilation of `-value` by the reflected `kernel`.
-
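-For example, with a flat all-zero kernel the erosion reduces to a
-sliding-window minimum (a sketch; shapes are illustrative):
-
-```python
-value = tf.random_normal([1, 8, 8, 3])
-kernel = tf.zeros([2, 2, 3])   # flat structuring function
-eroded = tf.nn.erosion2d(value, kernel, strides=[1, 1, 1, 1],
-                         rates=[1, 1, 1, 1], padding="VALID")
-```
-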
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`.
-* <b>`kernel`</b>: A `Tensor`. Must have the same type as `value`.
- 3-D with shape `[kernel_height, kernel_width, depth]`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 4`.
- 1-D of length 4. The stride of the sliding window for each dimension of
- the input tensor. Must be: `[1, stride_height, stride_width, 1]`.
-* <b>`rates`</b>: A list of `ints` that has length `>= 4`.
- 1-D of length 4. The input stride for atrous morphological dilation.
- Must be: `[1, rate_height, rate_width, 1]`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional). If not specified "erosion2d"
- is used.
-
-##### Returns:
-
- A `Tensor`. Has the same type as `value`.
- 4-D with shape `[batch, out_height, out_width, depth]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `value` depth does not match the `kernel` shape,
-  or if padding is other than `'VALID'` or `'SAME'`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.moments.md
deleted file mode 100644
index dd56055311..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.moments.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.nn.moments(x, axes, shift=None, name=None, keep_dims=False)` {#moments}
-
-Calculate the mean and variance of `x`.
-
-The mean and variance are calculated by aggregating the contents of `x`
-across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean
-and variance of a vector.
-
-Note: for numerical stability, when `shift=None`, the true mean
-is computed and used as the shift.
-
-When using these moments for batch normalization (see
-`tf.nn.batch_normalization`):
-
- * for so-called "global normalization", used with convolutional filters with
-   shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]` (see the
-   sketch after this list).
- * for simple batch normalization pass `axes=[0]` (batch only).
-
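-A minimal sketch of the first ("global normalization") case:
-
-```python
-x = tf.random_normal([32, 28, 28, 64])   # [batch, height, width, depth]
-mean, variance = tf.nn.moments(x, axes=[0, 1, 2])
-y = tf.nn.batch_normalization(x, mean, variance, offset=None, scale=None,
-                              variance_epsilon=1e-3)
-```
-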
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`.
-* <b>`axes`</b>: Array of ints. Axes along which to compute mean and
- variance.
-* <b>`shift`</b>: A `Tensor` containing the value by which to shift the data for
- numerical stability, or `None` in which case the true mean of the data is
- used as shift. A shift close to the true mean provides the most
- numerically stable results.
-* <b>`name`</b>: Name used to scope the operations that compute the moments.
-* <b>`keep_dims`</b>: produce moments with the same dimensionality as the input.
-
-##### Returns:
-
- Two `Tensor` objects: `mean` and `variance`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.normalize_moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.normalize_moments.md
deleted file mode 100644
index d7a6b9cab4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.normalize_moments.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift, name=None)` {#normalize_moments}
-
-Calculate the mean and variance based on the sufficient statistics.
-
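-In Tensor terms the normalization amounts to the following (a sketch inferred
-from the argument descriptions below, not necessarily the exact
-implementation):
-
-```python
-shifted_mean = mean_ss / counts
-mean = shifted_mean + shift   # just shifted_mean when shift is None
-variance = variance_ss / counts - tf.square(shifted_mean)
-```
-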
-##### Args:
-
-
-* <b>`counts`</b>: A `Tensor` containing the total count of the data (one value).
-* <b>`mean_ss`</b>: A `Tensor` containing the mean sufficient statistics: the (possibly
- shifted) sum of the elements to average over.
-* <b>`variance_ss`</b>: A `Tensor` containing the variance sufficient statistics: the
- (possibly shifted) squared sum of the data to compute the variance over.
-* <b>`shift`</b>: A `Tensor` containing the value by which the data is shifted for
- numerical stability, or `None` if no shift was performed.
-* <b>`name`</b>: Name used to scope the operations that compute the moments.
-
-##### Returns:
-
- Two `Tensor` objects: `mean` and `variance`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sampled_softmax_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sampled_softmax_loss.md
deleted file mode 100644
index 6cddbd7d17..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.nn.sampled_softmax_loss.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.nn.sampled_softmax_loss(weights, biases, labels, inputs, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, partition_strategy='mod', name='sampled_softmax_loss')` {#sampled_softmax_loss}
-
-Computes and returns the sampled softmax training loss.
-
-This is a faster way to train a softmax classifier over a huge number of
-classes.
-
-This operation is for training only. It is generally an underestimate of
-the full softmax loss.
-
-At inference time, you can compute full softmax probabilities with the
-expression `tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)`.
-
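-A sketch of that train/inference split (`weights`, `biases`, `inputs`,
-`labels` and `num_classes` are assumed to come from your model, with the
-shapes documented below):
-
-```python
-train_loss = tf.nn.sampled_softmax_loss(
-    weights=weights, biases=biases, labels=labels, inputs=inputs,
-    num_sampled=64, num_classes=num_classes)
-
-# At inference time, score against the full vocabulary instead:
-eval_probs = tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)
-```
-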
-See our [Candidate Sampling Algorithms
-Reference](../../extras/candidate_sampling.pdf).
-
-Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007)
-([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
-
-##### Args:
-
-
-* <b>`weights`</b>: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`
- objects whose concatenation along dimension 0 has shape
- [num_classes, dim]. The (possibly-sharded) class embeddings.
-* <b>`biases`</b>: A `Tensor` of shape `[num_classes]`. The class biases.
-* <b>`labels`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes. Note that this format differs from
- the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
-* <b>`inputs`</b>: A `Tensor` of shape `[batch_size, dim]`. The forward
- activations of the input network.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`num_classes`</b>: An `int`. The number of possible classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`sampled_values`</b>: a tuple of (`sampled_candidates`, `true_expected_count`,
- `sampled_expected_count`) returned by a `*_candidate_sampler` function.
- (if None, we default to `log_uniform_candidate_sampler`)
-* <b>`remove_accidental_hits`</b>: A `bool`. Whether to remove "accidental hits"
- where a sampled class equals one of the target classes. Default is
- True.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported.
- Default is `"mod"`. See `tf.nn.embedding_lookup` for more details.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `batch_size` 1-D tensor of per-example sampled softmax losses.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.parse_single_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.parse_single_example.md
deleted file mode 100644
index e5ac731bce..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.parse_single_example.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.parse_single_example(serialized, features, name=None, example_names=None)` {#parse_single_example}
-
-Parses a single `Example` proto.
-
-Similar to `parse_example`, except:
-
-For dense tensors, the returned `Tensor` is identical to the output of
-`parse_example`, except that there is no batch dimension: the output shape is
-the same as the shape given in `dense_shape`.
-
-For `SparseTensor`s, the first (batch) column of the indices matrix is removed
-(the indices matrix is a column vector), the values vector is unchanged, and
-the first (`batch_size`) entry of the shape vector is removed (it is now a
-single element vector).
-
-One might see performance advantages by batching `Example` protos with
-`parse_example` instead of using this function directly.
-
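-A minimal usage sketch (the feature keys and `serialized_example` are
-illustrative):
-
-```python
-features = tf.parse_single_example(
-    serialized_example,   # a scalar string Tensor
-    features={
-        "label": tf.FixedLenFeature([], tf.int64),
-        "tokens": tf.VarLenFeature(tf.int64),
-    })
-label = features["label"]     # a scalar Tensor
-tokens = features["tokens"]   # a SparseTensor without a batch dimension
-```
-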
-##### Args:
-
-
-* <b>`serialized`</b>: A scalar string Tensor, a single serialized Example.
- See `_parse_single_example_raw` documentation for more details.
-* <b>`features`</b>: A `dict` mapping feature keys to `FixedLenFeature` or
- `VarLenFeature` values.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`example_names`</b>: (Optional) A scalar string Tensor, the associated name.
- See `_parse_single_example_raw` documentation for more details.
-
-##### Returns:
-
- A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any feature is invalid.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.qr.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.qr.md
deleted file mode 100644
index 64467b22a7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.qr.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.qr(input, full_matrices=None, name=None)` {#qr}
-
-Computes the QR decompositions of one or more matrices.
-
-Computes the QR decomposition of each inner matrix in `input` such that
-`input[..., :, :] = q[..., :, :] * r[..., :, :]`.
-
-```prettyprint
-# a is a tensor.
-# q is a tensor of orthonormal matrices.
-# r is a tensor of upper triangular matrices.
-q, r = qr(a)
-q_full, r_full = qr(a, full_matrices=True)
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`.
- A tensor of shape `[..., M, N]` whose inner-most 2 dimensions
- form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`.
-* <b>`full_matrices`</b>: An optional `bool`. Defaults to `False`.
- If true, compute full-sized `q` and `r`. If false
- (the default), compute only the leading `P` columns of `q`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (q, r).
-
-* <b>`q`</b>: A `Tensor`. Has the same type as `input`. Orthonormal basis for range of `a`. If `full_matrices` is `False` then
- shape is `[..., M, P]`; if `full_matrices` is `True` then shape is
- `[..., M, M]`.
-* <b>`r`</b>: A `Tensor`. Has the same type as `input`. Triangular factor. If `full_matrices` is `False` then shape is
- `[..., P, N]`. If `full_matrices` is `True` then shape is `[..., M, N]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.random_uniform_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.random_uniform_initializer.md
deleted file mode 100644
index 65cf607305..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.random_uniform_initializer.md
+++ /dev/null
@@ -1,25 +0,0 @@
-Initializer that generates tensors with a uniform distribution.
-
-Args:
- minval: A python scalar or a scalar tensor. Lower bound of the range
- of random values to generate.
- maxval: A python scalar or a scalar tensor. Upper bound of the range
- of random values to generate. Defaults to 1 for float types.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
- dtype: The data type.
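-
-A minimal usage sketch:
-
-```python
-init = tf.random_uniform_initializer(minval=-0.05, maxval=0.05, seed=42)
-w = tf.get_variable("w", shape=[784, 10], initializer=init)
-```
-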
-- - -
-
-#### `tf.random_uniform_initializer.__call__(shape, dtype=None, partition_info=None)` {#random_uniform_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.random_uniform_initializer.__init__(minval=0, maxval=None, seed=None, dtype=tf.float32)` {#random_uniform_initializer.__init__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.read_file.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.read_file.md
deleted file mode 100644
index 3c0ad3652a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.read_file.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.read_file(filename, name=None)` {#read_file}
-
-Reads and outputs the entire contents of the input filename.
-
-##### Args:
-
-
-* <b>`filename`</b>: A `Tensor` of type `string`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_any.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_any.md
deleted file mode 100644
index ef4468dae2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_any.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.reduce_any(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_any}
-
-Computes the "logical or" of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-For example:
-
-```python
-# 'x' is [[True,  True],
-#         [False, False]]
-tf.reduce_any(x) ==> True
-tf.reduce_any(x, 0) ==> [True, True]
-tf.reduce_any(x, 1) ==> [True, False]
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The boolean tensor to reduce.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.any
-@end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_join.md
deleted file mode 100644
index 2a6d631b6b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_join.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.reduce_join(inputs, axis=None, keep_dims=False, separator='', name=None, reduction_indices=None)` {#reduce_join}
-
-Joins a string Tensor across the given dimensions.
-
-Computes the string join across dimensions in the given string Tensor of shape
-`[d_0, d_1, ..., d_n-1]`. Returns a new Tensor created by joining the input
-strings with the given separator (default: empty string). Negative indices are
-counted backwards from the end, with `-1` being equivalent to `n - 1`.
-
-For example:
-
-```
-# tensor `a` is [["a", "b"], ["c", "d"]]
-tf.reduce_join(a, 0) ==> ["ac", "bd"]
-tf.reduce_join(a, 1) ==> ["ab", "cd"]
-tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
-tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
-tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
-tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
-tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
-tf.reduce_join(a, [0, 1]) ==> ["acbd"]
-tf.reduce_join(a, [1, 0]) ==> ["abcd"]
-tf.reduce_join(a, []) ==> ["abcd"]
-```
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of type `string`.
- The input to be joined. All reduced indices must have non-zero size.
-* <b>`axis`</b>: A `Tensor` of type `int32`.
- The dimensions to reduce over. Dimensions are reduced in the
- order specified. Omitting `axis` is equivalent to passing
- `[n-1, n-2, ..., 0]`. Negative indices from `-n` to `-1` are supported.
-* <b>`keep_dims`</b>: An optional `bool`. Defaults to `False`.
- If `True`, retain reduced dimensions with length `1`.
-* <b>`separator`</b>: An optional `string`. Defaults to `""`.
- The separator to use when joining.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
- Has shape equal to that of the input with reduced dimensions removed or
- set to `1` depending on `keep_dims`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_logsumexp.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_logsumexp.md
deleted file mode 100644
index 485d8fb9be..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.reduce_logsumexp.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.reduce_logsumexp(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_logsumexp}
-
-Computes log(sum(exp(elements across dimensions of a tensor))).
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-This function is more numerically stable than log(sum(exp(input))). It avoids
-overflows caused by taking the exp of large inputs and underflows caused by
-taking the log of small inputs.
-
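-This is typically achieved with the max-shift identity (a sketch, not
-necessarily the exact implementation):
-
-```python
-# log(sum(exp(x))) == m + log(sum(exp(x - m))), where m = max(x);
-# x - m is never a large positive number, so exp cannot overflow.
-m = tf.reduce_max(x)
-stable = m + tf.log(tf.reduce_sum(tf.exp(x - m)))
-```
-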
-For example:
-
-```python
-# 'x' is [[0, 0, 0],
-#         [0, 0, 0]]
-tf.reduce_logsumexp(x) ==> log(6)
-tf.reduce_logsumexp(x, 0) ==> [log(2), log(2), log(2)]
-tf.reduce_logsumexp(x, 1) ==> [log(3), log(3)]
-tf.reduce_logsumexp(x, 1, keep_dims=True) ==> [[log(3)], [log(3)]]
-tf.reduce_logsumexp(x, [0, 1]) ==> log(6)
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.sparse_minimum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.sparse_minimum.md
deleted file mode 100644
index 1455e3e533..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.sparse_minimum.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.sparse_minimum(sp_a, sp_b, name=None)` {#sparse_minimum}
-
-Returns the element-wise min of two SparseTensors.
-
-Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
-Example:
-
-```python
-sp_zero = tf.SparseTensor([[0]], [0], [7])
-sp_one = tf.SparseTensor([[1]], [1], [7])
-res = tf.sparse_minimum(sp_zero, sp_one).eval()
-# "res" should be equal to SparseTensor([[0], [1]], [0, 0], [7]).
-```
-
-##### Args:
-
-
-* <b>`sp_a`</b>: a `SparseTensor` operand whose dtype is real, and indices
- lexicographically ordered.
-* <b>`sp_b`</b>: the other `SparseTensor` operand with the same requirements (and the
- same shape).
-* <b>`name`</b>: optional name of the operation.
-
-##### Returns:
-
-
-* <b>`output`</b>: the output SparseTensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.string_to_hash_bucket_strong.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.string_to_hash_bucket_strong.md
deleted file mode 100644
index 764dfe8431..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.string_to_hash_bucket_strong.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.string_to_hash_bucket_strong(input, num_buckets, key, name=None)` {#string_to_hash_bucket_strong}
-
-Converts each string in the input Tensor to its hash modulo the number of buckets.
-
-The hash function is deterministic on the content of the string within the
-process. The hash function is a keyed hash function, where attribute `key`
-defines the key of the hash function. `key` is an array of 2 elements.
-
-A strong hash is important when inputs may be malicious, e.g. URLs with
-additional components. Adversaries could try to make their inputs hash to the
-same bucket for a denial-of-service attack or to skew the results. A strong
-hash prevents this by making it difficult, if not infeasible, to compute inputs
-that hash to the same bucket. This comes at a cost of roughly 4x higher compute
-time than `tf.string_to_hash_bucket_fast`.
-
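-A minimal usage sketch (the key values are illustrative; a real deployment
-should keep them secret):
-
-```python
-strings = tf.constant(["hello", "world"])
-buckets = tf.string_to_hash_bucket_strong(
-    strings, num_buckets=1024, key=[0xDECAFCAFFE, 0x8BADF00D])
-```
-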
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. The strings to assign a hash bucket.
-* <b>`num_buckets`</b>: An `int` that is `>= 1`. The number of buckets.
-* <b>`key`</b>: A list of `ints`.
- The key for the keyed hash function passed as a list of two uint64
- elements.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
- A Tensor of the same shape as the input `string_tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.Benchmark.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.Benchmark.md
deleted file mode 100644
index d4a2f78b5d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.Benchmark.md
+++ /dev/null
@@ -1,61 +0,0 @@
-Abstract class that provides helpers for TensorFlow benchmarks.
-- - -
-
-#### `tf.test.Benchmark.is_abstract(cls)` {#Benchmark.is_abstract}
-
-
-
-
-- - -
-
-#### `tf.test.Benchmark.report_benchmark(iters=None, cpu_time=None, wall_time=None, throughput=None, extras=None, name=None)` {#Benchmark.report_benchmark}
-
-Report a benchmark.
-
-##### Args:
-
-
-* <b>`iters`</b>: (optional) How many iterations were run
-* <b>`cpu_time`</b>: (optional) Total cpu time in seconds
-* <b>`wall_time`</b>: (optional) Total wall time in seconds
-* <b>`throughput`</b>: (optional) Throughput (in MB/s)
-* <b>`extras`</b>: (optional) Dict mapping string keys to additional benchmark info.
- Values may be either floats or values that are convertible to strings.
-* <b>`name`</b>: (optional) Override the BenchmarkEntry name with `name`.
- Otherwise it is inferred from the top-level method name.
-
-
-- - -
-
-#### `tf.test.Benchmark.run_op_benchmark(sess, op_or_tensor, feed_dict=None, burn_iters=2, min_iters=10, store_trace=False, store_memory_usage=True, name=None, extras=None, mbs=0)` {#Benchmark.run_op_benchmark}
-
-Run an op or tensor in the given session. Report the results.
-
-##### Args:
-
-
-* <b>`sess`</b>: `Session` object to use for timing.
-* <b>`op_or_tensor`</b>: `Operation` or `Tensor` to benchmark.
-* <b>`feed_dict`</b>: A `dict` of values to feed for each op iteration (see the
- `feed_dict` parameter of `Session.run`).
-* <b>`burn_iters`</b>: Number of burn-in iterations to run.
-* <b>`min_iters`</b>: Minimum number of iterations to use for timing.
-* <b>`store_trace`</b>: Boolean, whether to run an extra untimed iteration and
- store the trace of iteration in the benchmark report.
- The trace will be stored as a string in Google Chrome trace format
- in the extras field "full_trace_chrome_format".
-* <b>`store_memory_usage`</b>: Boolean, whether to run an extra untimed iteration,
- calculate memory usage, and store that in extras fields.
-* <b>`name`</b>: (optional) Override the BenchmarkEntry name with `name`.
- Otherwise it is inferred from the top-level method name.
-* <b>`extras`</b>: (optional) Dict mapping string keys to additional benchmark info.
- Values may be either floats or values that are convertible to strings.
-* <b>`mbs`</b>: (optional) The number of megabytes moved by this op, used to
- calculate the ops throughput.
-
-##### Returns:
-
- A `dict` containing the key-value pairs that were passed to
- `report_benchmark`.
-
-
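-For example, a benchmark built on `run_op_benchmark` (a sketch; the class and
-benchmark names are illustrative):
-
-```python
-class MatmulBenchmark(tf.test.Benchmark):
-
-  def benchmarkMatmul(self):
-    with tf.Session() as sess:
-      a = tf.random_normal([1024, 1024])
-      product = tf.matmul(a, a)
-      self.run_op_benchmark(sess, product, min_iters=25, name="matmul_1024")
-```
-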
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.get_temp_dir.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.get_temp_dir.md
deleted file mode 100644
index e36d6163a7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.test.get_temp_dir.md
+++ /dev/null
@@ -1,10 +0,0 @@
-### `tf.test.get_temp_dir()` {#get_temp_dir}
-
-Returns a temporary directory for use during tests.
-
-There is no need to delete the directory after the test.
-
-##### Returns:
-
- The temporary directory.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md
deleted file mode 100644
index 08d37ac815..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.AdadeltaOptimizer.md
+++ /dev/null
@@ -1,176 +0,0 @@
-Optimizer that implements the Adadelta algorithm.
-
-See [M. D. Zeiler](http://arxiv.org/abs/1212.5701)
-([pdf](http://arxiv.org/pdf/1212.5701v1.pdf))
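-
-A minimal usage sketch (`loss` is a scalar Tensor from your model):
-
-```python
-optimizer = tf.train.AdadeltaOptimizer(learning_rate=0.001, rho=0.95)
-train_op = optimizer.minimize(loss)
-```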
-- - -
-
-#### `tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')` {#AdadeltaOptimizer.__init__}
-
-Construct a new Adadelta optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`rho`</b>: A `Tensor` or a floating point value. The decay rate.
-* <b>`epsilon`</b>: A `Tensor` or a floating point value. A constant epsilon used
-  to better condition the gradient update.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "Adadelta".
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdadeltaOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Default to the
- name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdadeltaOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKey.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything else than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.get_name()` {#AdadeltaOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.get_slot(var, name)` {#AdadeltaOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.get_slot_names()` {#AdadeltaOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdadeltaOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.ClusterSpec.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.ClusterSpec.md
deleted file mode 100644
index bd4c26b2d3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.ClusterSpec.md
+++ /dev/null
@@ -1,210 +0,0 @@
-Represents a cluster as a set of "tasks", organized into "jobs".
-
-A `tf.train.ClusterSpec` represents the set of processes that
-participate in a distributed TensorFlow computation. Every
-[`tf.train.Server`](#Server) is constructed in a particular cluster.
-
-To create a cluster with two jobs and five tasks, you specify the
-mapping from job names to lists of network addresses (typically
-hostname-port pairs).
-
-```python
-cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222",
- "worker1.example.com:2222",
- "worker2.example.com:2222"],
- "ps": ["ps0.example.com:2222",
- "ps1.example.com:2222"]})
-```
-
-Each job may also be specified as a sparse mapping from task indices
-to network addresses. This enables a server to be configured without
-needing to know the identity of (for example) all other worker
-tasks:
-
-```python
-cluster = tf.train.ClusterSpec({"worker": {1: "worker1.example.com:2222"},
- "ps": ["ps0.example.com:2222",
- "ps1.example.com:2222"]})
-```
-
-- - -
-
-#### `tf.train.ClusterSpec.as_cluster_def()` {#ClusterSpec.as_cluster_def}
-
-Returns a `tf.train.ClusterDef` protocol buffer based on this cluster.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.as_dict()` {#ClusterSpec.as_dict}
-
-Returns a dictionary from job names to their tasks.
-
-For each job, if the task index space is dense, the corresponding
-value will be a list of network addresses; otherwise it will be a
-dictionary mapping (sparse) task indices to the corresponding
-addresses.
-
-##### Returns:
-
- A dictionary mapping job names to lists or dictionaries
- describing the tasks in those jobs.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.ClusterSpec.__bool__()` {#ClusterSpec.__bool__}
-
-
-
-
-- - -
-
-#### `tf.train.ClusterSpec.__eq__(other)` {#ClusterSpec.__eq__}
-
-
-
-
-- - -
-
-#### `tf.train.ClusterSpec.__init__(cluster)` {#ClusterSpec.__init__}
-
-Creates a `ClusterSpec`.
-
-##### Args:
-
-
-* <b>`cluster`</b>: A dictionary mapping one or more job names to (i) a
- list of network addresses, or (ii) a dictionary mapping integer
- task indices to network addresses; or a `tf.train.ClusterDef`
- protocol buffer.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cluster` is not a dictionary mapping strings to lists
- of strings, and not a `tf.train.ClusterDef` protobuf.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.__ne__(other)` {#ClusterSpec.__ne__}
-
-
-
-
-- - -
-
-#### `tf.train.ClusterSpec.__nonzero__()` {#ClusterSpec.__nonzero__}
-
-
-
-
-- - -
-
-#### `tf.train.ClusterSpec.job_tasks(job_name)` {#ClusterSpec.job_tasks}
-
-Returns a mapping from task ID to address in the given job.
-
-NOTE: For backwards compatibility, this method returns a list. If
-the given job was defined with a sparse set of task indices, the
-length of this list may not reflect the number of tasks defined in
-this job. Use the [`num_tasks()`](#ClusterSpec.num_tasks) method
-to find the number of tasks defined in a particular job.
-
-##### Args:
-
-
-* <b>`job_name`</b>: The string name of a job in this cluster.
-
-##### Returns:
-
- A list of task addresses, where the index in the list
- corresponds to the task index of each task. The list may contain
- `None` if the job was defined with a sparse set of task indices.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.jobs` {#ClusterSpec.jobs}
-
-Returns a list of job names in this cluster.
-
-##### Returns:
-
- A list of strings, corresponding to the names of jobs in this cluster.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.num_tasks(job_name)` {#ClusterSpec.num_tasks}
-
-Returns the number of tasks defined in the given job.
-
-##### Args:
-
-
-* <b>`job_name`</b>: The string name of a job in this cluster.
-
-##### Returns:
-
- The number of tasks defined in the given job.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.task_address(job_name, task_index)` {#ClusterSpec.task_address}
-
-Returns the address of the given task in the given job.
-
-##### Args:
-
-
-* <b>`job_name`</b>: The string name of a job in this cluster.
-* <b>`task_index`</b>: A non-negative integer.
-
-##### Returns:
-
- The address of the given task in the given job.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster,
- or no task with index `task_index` is defined in that job.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.task_indices(job_name)` {#ClusterSpec.task_indices}
-
-Returns a list of valid task indices in the given job.
-
-##### Args:
-
-
-* <b>`job_name`</b>: The string name of a job in this cluster.
-
-##### Returns:
-
- A list of valid task indices in the given job.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.GradientDescentOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.GradientDescentOptimizer.md
deleted file mode 100644
index 99a5f1f0b1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.GradientDescentOptimizer.md
+++ /dev/null
@@ -1,18 +0,0 @@
-Optimizer that implements the gradient descent algorithm.
-
-- - -
-
-#### `tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent')` {#GradientDescentOptimizer.__init__}
-
-Construct a new gradient descent optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning
- rate to use.
-* <b>`use_locking`</b>: If True use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "GradientDescent".
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LoggingTensorHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LoggingTensorHook.md
deleted file mode 100644
index e76b7838ed..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LoggingTensorHook.md
+++ /dev/null
@@ -1,85 +0,0 @@
-Prints the given tensors once every N local steps or once every N seconds.
-
-The tensors will be printed to the log, with `INFO` severity.
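-
-A minimal usage sketch (`loss` and `train_op` come from your model):
-
-```python
-hook = tf.train.LoggingTensorHook({"loss": loss}, every_n_iter=100)
-with tf.train.MonitoredTrainingSession(hooks=[hook]) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```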
-- - -
-
-#### `tf.train.LoggingTensorHook.__init__(tensors, every_n_iter=None, every_n_secs=None, formatter=None)` {#LoggingTensorHook.__init__}
-
-Initializes a `LoggingTensorHook` monitor.
-
-##### Args:
-
-
-* <b>`tensors`</b>: `dict` that maps string-valued tags to tensors/tensor names,
- or `iterable` of tensors/tensor names.
-* <b>`every_n_iter`</b>: `int`, print the values of `tensors` once every N local
- steps taken on the current worker.
-* <b>`every_n_secs`</b>: `int` or `float`, print the values of `tensors` once every N
- seconds. Exactly one of `every_n_iter` and `every_n_secs` should be
- provided.
-* <b>`formatter`</b>: function, takes dict of `tag`->`Tensor` and returns a string.
- If `None` uses default printing all tensors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `every_n_iter` is non-positive.
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.after_create_session(session, coord)` {#LoggingTensorHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.after_run(run_context, run_values)` {#LoggingTensorHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.before_run(run_context)` {#LoggingTensorHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.begin()` {#LoggingTensorHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.end(session)` {#LoggingTensorHook.end}
-
-Called at the end of session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LooperThread.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LooperThread.md
deleted file mode 100644
index d4fc63d870..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.LooperThread.md
+++ /dev/null
@@ -1,222 +0,0 @@
-A thread that runs code repeatedly, optionally on a timer.
-
-This thread class is intended to be used with a `Coordinator`. It repeatedly
-runs code specified either as `target` and `args` or by the `run_loop()`
-method.
-
-Before each run the thread checks if the coordinator has requested stop. In
-that case the looper thread terminates immediately.
-
-If the code being run raises an exception, that exception is reported to the
-coordinator and the thread terminates. The coordinator will then request all
-the other threads it coordinates to stop.
-
-You typically pass looper threads to the supervisor `Join()` method.
-- - -
-
-#### `tf.train.LooperThread.__init__(coord, timer_interval_secs, target=None, args=None, kwargs=None)` {#LooperThread.__init__}
-
-Create a LooperThread.
-
-##### Args:
-
-
-* <b>`coord`</b>: A Coordinator.
-* <b>`timer_interval_secs`</b>: Time boundaries at which to call Run(), or None
- if it should be called back to back.
-* <b>`target`</b>: Optional callable object that will be executed in the thread.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
-
-- - -
-
-#### `tf.train.LooperThread.__repr__()` {#LooperThread.__repr__}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.daemon` {#LooperThread.daemon}
-
-A boolean value indicating whether this thread is a daemon thread (True) or not (False).
-
-This must be set before start() is called, otherwise RuntimeError is
-raised. Its initial value is inherited from the creating thread; the
-main thread is not a daemon thread and therefore all threads created in
-the main thread default to daemon = False.
-
-The entire Python program exits when no alive non-daemon threads are
-left.
-
-
-- - -
-
-#### `tf.train.LooperThread.getName()` {#LooperThread.getName}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.ident` {#LooperThread.ident}
-
-Thread identifier of this thread or None if it has not been started.
-
-This is a nonzero integer. See the thread.get_ident() function. Thread
-identifiers may be recycled when a thread exits and another thread is
-created. The identifier is available even after the thread has exited.
-
-
-- - -
-
-#### `tf.train.LooperThread.isAlive()` {#LooperThread.isAlive}
-
-Return whether the thread is alive.
-
-This method returns True just before the run() method starts until just
-after the run() method terminates. The module function enumerate()
-returns a list of all alive threads.
-
-
-- - -
-
-#### `tf.train.LooperThread.isDaemon()` {#LooperThread.isDaemon}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.is_alive()` {#LooperThread.is_alive}
-
-Return whether the thread is alive.
-
-This method returns True just before the run() method starts until just
-after the run() method terminates. The module function enumerate()
-returns a list of all alive threads.
-
-
-- - -
-
-#### `tf.train.LooperThread.join(timeout=None)` {#LooperThread.join}
-
-Wait until the thread terminates.
-
-This blocks the calling thread until the thread whose join() method is
-called terminates -- either normally or through an unhandled exception
-or until the optional timeout occurs.
-
-When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
-(or fractions thereof). As join() always returns None, you must call
-isAlive() after join() to decide whether a timeout happened -- if the
-thread is still alive, the join() call timed out.
-
-When the timeout argument is not present or None, the operation will
-block until the thread terminates.
-
-A thread can be join()ed many times.
-
-join() raises a RuntimeError if an attempt is made to join the current
-thread as that would cause a deadlock. It is also an error to join() a
-thread before it has been started and attempts to do so raises the same
-exception.
-
-
-- - -
-
-#### `tf.train.LooperThread.loop(coord, timer_interval_secs, target, args=None, kwargs=None)` {#LooperThread.loop}
-
-Start a LooperThread that calls a function periodically.
-
-If `timer_interval_secs` is None the thread calls `target(args)`
-repeatedly. Otherwise `target(args)` is called every `timer_interval_secs`
-seconds. The thread terminates when a stop of the coordinator is
-requested.
-
-##### Args:
-
-
-* <b>`coord`</b>: A Coordinator.
-* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
-* <b>`target`</b>: A callable object.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Returns:
-
- The started thread.
-
-
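-For example (a sketch; `my_fn` is a hypothetical callable):
-
-```python
-coord = tf.train.Coordinator()
-thread = tf.train.LooperThread.loop(coord, timer_interval_secs=5,
-                                    target=my_fn)
-# ... later, ask the looper to stop and wait for it:
-coord.request_stop()
-coord.join([thread])
-```
-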
-- - -
-
-#### `tf.train.LooperThread.name` {#LooperThread.name}
-
-A string used for identification purposes only.
-
-It has no semantics. Multiple threads may be given the same name. The
-initial name is set by the constructor.
-
-
-- - -
-
-#### `tf.train.LooperThread.run()` {#LooperThread.run}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.run_loop()` {#LooperThread.run_loop}
-
-Called at `timer_interval_secs` boundaries.
-
-
-- - -
-
-#### `tf.train.LooperThread.setDaemon(daemonic)` {#LooperThread.setDaemon}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.setName(name)` {#LooperThread.setName}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.start()` {#LooperThread.start}
-
-Start the thread's activity.
-
-It must be called at most once per thread object. It arranges for the
-object's run() method to be invoked in a separate thread of control.
-
-This method will raise a RuntimeError if called more than once on the
-same thread object.
-
-
-- - -
-
-#### `tf.train.LooperThread.start_loop()` {#LooperThread.start_loop}
-
-Called when the thread starts.
-
-
-- - -
-
-#### `tf.train.LooperThread.stop_loop()` {#LooperThread.stop_loop}
-
-Called when the thread stops.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md
deleted file mode 100644
index 0e96f64ff4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredSession.md
+++ /dev/null
@@ -1,128 +0,0 @@
-Session-like object that handles initialization, recovery and hooks.
-
-Example usage:
-
-```python
-saver_hook = CheckpointSaverHook(...)
-summary_hook = SummaryHook(...)
-with MonitoredSession(session_creator=ChiefSessionCreator(...),
- hooks=[saver_hook, summary_hook]) as sess:
- while not sess.should_stop():
- sess.run(train_op)
-```
-
-Initialization: At creation time the monitored session does the following
-things in the given order:
-
-* calls `hook.begin()` for each given hook
-* finalizes the graph via `scaffold.finalize()`
-* creates the session
-* initializes the model via initialization ops provided by `Scaffold`
-* restores variables if a checkpoint exists
-* launches queue runners
-
-Run: When `run()` is called, the monitored session does the following things:
-
-* calls `hook.before_run()`
-* calls TensorFlow `session.run()` with merged fetches and feed_dict
-* calls `hook.after_run()`
-* returns the result of `session.run()` requested by the user
-* if an `AbortedError` occurs, recovers or reinitializes the session before
-  executing the `run()` call again
-
-
-Exit: On `close()`, the monitored session does the following things in order:
-
-* calls `hook.end()`
-* closes the queue runners and the session
-* suppresses the `OutOfRange` error (which indicates that all inputs have been
-  processed) if the monitored session is used as a context manager
-
-How to set `tf.Session` arguments:
-
-* In most cases you can set session arguments as follows:
-
-```python
-MonitoredSession(
- session_creator=ChiefSessionCreator(master=..., config=...))
-```
-
-* In a distributed setting, for a non-chief worker, you can use the following:
-
-```python
-MonitoredSession(
- session_creator=WorkerSessionCreator(master=..., config=...))
-```
-
-See `MonitoredTrainingSession` for an example usage based on chief or worker.
-
-Args:
-  session_creator: A factory object to create sessions. Typically a
-    `ChiefSessionCreator`, which is the default one.
-  hooks: An iterable of `SessionRunHook` objects.
-
-Returns:
- A MonitoredSession object.
-- - -
-
-#### `tf.train.MonitoredSession.__enter__()` {#MonitoredSession.__enter__}
-
-
-
-
-- - -
-
-#### `tf.train.MonitoredSession.__exit__(exception_type, exception_value, traceback)` {#MonitoredSession.__exit__}
-
-
-
-
-- - -
-
-#### `tf.train.MonitoredSession.__init__(session_creator=None, hooks=None, stop_grace_period_secs=120)` {#MonitoredSession.__init__}
-
-
-
-
-- - -
-
-#### `tf.train.MonitoredSession.close()` {#MonitoredSession.close}
-
-
-
-
-- - -
-
-#### `tf.train.MonitoredSession.graph` {#MonitoredSession.graph}
-
-The graph that was launched in this session.
-
-
-- - -
-
-#### `tf.train.MonitoredSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#MonitoredSession.run}
-
-Run ops in the monitored session.
-
-This method is completely compatible with the `tf.Session.run()` method.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as `tf.Session.run()`.
-* <b>`feed_dict`</b>: Same as `tf.Session.run()`.
-* <b>`options`</b>: Same as `tf.Session.run()`.
-* <b>`run_metadata`</b>: Same as `tf.Session.run()`.
-
-##### Returns:
-
- Same as `tf.Session.run()`.
-
-
-- - -
-
-#### `tf.train.MonitoredSession.should_stop()` {#MonitoredSession.should_stop}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md
deleted file mode 100644
index d84ddbe277..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.MonitoredTrainingSession.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.train.MonitoredTrainingSession(master='', is_chief=True, checkpoint_dir=None, scaffold=None, hooks=None, chief_only_hooks=None, save_checkpoint_secs=600, save_summaries_steps=100, save_summaries_secs=None, config=None, stop_grace_period_secs=120)` {#MonitoredTrainingSession}
-
-Creates a `MonitoredSession` for training.
-
-For a chief, this utility sets a proper session initializer/restorer. It also
-creates hooks related to checkpoint and summary saving. For workers, this
-utility sets a proper session creator which waits for the chief to
-initialize/restore.
-
-
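-A minimal usage sketch for a single (chief) worker (`train_op` comes from your
-model; the checkpoint path is illustrative):
-
-```python
-with tf.train.MonitoredTrainingSession(checkpoint_dir="/tmp/train",
-                                       save_checkpoint_secs=600) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```
-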
-##### Args:
-
-
-* <b>`master`</b>: `String` the TensorFlow master to use.
-* <b>`is_chief`</b>: If `True`, it will take care of initialization and recovery
-  of the underlying TensorFlow session. If `False`, it will wait on a chief to
-  initialize or recover the TensorFlow session.
-* <b>`checkpoint_dir`</b>: A string. Optional path to a directory from which to
-  restore variables.
-* <b>`scaffold`</b>: A `Scaffold` used for gathering or building supportive ops. If
- not specified, a default one is created. It's used to finalize the graph.
-* <b>`hooks`</b>: Optional list of `SessionRunHook` objects.
-* <b>`chief_only_hooks`</b>: list of `SessionRunHook` objects. These hooks are
-  activated if `is_chief==True`, and ignored otherwise.
-* <b>`save_checkpoint_secs`</b>: The frequency, in seconds, that a checkpoint is saved
- using a default checkpoint saver. If `save_checkpoint_secs` is set to
- `None`, then the default checkpoint saver isn't used.
-* <b>`save_summaries_steps`</b>: The frequency, in number of global steps, that the
- summaries are written to disk using a default summary saver. If both
- `save_summaries_steps` and `save_summaries_secs` are set to `None`, then
- the default summary saver isn't used.
-* <b>`save_summaries_secs`</b>: The frequency, in secs, that the summaries are written
- to disk using a default summary saver. If both `save_summaries_steps` and
- `save_summaries_secs` are set to `None`, then the default summary saver
- isn't used.
-* <b>`config`</b>: An instance of `tf.ConfigProto` used to configure the session.
-  It's the `config` argument of the `tf.Session` constructor.
-* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
- `close()` has been called.
-
-##### Returns:
-
- A `MonitoredSession` object.
-
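-For example, a minimal sketch of a single-machine training loop (assuming
-`loss` is defined elsewhere; the checkpoint directory is illustrative):
-
-```python
-global_step = tf.contrib.framework.get_or_create_global_step()
-train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
-    loss, global_step=global_step)
-hooks = [tf.train.StopAtStepHook(last_step=10000)]
-with tf.train.MonitoredTrainingSession(
-    is_chief=True, checkpoint_dir='/tmp/train_logs', hooks=hooks) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```
-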
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.NanTensorHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.NanTensorHook.md
deleted file mode 100644
index 6e509684c2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.NanTensorHook.md
+++ /dev/null
@@ -1,80 +0,0 @@
-NaN Loss monitor.
-
-Monitors loss and stops training if loss is NaN.
-Can either fail with exception or just stop training.
-- - -
-
-#### `tf.train.NanTensorHook.__init__(loss_tensor, fail_on_nan_loss=True)` {#NanTensorHook.__init__}
-
-Initializes NanLoss monitor.
-
-##### Args:
-
-
-* <b>`loss_tensor`</b>: `Tensor`, the loss tensor.
-* <b>`fail_on_nan_loss`</b>: `bool`, whether to raise exception when loss is NaN.
-
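-For example, a minimal sketch (assuming `loss` and `train_op` are defined
-elsewhere):
-
-```python
-nan_hook = tf.train.NanTensorHook(loss, fail_on_nan_loss=True)
-with tf.train.MonitoredTrainingSession(hooks=[nan_hook]) as sess:
-  while not sess.should_stop():
-    # Raises an error and stops training if `loss` becomes NaN.
-    sess.run(train_op)
-```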
-
-- - -
-
-#### `tf.train.NanTensorHook.after_create_session(session, coord)` {#NanTensorHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.NanTensorHook.after_run(run_context, run_values)` {#NanTensorHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.NanTensorHook.before_run(run_context)` {#NanTensorHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.NanTensorHook.begin()` {#NanTensorHook.begin}
-
-Called once before using the session.
-
-When called, the default graph is the one that will be launched in the
-session. The hook can modify the graph by adding new operations to it.
-After the `begin()` call the graph will be finalized and the other callbacks
-cannot modify the graph anymore. A second call of `begin()` on the same
-graph should not change the graph.
-
-
-- - -
-
-#### `tf.train.NanTensorHook.end(session)` {#NanTensorHook.end}
-
-Called at the end of session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.Server.create_local_server.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.Server.create_local_server.md
deleted file mode 100644
index 9834004957..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.Server.create_local_server.md
+++ /dev/null
@@ -1,21 +0,0 @@
-#### `tf.train.Server.create_local_server(config=None, start=True)` {#Server.create_local_server}
-
-Creates a new single-process cluster running on the local host.
-
-This method is a convenience wrapper for creating a
-`tf.train.Server` with a `tf.train.ServerDef` that specifies a
-single-process cluster containing a single task in a job called
-`"local"`.
-
-##### Args:
-
-
-* <b>`config`</b>: (Optional.) A `tf.ConfigProto` that specifies default
- configuration options for all sessions that run on this server.
-* <b>`start`</b>: (Optional.) Boolean, indicating whether to start the server after
- creating it. Defaults to `True`.
-
-##### Returns:
-
- A local `tf.train.Server`.
-
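-For example, the following sketch starts an in-process server and connects a
-session to it:
-
-```python
-server = tf.train.Server.create_local_server()
-with tf.Session(server.target) as sess:
-  print(sess.run(tf.constant(42)))  # ==> 42
-```
-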
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.generate_checkpoint_state_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.generate_checkpoint_state_proto.md
deleted file mode 100644
index 7405b289e3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.generate_checkpoint_state_proto.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.train.generate_checkpoint_state_proto(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None)` {#generate_checkpoint_state_proto}
-
-Generates a checkpoint state proto.
-
-##### Args:
-
-
-* <b>`save_dir`</b>: Directory where the model was saved.
-* <b>`model_checkpoint_path`</b>: The checkpoint file.
-* <b>`all_model_checkpoint_paths`</b>: List of strings. Paths to all not-yet-deleted
- checkpoints, sorted from oldest to newest. If this is a non-empty list,
- the last element must be equal to model_checkpoint_path. These paths
- are also saved in the CheckpointState proto.
-
-##### Returns:
-
- CheckpointState proto with model_checkpoint_path and
- all_model_checkpoint_paths updated to either absolute paths or
- relative paths to the current save_dir.
-
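-For example, a minimal sketch (the paths are illustrative):
-
-```python
-ckpt = tf.train.generate_checkpoint_state_proto(
-    save_dir='/tmp/train_logs',
-    model_checkpoint_path='/tmp/train_logs/model.ckpt-1000',
-    all_model_checkpoint_paths=['/tmp/train_logs/model.ckpt-500',
-                                '/tmp/train_logs/model.ckpt-1000'])
-print(ckpt.model_checkpoint_path)
-```
-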
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md
deleted file mode 100644
index ed68f0e240..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.maybe_batch.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.train.maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch}
-
-Conditionally creates batches of tensors based on `keep_input`.
-
-See docstring in `batch` for more details.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`keep_input`</b>: A `bool` Tensor. This tensor controls whether the input is
- added to the queue or not. If it is a scalar and evaluates to `True`, then
- `tensors` are all added to the queue. If it is a vector and `enqueue_many`
- is `True`, then each example is added to the queue only if the
- corresponding value in `keep_input` is `True`. This tensor essentially acts
- as a filtering mechanism.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same types as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
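-For example, a minimal sketch that drops examples with a negative label
-(`image` and `label` are assumed single-example tensors from a reader):
-
-```python
-keep = label >= 0  # A scalar `bool` tensor: one keep/drop decision.
-images, labels = tf.train.maybe_batch(
-    [image, label], keep_input=keep, batch_size=32, num_threads=4)
-```
-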
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.shuffle_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.shuffle_batch_join.md
deleted file mode 100644
index 13a925678a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.train.shuffle_batch_join.md
+++ /dev/null
@@ -1,77 +0,0 @@
-### `tf.train.shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#shuffle_batch_join}
-
-Create batches by randomly shuffling tensors.
-
-The `tensors_list` argument is a list of tuples of tensors, or a list of
-dictionaries of tensors. Each element in the list is treated similarly
-to the `tensors` argument of `tf.train.shuffle_batch()`.
-
-This version enqueues a different list of tensors in different threads.
-It adds the following to the current `Graph`:
-
-* A shuffling queue into which tensors from `tensors_list` are enqueued.
-* A `dequeue_many` operation to create batches from the queue.
-* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
- from `tensors_list`.
-
-`len(tensors_list)` threads will be started, with thread `i` enqueuing
-the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
-`tensors_list[i2][j]` in type and shape, except in the first dimension if
-`enqueue_many` is true.
-
-If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
-to represent a single example. An input tensor with shape `[x, y, z]`
-will be output as a tensor with shape `[batch_size, x, y, z]`.
-
-If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
-represent a batch of examples, where the first dimension is indexed
-by example, and all members of `tensors_list[i]` should have the
-same size in the first dimension. If an input tensor has shape `[*, x,
-y, z]`, the output will have shape `[batch_size, x, y, z]`.
-
-The `capacity` argument controls how long the prefetching is allowed to
-grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception; however, if this operation is used in your main thread,
-you are responsible for catching this yourself.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queue is closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape` method, will have a first `Dimension` value of `None`, and
-operations that depend on fixed batch_size would fail.
-
-##### Args:
-
-
-* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
-* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
- dequeue, used to ensure a level of mixing of elements.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensors_list` is a single
- example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors_list[i]`.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional.) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same number and types as
- `tensors_list[i]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors_list`.
-
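-For example, a minimal sketch (assuming `read_example(filename_queue)` is a
-user-defined function that returns an `(image, label)` tuple):
-
-```python
-# One reader copy per thread; each thread enqueues into the same queue.
-example_list = [read_example(filename_queue) for _ in range(4)]
-image_batch, label_batch = tf.train.shuffle_batch_join(
-    example_list, batch_size=32, capacity=2000, min_after_dequeue=1000)
-```
-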
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unsorted_segment_max.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unsorted_segment_max.md
deleted file mode 100644
index 655f08dbe9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unsorted_segment_max.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.unsorted_segment_max(data, segment_ids, num_segments, name=None)` {#unsorted_segment_max}
-
-Computes the max along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-This operator is similar to the [unsorted segment sum operator](../../api_docs/python/math_ops.md#UnsortedSegmentSum).
-Instead of computing the sum over segments, it computes the maximum
-such that:
-
-\\(output_i = \max_j data_j\\) where max is over `j` such
-that `segment_ids[j] == i`.
-
-If the maximum is empty for a given segment ID `i`, it outputs the smallest
-possible value for the specific numeric type,
-`output[i] = numeric_limits<T>::min()`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/UnsortedSegmentSum.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose size is equal to the size of `data`'s
- first dimension.
-* <b>`num_segments`</b>: A `Tensor` of type `int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `num_segments`.
-
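-For example, a small sketch:
-
-```python
-data = tf.constant([1, 3, 2, 5, 4])
-segment_ids = tf.constant([0, 0, 1, 1, 0])
-# Segment 0 covers values {1, 3, 4}; segment 1 covers values {2, 5}.
-tf.unsorted_segment_max(data, segment_ids, num_segments=2)  # ==> [4, 5]
-```
-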
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unstack.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unstack.md
deleted file mode 100644
index 872ef968c1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.unstack.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.unstack(value, num=None, axis=0, name='unstack')` {#unstack}
-
-Unpacks the given dimension of a rank-`R` tensor into rank-`(R-1)` tensors.
-
-Unpacks `num` tensors from `value` by chipping it along the `axis` dimension.
-If `num` is not specified (the default), it is inferred from `value`'s shape.
-If `value.shape[axis]` is not known, `ValueError` is raised.
-
-For example, given a tensor of shape `(A, B, C, D)`:
-
-If `axis == 0` then the i'th tensor in `output` is the slice
- `value[i, :, :, :]` and each tensor in `output` will have shape `(B, C, D)`.
- (Note that the dimension unpacked along is gone, unlike `split`).
-
-If `axis == 1` then the i'th tensor in `output` is the slice
- `value[:, i, :, :]` and each tensor in `output` will have shape `(A, C, D)`.
-Etc.
-
-This is the opposite of `stack`. The numpy equivalent is
-
- tf.unstack(x, n) = list(x)
-
-##### Args:
-
-
-* <b>`value`</b>: A rank `R > 0` `Tensor` to be unstacked.
-* <b>`num`</b>: An `int`. The length of the dimension `axis`. Automatically inferred
- if `None` (the default).
-* <b>`axis`</b>: An `int`. The axis to unstack along. Defaults to the first
- dimension. Supports negative indexes.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The list of `Tensor` objects unstacked from `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `num` is unspecified and cannot be inferred.
-* <b>`ValueError`</b>: If `axis` is out of the range [-R, R).
-
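-For example, a small sketch:
-
-```python
-x = tf.constant([[1, 2, 3],
-                 [4, 5, 6]])  # shape (2, 3)
-a, b = tf.unstack(x)          # two tensors of shape (3,)
-cols = tf.unstack(x, axis=1)  # a list of three tensors of shape (2,)
-```
-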
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.zeros.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.zeros.md
deleted file mode 100644
index 590294db65..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.zeros.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.zeros(shape, dtype=tf.float32, name=None)` {#zeros}
-
-Creates a tensor with all elements set to zero.
-
-This operation returns a tensor of type `dtype` with shape `shape` and
-all elements set to zero.
-
-For example:
-
-```python
-tf.zeros([3, 4], tf.int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
-```
-
-##### Args:
-
-
-* <b>`shape`</b>: Either a list of integers, or a 1-D `Tensor` of type `int32`.
-* <b>`dtype`</b>: The type of an element in the resulting `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with all elements set to zero.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.zeros_like.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.zeros_like.md
deleted file mode 100644
index 178c2ae467..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf.zeros_like.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.zeros_like(tensor, dtype=None, name=None, optimize=True)` {#zeros_like}
-
-Creates a tensor with all elements set to zero.
-
-Given a single tensor (`tensor`), this operation returns a tensor of the
-same type and shape as `tensor` with all elements set to zero. Optionally,
-you can use `dtype` to specify a new type for the returned tensor.
-
-For example:
-
-```python
-# 'tensor' is [[1, 2, 3], [4, 5, 6]]
-tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`.
-* <b>`dtype`</b>: A type for the returned `Tensor`. Must be `float32`, `float64`,
- `int8`, `int16`, `int32`, `int64`, `uint8`, `complex64`, or `complex128`.
-
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`optimize`</b>: if true, attempt to statically determine the shape of 'tensor'
- and encode it as a constant.
-
-##### Returns:
-
- A `Tensor` with all elements set to zero.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf_debug.LocalCLIDebugHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf_debug.LocalCLIDebugHook.md
deleted file mode 100644
index eeb4226633..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard1/tf_debug.LocalCLIDebugHook.md
+++ /dev/null
@@ -1,256 +0,0 @@
-Command-line-interface debugger hook.
-
-Can be used as a monitor/hook for `tf.train.MonitoredSession`s and
-`tf.contrib.learn`'s `Estimator`s and `Experiment`s.
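-
-For example, a minimal sketch (assuming `train_op` is defined elsewhere):
-
-```python
-from tensorflow.python import debug as tf_debug
-
-hooks = [tf_debug.LocalCLIDebugHook()]
-with tf.train.MonitoredSession(hooks=hooks) as sess:
-  # The debugger CLI is brought up around each `run()` call.
-  sess.run(train_op)
-```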
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.__enter__()` {#LocalCLIDebugHook.__enter__}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.__exit__(exec_type, exec_value, exec_tb)` {#LocalCLIDebugHook.__exit__}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.__init__(ui_type='curses')` {#LocalCLIDebugHook.__init__}
-
-Create a local debugger command-line interface (CLI) hook.
-
-##### Args:
-
-
-* <b>`ui_type`</b>: (str) user-interface type.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.add_tensor_filter(filter_name, tensor_filter)` {#LocalCLIDebugHook.add_tensor_filter}
-
-Add a tensor filter.
-
-See doc of `LocalCLIDebugWrapperSession.add_tensor_filter()` for details.
-Override default behavior to accommodate the possibility of this method being
-called prior to the initialization of the underlying
-`LocalCLIDebugWrapperSession` object.
-
-##### Args:
-
-
-* <b>`filter_name`</b>: See doc of `LocalCLIDebugWrapperSession.add_tensor_filter()`
- for details.
-* <b>`tensor_filter`</b>: See doc of
- `LocalCLIDebugWrapperSession.add_tensor_filter()` for details.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.after_create_session(session, coord)` {#LocalCLIDebugHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.after_run(run_context, run_values)` {#LocalCLIDebugHook.after_run}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.before_run(run_context)` {#LocalCLIDebugHook.before_run}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.begin()` {#LocalCLIDebugHook.begin}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.close()` {#LocalCLIDebugHook.close}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.end(session)` {#LocalCLIDebugHook.end}
-
-Called at the end of session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.graph` {#LocalCLIDebugHook.graph}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.invoke_node_stepper(node_stepper, restore_variable_values_on_exit=True)` {#LocalCLIDebugHook.invoke_node_stepper}
-
-Overrides method in base class to implement interactive node stepper.
-
-##### Args:
-
-
-* <b>`node_stepper`</b>: (`stepper.NodeStepper`) The underlying NodeStepper API
- object.
-* <b>`restore_variable_values_on_exit`</b>: (`bool`) Whether any variables whose
- values have been altered during this node-stepper invocation should be
- restored to their old values when this invocation ends.
-
-##### Returns:
-
- The same return values as the `Session.run()` call on the same fetches as
- the NodeStepper.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.on_run_end(request)` {#LocalCLIDebugHook.on_run_end}
-
-Overrides on-run-end callback.
-
-##### Actions taken:
-
- 1) Load the debug dump.
- 2) Bring up the Analyzer CLI.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnRunEndRequest`.
-
-##### Returns:
-
- An instance of `OnRunEndResponse`.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.on_run_start(request)` {#LocalCLIDebugHook.on_run_start}
-
-Overrides on-run-start callback.
-
-##### Invoke the CLI to let the user choose what action to take:
-
- `run` / `invoke_stepper`.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnRunStartRequest`.
-
-##### Returns:
-
- An instance of `OnRunStartResponse`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If user chooses to prematurely exit the debugger.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.on_session_init(request)` {#LocalCLIDebugHook.on_session_init}
-
-Overrides on-session-init callback.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnSessionInitRequest`.
-
-##### Returns:
-
- An instance of `OnSessionInitResponse`.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.partial_run(handle, fetches, feed_dict=None)` {#LocalCLIDebugHook.partial_run}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.partial_run_setup(fetches, feeds=None)` {#LocalCLIDebugHook.partial_run_setup}
-
-Sets up the feeds and fetches for partial runs in the session.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#LocalCLIDebugHook.run}
-
-Wrapper around Session.run() that inserts tensor watch options.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as the `fetches` arg to regular `Session.run()`.
-* <b>`feed_dict`</b>: Same as the `feed_dict` arg to regular `Session.run()`.
-* <b>`options`</b>: Same as the `options` arg to regular `Session.run()`.
-* <b>`run_metadata`</b>: Same as the `run_metadata` arg to regular `Session.run()`.
-
-##### Returns:
-
- Simply forwards the output of the wrapped `Session.run()` call.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: On invalid `OnRunStartAction` value.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.sess_str` {#LocalCLIDebugHook.sess_str}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.session` {#LocalCLIDebugHook.session}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md
deleted file mode 100644
index 7035798b17..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.DType.md
+++ /dev/null
@@ -1,250 +0,0 @@
-Represents the type of the elements in a `Tensor`.
-
-The following `DType` objects are defined:
-
-* `tf.float16`: 16-bit half-precision floating-point.
-* `tf.float32`: 32-bit single-precision floating-point.
-* `tf.float64`: 64-bit double-precision floating-point.
-* `tf.bfloat16`: 16-bit truncated floating-point.
-* `tf.complex64`: 64-bit single-precision complex.
-* `tf.complex128`: 128-bit double-precision complex.
-* `tf.int8`: 8-bit signed integer.
-* `tf.uint8`: 8-bit unsigned integer.
-* `tf.uint16`: 16-bit unsigned integer.
-* `tf.int16`: 16-bit signed integer.
-* `tf.int32`: 32-bit signed integer.
-* `tf.int64`: 64-bit signed integer.
-* `tf.bool`: Boolean.
-* `tf.string`: String.
-* `tf.qint8`: Quantized 8-bit signed integer.
-* `tf.quint8`: Quantized 8-bit unsigned integer.
-* `tf.qint16`: Quantized 16-bit signed integer.
-* `tf.quint16`: Quantized 16-bit unsigned integer.
-* `tf.qint32`: Quantized 32-bit signed integer.
-* `tf.resource`: Handle to a mutable resource.
-
-In addition, variants of these types with the `_ref` suffix are
-defined for reference-typed tensors.
-
-The `tf.as_dtype()` function converts numpy types and string type
-names to a `DType` object.
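-
-For example, a small sketch (assuming `import numpy as np`):
-
-```python
-assert tf.as_dtype('float32') == tf.float32
-assert tf.as_dtype(np.int64) == tf.int64
-assert tf.float32.is_floating
-assert (tf.int8.min, tf.int8.max) == (-128, 127)
-```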
-- - -
-
-#### `tf.DType.__eq__(other)` {#DType.__eq__}
-
-Returns True iff this DType refers to the same type as `other`.
-
-
-- - -
-
-#### `tf.DType.__hash__()` {#DType.__hash__}
-
-
-
-
-- - -
-
-#### `tf.DType.__init__(type_enum)` {#DType.__init__}
-
-Creates a new `DataType`.
-
-NOTE(mrry): In normal circumstances, you should not need to
-construct a `DataType` object directly. Instead, use the
-`tf.as_dtype()` function.
-
-##### Args:
-
-
-* <b>`type_enum`</b>: A `types_pb2.DataType` enum value.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `type_enum` is not a valid `types_pb2.DataType` value.
-
-
-- - -
-
-#### `tf.DType.__ne__(other)` {#DType.__ne__}
-
-Returns True iff self != other.
-
-
-- - -
-
-#### `tf.DType.__repr__()` {#DType.__repr__}
-
-
-
-
-- - -
-
-#### `tf.DType.__str__()` {#DType.__str__}
-
-
-
-
-- - -
-
-#### `tf.DType.as_datatype_enum` {#DType.as_datatype_enum}
-
-Returns a `types_pb2.DataType` enum value based on this `DType`.
-
-
-- - -
-
-#### `tf.DType.as_numpy_dtype` {#DType.as_numpy_dtype}
-
-Returns a `numpy.dtype` based on this `DType`.
-
-
-- - -
-
-#### `tf.DType.base_dtype` {#DType.base_dtype}
-
-Returns a non-reference `DType` based on this `DType`.
-
-
-- - -
-
-#### `tf.DType.is_bool` {#DType.is_bool}
-
-Returns whether this is a boolean data type.
-
-
-- - -
-
-#### `tf.DType.is_compatible_with(other)` {#DType.is_compatible_with}
-
-Returns True if the `other` DType will be converted to this DType.
-
-The conversion rules are as follows:
-
-```python
-DType(T) .is_compatible_with(DType(T)) == True
-DType(T) .is_compatible_with(DType(T).as_ref) == True
-DType(T).as_ref.is_compatible_with(DType(T)) == False
-DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
-```
-
-##### Args:
-
-
-* <b>`other`</b>: A `DType` (or object that may be converted to a `DType`).
-
-##### Returns:
-
- True if a Tensor of the `other` `DType` will be implicitly converted to
- this `DType`.
-
-
-- - -
-
-#### `tf.DType.is_complex` {#DType.is_complex}
-
-Returns whether this is a complex floating point type.
-
-
-- - -
-
-#### `tf.DType.is_floating` {#DType.is_floating}
-
-Returns whether this is a (non-quantized, real) floating point type.
-
-
-- - -
-
-#### `tf.DType.is_integer` {#DType.is_integer}
-
-Returns whether this is a (non-quantized) integer type.
-
-
-- - -
-
-#### `tf.DType.is_numpy_compatible` {#DType.is_numpy_compatible}
-
-
-
-
-- - -
-
-#### `tf.DType.is_quantized` {#DType.is_quantized}
-
-Returns whether this is a quantized data type.
-
-
-- - -
-
-#### `tf.DType.is_unsigned` {#DType.is_unsigned}
-
-Returns whether this type is unsigned.
-
-Non-numeric, unordered, and quantized types are not considered unsigned, and
-this function returns `False`.
-
-##### Returns:
-
- Whether a `DType` is unsigned.
-
-
-- - -
-
-#### `tf.DType.limits` {#DType.limits}
-
-Return intensity limits, i.e. (min, max) tuple, of the dtype.
-
-##### Args:
-
-
-* <b>`clip_negative`</b>: bool, optional. If `True`, clip the negative range
-    (i.e. return 0 for min intensity) even if the image dtype allows
-    negative values.
-
-##### Returns:
-
-  min, max : tuple. Lower and upper intensity limits.
-
-
-- - -
-
-#### `tf.DType.max` {#DType.max}
-
-Returns the maximum representable value in this data type.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if this is a non-numeric, unordered, or quantized type.
-
-
-- - -
-
-#### `tf.DType.min` {#DType.min}
-
-Returns the minimum representable value in this data type.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if this is a non-numeric, unordered, or quantized type.
-
-
-- - -
-
-#### `tf.DType.name` {#DType.name}
-
-Returns the string name for this `DType`.
-
-
-- - -
-
-#### `tf.DType.real_dtype` {#DType.real_dtype}
-
-Returns the dtype corresponding to this dtype's real part.
-
-
-- - -
-
-#### `tf.DType.size` {#DType.size}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.FIFOQueue.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.FIFOQueue.from_list.md
deleted file mode 100644
index f27017af74..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.FIFOQueue.from_list.md
+++ /dev/null
@@ -1,21 +0,0 @@
-#### `tf.FIFOQueue.from_list(index, queues)` {#FIFOQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
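-For example, a minimal sketch that multiplexes two queues at dequeue time:
-
-```python
-q0 = tf.FIFOQueue(10, tf.float32)
-q1 = tf.FIFOQueue(10, tf.float32)
-index = tf.placeholder(tf.int32, shape=[])
-q = tf.FIFOQueue.from_list(index, [q0, q1])
-value = q.dequeue()  # Dequeues from `q0` or `q1` depending on `index`.
-```
-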
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Graph.md
deleted file mode 100644
index dc1e898211..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.Graph.md
+++ /dev/null
@@ -1,885 +0,0 @@
-A TensorFlow computation, represented as a dataflow graph.
-
-A `Graph` contains a set of
-[`Operation`](../../api_docs/python/framework.md#Operation) objects,
-which represent units of computation; and
-[`Tensor`](../../api_docs/python/framework.md#Tensor) objects, which represent
-the units of data that flow between operations.
-
-A default `Graph` is always registered, and accessible by calling
-[`tf.get_default_graph()`](../../api_docs/python/framework.md#get_default_graph).
-To add an operation to the default graph, simply call one of the functions
-that defines a new `Operation`:
-
-```python
-c = tf.constant(4.0)
-assert c.graph is tf.get_default_graph()
-```
-
-Another typical usage involves the
-[`Graph.as_default()`](../../api_docs/python/framework.md#Graph.as_default)
-context manager, which overrides the current default graph for the
-lifetime of the context:
-
-```python
-g = tf.Graph()
-with g.as_default():
- # Define operations and tensors in `g`.
- c = tf.constant(30.0)
- assert c.graph is g
-```
-
-Important note: This class *is not* thread-safe for graph construction. All
-operations should be created from a single thread, or external
-synchronization must be provided. Unless otherwise specified, all methods
-are not thread-safe.
-
-- - -
-
-#### `tf.Graph.__init__()` {#Graph.__init__}
-
-Creates a new, empty Graph.
-
-
-- - -
-
-#### `tf.Graph.as_default()` {#Graph.as_default}
-
-Returns a context manager that makes this `Graph` the default graph.
-
-This method should be used if you want to create multiple graphs
-in the same process. For convenience, a global default graph is
-provided, and all ops will be added to this graph if you do not
-create a new graph explicitly. Use this method with the `with` keyword
-to specify that ops created within the scope of a block should be
-added to this graph.
-
-The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default graph in that
-thread, you must explicitly add a `with g.as_default():` in that
-thread's function.
-
-The following code examples are equivalent:
-
-```python
-# 1. Using Graph.as_default():
-g = tf.Graph()
-with g.as_default():
- c = tf.constant(5.0)
- assert c.graph is g
-
-# 2. Constructing and making default:
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0)
- assert c.graph is g
-```
-
-##### Returns:
-
- A context manager for using this graph as the default graph.
-
-
-- - -
-
-#### `tf.Graph.as_graph_def(from_version=None, add_shapes=False)` {#Graph.as_graph_def}
-
-Returns a serialized `GraphDef` representation of this graph.
-
-The serialized `GraphDef` can be imported into another `Graph`
-(using [`import_graph_def()`](#import_graph_def)) or used with the
-[C++ Session API](../../api_docs/cc/index.md).
-
-This method is thread-safe.
-
-##### Args:
-
-
-* <b>`from_version`</b>: Optional. If this is set, returns a `GraphDef`
- containing only the nodes that were added to this graph since
- its `version` property had the given value.
-* <b>`add_shapes`</b>: If true, adds an "_output_shapes" list attr to each
- node with the inferred shapes of each of its outputs.
-
-##### Returns:
-
- A [`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)
- protocol buffer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `graph_def` would be too large.
-
-
-- - -
-
-#### `tf.Graph.finalize()` {#Graph.finalize}
-
-Finalizes this graph, making it read-only.
-
-After calling `g.finalize()`, no new operations can be added to
-`g`. This method is used to ensure that no operations are added
-to a graph when it is shared between multiple threads, for example
-when using a [`QueueRunner`](../../api_docs/python/train.md#QueueRunner).
-
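-For example, a small sketch:
-
-```python
-g = tf.Graph()
-with g.as_default():
-  c = tf.constant(1.0)
-g.finalize()
-assert g.finalized
-# Creating further ops in `g` would now raise a RuntimeError.
-```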
-
-- - -
-
-#### `tf.Graph.finalized` {#Graph.finalized}
-
-True if this graph has been finalized.
-
-
-
-- - -
-
-#### `tf.Graph.control_dependencies(control_inputs)` {#Graph.control_dependencies}
-
-Returns a context manager that specifies control dependencies.
-
-Use with the `with` keyword to specify that all operations constructed
-within the context should have control dependencies on
-`control_inputs`. For example:
-
-```python
-with g.control_dependencies([a, b, c]):
- # `d` and `e` will only run after `a`, `b`, and `c` have executed.
- d = ...
- e = ...
-```
-
-Multiple calls to `control_dependencies()` can be nested, and in
-that case a new `Operation` will have control dependencies on the union
-of `control_inputs` from all active contexts.
-
-```python
-with g.control_dependencies([a, b]):
- # Ops constructed here run after `a` and `b`.
- with g.control_dependencies([c, d]):
- # Ops constructed here run after `a`, `b`, `c`, and `d`.
-```
-
-You can pass None to clear the control dependencies:
-
-```python
-with g.control_dependencies([a, b]):
- # Ops constructed here run after `a` and `b`.
- with g.control_dependencies(None):
- # Ops constructed here run normally, not waiting for either `a` or `b`.
- with g.control_dependencies([c, d]):
- # Ops constructed here run after `c` and `d`, also not waiting
- # for either `a` or `b`.
-```
-
-*N.B.* The control dependencies context applies *only* to ops that
-are constructed within the context. Merely using an op or tensor
-in the context does not add a control dependency. The following
-example illustrates this point:
-
-```python
-# WRONG
-def my_func(pred, tensor):
- t = tf.matmul(tensor, tensor)
- with tf.control_dependencies([pred]):
- # The matmul op is created outside the context, so no control
- # dependency will be added.
- return t
-
-# RIGHT
-def my_func(pred, tensor):
- with tf.control_dependencies([pred]):
- # The matmul op is created in the context, so a control dependency
- # will be added.
- return tf.matmul(tensor, tensor)
-```
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: A list of `Operation` or `Tensor` objects which
- must be executed or computed before running the operations
- defined in the context. Can also be `None` to clear the control
- dependencies.
-
-##### Returns:
-
- A context manager that specifies control dependencies for all
- operations constructed within the context.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `control_inputs` is not a list of `Operation` or
- `Tensor` objects.
-
-
-- - -
-
-#### `tf.Graph.device(device_name_or_function)` {#Graph.device}
-
-Returns a context manager that specifies the default device to use.
-
-The `device_name_or_function` argument may either be a device name
-string, a device function, or None:
-
-* If it is a device name string, all operations constructed in
- this context will be assigned to the device with that name, unless
- overridden by a nested `device()` context.
-* If it is a function, it will be treated as a function from
- Operation objects to device name strings, and invoked each time
- a new Operation is created. The Operation will be assigned to
- the device with the returned name.
-* If it is None, all `device()` invocations from the enclosing context
- will be ignored.
-
-For information about the valid syntax of device name strings, see
-the documentation in
-[`DeviceNameUtils`](https://www.tensorflow.org/code/tensorflow/core/util/device_name_utils.h).
-
-For example:
-
-```python
-with g.device('/gpu:0'):
- # All operations constructed in this context will be placed
- # on GPU 0.
- with g.device(None):
- # All operations constructed in this context will have no
- # assigned device.
-
-# Defines a function from `Operation` to device string.
-def matmul_on_gpu(n):
- if n.type == "MatMul":
- return "/gpu:0"
- else:
- return "/cpu:0"
-
-with g.device(matmul_on_gpu):
- # All operations of type "MatMul" constructed in this context
- # will be placed on GPU 0; all other operations will be placed
- # on CPU 0.
-```
-
-**N.B.** The device scope may be overridden by op wrappers or
-other library code. For example, a variable assignment op
-`v.assign()` must be colocated with the `tf.Variable` `v`, and
-incompatible device scopes will be ignored.
-
-##### Args:
-
-
-* <b>`device_name_or_function`</b>: The device name or function to use in
- the context.
-
-##### Returns:
-
- A context manager that specifies the default device to use for newly
- created ops.
-
-
-- - -
-
-#### `tf.Graph.name_scope(name)` {#Graph.name_scope}
-
-Returns a context manager that creates hierarchical names for operations.
-
-A graph maintains a stack of name scopes. A `with name_scope(...):`
-statement pushes a new name onto the stack for the lifetime of the context.
-
-The `name` argument will be interpreted as follows:
-
-* A string (not ending with '/') will create a new name scope, in which
- `name` is appended to the prefix of all operations created in the
- context. If `name` has been used before, it will be made unique by
- calling `self.unique_name(name)`.
-* A scope previously captured from a `with g.name_scope(...) as
- scope:` statement will be treated as an "absolute" name scope, which
- makes it possible to re-enter existing scopes.
-* A value of `None` or the empty string will reset the current name scope
- to the top-level (empty) name scope.
-
-For example:
-
-```python
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0, name="c")
- assert c.op.name == "c"
- c_1 = tf.constant(6.0, name="c")
- assert c_1.op.name == "c_1"
-
- # Creates a scope called "nested"
- with g.name_scope("nested") as scope:
- nested_c = tf.constant(10.0, name="c")
- assert nested_c.op.name == "nested/c"
-
- # Creates a nested scope called "inner".
- with g.name_scope("inner"):
- nested_inner_c = tf.constant(20.0, name="c")
- assert nested_inner_c.op.name == "nested/inner/c"
-
- # Create a nested scope called "inner_1".
- with g.name_scope("inner"):
- nested_inner_1_c = tf.constant(30.0, name="c")
- assert nested_inner_1_c.op.name == "nested/inner_1/c"
-
- # Treats `scope` as an absolute name scope, and
- # switches to the "nested/" scope.
- with g.name_scope(scope):
- nested_d = tf.constant(40.0, name="d")
- assert nested_d.op.name == "nested/d"
-
- with g.name_scope(""):
- e = tf.constant(50.0, name="e")
- assert e.op.name == "e"
-```
-
-The name of the scope itself can be captured by `with
-g.name_scope(...) as scope:`, which stores the name of the scope
-in the variable `scope`. This value can be used to name an
-operation that represents the overall result of executing the ops
-in a scope. For example:
-
-```python
-inputs = tf.constant(...)
-with g.name_scope('my_layer') as scope:
- weights = tf.Variable(..., name="weights")
- biases = tf.Variable(..., name="biases")
- affine = tf.matmul(inputs, weights) + biases
- output = tf.nn.relu(affine, name=scope)
-```
-
-NOTE: This constructor validates the given `name`. Valid scope
-names match one of the following regular expressions:
-
- [A-Za-z0-9.][A-Za-z0-9_.\\-/]* (for scopes at the root)
- [A-Za-z0-9_.\\-/]* (for other scopes)
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the scope.
-
-##### Returns:
-
- A context manager that installs `name` as a new name scope.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `name` is not a valid scope name, according to the rules
- above.
-
-
-
-A `Graph` instance supports an arbitrary number of "collections"
-that are identified by name. For convenience when building a large
-graph, collections can store groups of related objects: for
-example, the `tf.Variable` uses a collection (named
-[`tf.GraphKeys.GLOBAL_VARIABLES`](../../api_docs/python/framework.md#GraphKeys)) for
-all variables that are created during the construction of a graph. The caller
-may define additional collections by specifying a new name.
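-
-For example, a small sketch of a custom collection:
-
-```python
-g = tf.Graph()
-with g.as_default():
-  loss = tf.constant(3.0, name="loss")
-  g.add_to_collection("my_losses", loss)
-  assert g.get_collection("my_losses") == [loss]
-```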
-
-- - -
-
-#### `tf.Graph.add_to_collection(name, value)` {#Graph.add_to_collection}
-
-Stores `value` in the collection with the given `name`.
-
-Note that collections are not sets, so it is possible to add a value to
-a collection several times.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. The `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collection.
-
-
-- - -
-
-#### `tf.Graph.add_to_collections(names, value)` {#Graph.add_to_collections}
-
-Stores `value` in the collections given by `names`.
-
-Note that collections are not sets, so it is possible to add a value to
-a collection several times. This function makes sure that duplicates in
-`names` are ignored, but it will not check for pre-existing membership of
-`value` in any of the collections in `names`.
-
-`names` can be any iterable, but if `names` is a string, it is treated as a
-single collection name.
-
-##### Args:
-
-
-* <b>`names`</b>: The keys for the collections to add to. The `GraphKeys` class
- contains many standard names for collections.
-* <b>`value`</b>: The value to add to the collections.
-
-
-- - -
-
-#### `tf.Graph.get_collection(name, scope=None)` {#Graph.get_collection}
-
-Returns a list of values in the collection with the given `name`.
-
-This is different from `get_collection_ref()`, which always returns the
-actual collection list if it exists: this method returns a new list each
-time it is called.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-* <b>`scope`</b>: (Optional.) If supplied, the resulting list is filtered to include
- only items whose `name` attribute matches using `re.match`. Items
- without a `name` attribute are never returned if a scope is supplied, and
- the choice of `re.match` means that a `scope` without special tokens
- filters by prefix.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or
- an empty list if no value has been added to that collection. The
- list contains the values in the order under which they were
- collected.
-
-
-- - -
-
-#### `tf.Graph.get_collection_ref(name)` {#Graph.get_collection_ref}
-
-Returns a list of values in the collection with the given `name`.
-
-If the collection exists, this returns the list itself, which can
-be modified in place to change the collection. If the collection does
-not exist, it is created as an empty list and the list is returned.
-
-This is different from `get_collection()` which always returns a copy of
-the collection list if it exists and never creates an empty collection.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or an empty
- list if no value has been added to that collection.
-
-
-
-- - -
-
-#### `tf.Graph.as_graph_element(obj, allow_tensor=True, allow_operation=True)` {#Graph.as_graph_element}
-
-Returns the object referred to by `obj`, as an `Operation` or `Tensor`.
-
-This function validates that `obj` represents an element of this
-graph, and gives an informative error message if it is not.
-
-This function is the canonical way to get/validate an object of
-one of the allowed types from an external argument reference in the
-Session API.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`obj`</b>: A `Tensor`, an `Operation`, or the name of a tensor or operation.
- Can also be any object with an `_as_graph_element()` method that returns
- a value of one of these types.
-* <b>`allow_tensor`</b>: If true, `obj` may refer to a `Tensor`.
-* <b>`allow_operation`</b>: If true, `obj` may refer to an `Operation`.
-
-##### Returns:
-
- The `Tensor` or `Operation` in the Graph corresponding to `obj`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `obj` is not a type that can be converted to one of
-    the allowed types.
-* <b>`ValueError`</b>: If `obj` is of an appropriate type but invalid. For
- example, an invalid string.
-* <b>`KeyError`</b>: If `obj` is not an object in the graph.
-
-
-- - -
-
-#### `tf.Graph.get_operation_by_name(name)` {#Graph.get_operation_by_name}
-
-Returns the `Operation` with the given `name`.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the `Operation` to return.
-
-##### Returns:
-
- The `Operation` with the given `name`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `name` is not a string.
-* <b>`KeyError`</b>: If `name` does not correspond to an operation in this graph.
-
-
-- - -
-
-#### `tf.Graph.get_tensor_by_name(name)` {#Graph.get_tensor_by_name}
-
-Returns the `Tensor` with the given `name`.
-
-This method may be called concurrently from multiple threads.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the `Tensor` to return.
-
-##### Returns:
-
- The `Tensor` with the given `name`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `name` is not a string.
-* <b>`KeyError`</b>: If `name` does not correspond to a tensor in this graph.
-
-
-- - -
-
-#### `tf.Graph.get_operations()` {#Graph.get_operations}
-
-Return the list of operations in the graph.
-
-You can modify the operations in place, but modifications
-to the list, such as inserts/deletes, have no effect on the
-list of operations known to the graph.
-
-This method may be called concurrently from multiple threads.
-
-##### Returns:
-
- A list of Operations.
-
-
-
-- - -
-
-#### `tf.Graph.seed` {#Graph.seed}
-
-The graph-level random seed of this graph.
-
-
-- - -
-
-#### `tf.Graph.unique_name(name, mark_as_used=True)` {#Graph.unique_name}
-
-Return a unique operation name for `name`.
-
-Note: You rarely need to call `unique_name()` directly. Most of
-the time you just need to create `with g.name_scope()` blocks to
-generate structured names.
-
-`unique_name` is used to generate structured names, separated by
-`"/"`, to help identify operations when debugging a graph.
-Operation names are displayed in error messages reported by the
-TensorFlow runtime, and in various visualization tools such as
-TensorBoard.
-
-If `mark_as_used` is set to `True`, which is the default, a new
-unique name is created and marked as in use. If it's set to `False`,
-the unique name is returned without actually being marked as used.
-This is useful when the caller simply wants to know what the name
-to be created will be.
-
-##### Args:
-
-
-* <b>`name`</b>: The name for an operation.
-* <b>`mark_as_used`</b>: Whether to mark this name as being used.
-
-##### Returns:
-
- A string to be passed to `create_op()` that will be used
- to name the operation being created.
-
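-For example, a small sketch (the exact suffixes follow the graph's internal
-name-counting scheme):
-
-```python
-g = tf.Graph()
-assert g.unique_name("op") == "op"
-assert g.unique_name("op") == "op_1"
-# With `mark_as_used=False` the name is previewed but not reserved:
-assert g.unique_name("op", mark_as_used=False) == "op_2"
-assert g.unique_name("op") == "op_2"
-```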
-
-- - -
-
-#### `tf.Graph.version` {#Graph.version}
-
-Returns a version number that increases as ops are added to the graph.
-
-Note that this is unrelated to the
-[GraphDef version](#Graph.graph_def_version).
-
-
-- - -
-
-#### `tf.Graph.graph_def_versions` {#Graph.graph_def_versions}
-
-The GraphDef version information of this graph.
-
-For details on the meaning of each version, see
-[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto).
-
-##### Returns:
-
- A `VersionDef`.
-
-
-
-- - -
-
-#### `tf.Graph.create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True, compute_device=True)` {#Graph.create_op}
-
-Creates an `Operation` in this graph.
-
-This is a low-level interface for creating an `Operation`. Most
-programs will not call this method directly, and instead use the
-Python op constructors, such as `tf.constant()`, which add ops to
-the default graph.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The `Operation` type to create. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-* <b>`inputs`</b>: A list of `Tensor` objects that will be inputs to the `Operation`.
-* <b>`dtypes`</b>: A list of `DType` objects that will be the types of the tensors
- that the operation produces.
-* <b>`input_types`</b>: (Optional.) A list of `DType`s that will be the types of
- the tensors that the operation consumes. By default, uses the base
- `DType` of each input in `inputs`. Operations that expect
- reference-typed inputs must specify `input_types` explicitly.
-* <b>`name`</b>: (Optional.) A string name for the operation. If not specified, a
- name is generated based on `op_type`.
-* <b>`attrs`</b>: (Optional.) A dictionary where the key is the attribute name (a
- string) and the value is the respective `attr` attribute of the
- `NodeDef` proto that will represent the operation (an `AttrValue`
- proto).
-* <b>`op_def`</b>: (Optional.) The `OpDef` proto that describes the `op_type` that
- the operation will have.
-* <b>`compute_shapes`</b>: (Optional.) If True, shape inference will be performed
- to compute the shapes of the outputs.
-* <b>`compute_device`</b>: (Optional.) If True, device functions will be executed
- to compute the device property of the Operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any of the inputs is not a `Tensor`.
-* <b>`ValueError`</b>: if colocation conflicts with existing device assignment.
-
-##### Returns:
-
- An `Operation` object.
-
-
-- - -
-
-#### `tf.Graph.gradient_override_map(op_type_map)` {#Graph.gradient_override_map}
-
-EXPERIMENTAL: A context manager for overriding gradient functions.
-
-This context manager can be used to override the gradient function
-that will be used for ops within the scope of the context.
-
-For example:
-
-```python
-@tf.RegisterGradient("CustomSquare")
-def _custom_square_grad(op, grad):
- # ...
-
-with tf.Graph().as_default() as g:
- c = tf.constant(5.0)
- s_1 = tf.square(c) # Uses the default gradient for tf.square.
- with g.gradient_override_map({"Square": "CustomSquare"}):
- s_2 = tf.square(c)  # Uses _custom_square_grad to compute the
- # gradient of s_2.
-```
-
-##### Args:
-
-
-* <b>`op_type_map`</b>: A dictionary mapping op type strings to alternative op
- type strings.
-
-##### Returns:
-
- A context manager that sets the alternative op type to be used for one
- or more ops created in that context.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_type_map` is not a dictionary mapping strings to
- strings.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.Graph.building_function` {#Graph.building_function}
-
-Returns True iff this graph represents a function.
-
-
-- - -
-
-#### `tf.Graph.clear_collection(name)` {#Graph.clear_collection}
-
-Clears all values in a collection.
-
-##### Args:
-
-
-* <b>`name`</b>: The key for the collection. The `GraphKeys` class contains many
- standard names for collections.
-
-
-- - -
-
-#### `tf.Graph.colocate_with(op, ignore_existing=False)` {#Graph.colocate_with}
-
-Returns a context manager that specifies an op to colocate with.
-
-Note: this function is not for public use, only for internal libraries.
-
-For example:
-
-```python
-a = tf.Variable([1.0])
-with g.colocate_with(a):
- b = tf.constant(1.0)
- c = tf.add(a, b)
-```
-
-`b` and `c` will always be colocated with `a`, no matter where `a`
-is eventually placed.
-
-**NOTE:** Using a colocation scope resets any existing device constraints.
-
-If `op` is `None` then `ignore_existing` must be `True` and the new
-scope resets all colocation and device constraints.
-
-##### Args:
-
-
-* <b>`op`</b>: The op to colocate all created ops with, or `None`.
-* <b>`ignore_existing`</b>: If true, only applies colocation of this op within
- the context, rather than applying all colocation properties
- on the stack. If `op` is `None`, this value must be `True`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if op is None but ignore_existing is False.
-
-##### Yields:
-
- A context manager that specifies the op with which to colocate
- newly created ops.
-
-
-- - -
-
-#### `tf.Graph.container(container_name)` {#Graph.container}
-
-Returns a context manager that specifies the resource container to use.
-
-Stateful operations, such as variables and queues, can maintain their
-states on devices so that they can be shared by multiple processes.
-A resource container is a string name under which these stateful
-operations are tracked. These resources can be released or cleared
-with `tf.Session.reset()`.
-
-For example:
-
-```python
-with g.container('experiment0'):
- # All stateful Operations constructed in this context will be placed
- # in resource container "experiment0".
- v1 = tf.Variable([1.0])
- v2 = tf.Variable([2.0])
- with g.container("experiment1"):
- # All stateful Operations constructed in this context will be
- # placed in resource container "experiment1".
- v3 = tf.Variable([3.0])
- q1 = tf.FIFOQueue(10, tf.float32)
- # All stateful Operations constructed in this context will be
- # created in resource container "experiment0".
- v4 = tf.Variable([4.0])
- q2 = tf.FIFOQueue(20, tf.float32)
- with g.container(""):
- # All stateful Operations constructed in this context will be
- # placed in the default resource container.
- v5 = tf.Variable([5.0])
- q3 = tf.FIFOQueue(30, tf.float32)
-
-# Resets container "experiment0", after which the state of v1, v2, v4, q2
-# will become undefined (such as uninitialized).
-tf.Session.reset(target, ["experiment0"])
-```
-
-##### Args:
-
-
-* <b>`container_name`</b>: container name string.
-
-##### Returns:
-
- A context manager for defining resource containers for stateful ops,
- yields the container name.
-
-
-- - -
-
-#### `tf.Graph.get_all_collection_keys()` {#Graph.get_all_collection_keys}
-
-Returns a list of collections used in this graph.
-
-
-- - -
-
-#### `tf.Graph.is_feedable(tensor)` {#Graph.is_feedable}
-
-Returns `True` if and only if `tensor` is feedable.
-
-
-- - -
-
-#### `tf.Graph.is_fetchable(tensor_or_op)` {#Graph.is_fetchable}
-
-Returns `True` if and only if `tensor_or_op` is fetchable.
-
-
-- - -
-
-#### `tf.Graph.prevent_feeding(tensor)` {#Graph.prevent_feeding}
-
-Marks the given `tensor` as unfeedable in this graph.
-
-
-- - -
-
-#### `tf.Graph.prevent_fetching(op)` {#Graph.prevent_fetching}
-
-Marks the given `op` as unfetchable in this graph.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md
deleted file mode 100644
index 5c0c5892bd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.InteractiveSession.md
+++ /dev/null
@@ -1,346 +0,0 @@
-A TensorFlow `Session` for use in interactive contexts, such as a shell.
-
-The only difference with a regular `Session` is that an `InteractiveSession`
-installs itself as the default session on construction.
-The methods [`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval)
-and [`Operation.run()`](../../api_docs/python/framework.md#Operation.run)
-will use that session to run ops.
-
-This is convenient in interactive shells and [IPython
-notebooks](http://ipython.org), as it avoids having to pass an explicit
-`Session` object to run ops.
-
-For example:
-
-```python
-sess = tf.InteractiveSession()
-a = tf.constant(5.0)
-b = tf.constant(6.0)
-c = a * b
-# We can just use 'c.eval()' without passing 'sess'
-print(c.eval())
-sess.close()
-```
-
-Note that a regular session installs itself as the default session when it
-is created in a `with` statement. The common usage in non-interactive
-programs is to follow that pattern:
-
-```python
-a = tf.constant(5.0)
-b = tf.constant(6.0)
-c = a * b
-with tf.Session():
-  # We can also use 'c.eval()' here.
-  print(c.eval())
-```
-- - -
-
-#### `tf.InteractiveSession.__del__()` {#InteractiveSession.__del__}
-
-
-
-
-- - -
-
-#### `tf.InteractiveSession.__init__(target='', graph=None, config=None)` {#InteractiveSession.__init__}
-
-Creates a new interactive TensorFlow session.
-
-If no `graph` argument is specified when constructing the session,
-the default graph will be launched in the session. If you are
-using more than one graph (created with `tf.Graph()`) in the same
-process, you will have to use different sessions for each graph,
-but each graph can be used in multiple sessions. In this case, it
-is often clearer to pass the graph to be launched explicitly to
-the session constructor.
-
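-For example, a minimal sketch of passing the graph explicitly (the names
-here are illustrative):
-
-```python
-g = tf.Graph()
-with g.as_default():
-  c = tf.constant(1.0)
-
-sess = tf.InteractiveSession(graph=g)
-print(c.eval())  # ==> 1.0
-sess.close()
-```
-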
-##### Args:
-
-
-* <b>`target`</b>: (Optional.) The execution engine to connect to.
- Defaults to using an in-process engine.
-* <b>`graph`</b>: (Optional.) The `Graph` to be launched (described above).
-* <b>`config`</b>: (Optional) `ConfigProto` proto used to configure the session.
-
-
-- - -
-
-#### `tf.InteractiveSession.as_default()` {#InteractiveSession.as_default}
-
-Returns a context manager that makes this object the default session.
-
-Use with the `with` keyword to specify that calls to
-[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
-[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
-executed in this session.
-
-```python
-c = tf.constant(...)
-sess = tf.Session()
-
-with sess.as_default():
-  assert tf.get_default_session() is sess
-  print(c.eval())
-```
-
-To get the current default session, use
-[`tf.get_default_session()`](#get_default_session).
-
-
-*N.B.* The `as_default` context manager *does not* close the
-session when you exit the context, and you must close the session
-explicitly.
-
-```python
-c = tf.constant(...)
-sess = tf.Session()
-with sess.as_default():
-  print(c.eval())
-# ...
-with sess.as_default():
-  print(c.eval())
-
-sess.close()
-```
-
-Alternatively, you can use `with tf.Session():` to create a
-session that is automatically closed on exiting the context,
-including when an uncaught exception is raised.
-
-*N.B.* The default session is a property of the current thread. If you
-create a new thread, and wish to use the default session in that
-thread, you must explicitly add a `with sess.as_default():` in that
-thread's function.
-
-##### Returns:
-
- A context manager using this session as the default session.
-
-
-- - -
-
-#### `tf.InteractiveSession.close()` {#InteractiveSession.close}
-
-Closes an `InteractiveSession`.
-
-
-- - -
-
-#### `tf.InteractiveSession.graph` {#InteractiveSession.graph}
-
-The graph that was launched in this session.
-
-
-- - -
-
-#### `tf.InteractiveSession.graph_def` {#InteractiveSession.graph_def}
-
-A serializable version of the underlying TensorFlow graph.
-
-##### Returns:
-
- A graph_pb2.GraphDef proto containing nodes for all of the Operations in
- the underlying TensorFlow graph.
-
-
-- - -
-
-#### `tf.InteractiveSession.partial_run(handle, fetches, feed_dict=None)` {#InteractiveSession.partial_run}
-
-Continues the execution with more feeds and fetches.
-
-This is EXPERIMENTAL and subject to change.
-
-To use partial execution, a user first calls `partial_run_setup()` and
-then a sequence of `partial_run()`. `partial_run_setup` specifies the
-list of feeds and fetches that will be used in the subsequent
-`partial_run` calls.
-
-The optional `feed_dict` argument allows the caller to override
-the value of tensors in the graph. See run() for more information.
-
-Below is a simple example:
-
-```python
-a = tf.placeholder(tf.float32, shape=[])
-b = tf.placeholder(tf.float32, shape=[])
-c = tf.placeholder(tf.float32, shape=[])
-r1 = tf.add(a, b)
-r2 = tf.multiply(r1, c)
-
-# `sess` is assumed to be an already-open session.
-h = sess.partial_run_setup([r1, r2], [a, b, c])
-res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
-res = sess.partial_run(h, r2, feed_dict={c: res})
-```
-
-##### Args:
-
-
-* <b>`handle`</b>: A handle for a sequence of partial runs.
-* <b>`fetches`</b>: A single graph element, a list of graph elements,
- or a dictionary whose values are graph elements or lists of graph
- elements (see documentation for `run`).
-* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
- (described above).
-
-##### Returns:
-
- Either a single value if `fetches` is a single graph element, or
- a list of values if `fetches` is a list, or a dictionary with the
- same keys as `fetches` if that is a dictionary
- (see documentation for `run`).
-
-##### Raises:
-
-* <b>`tf.errors.OpError`</b>: Or one of its subclasses on error.
-
-
-- - -
-
-#### `tf.InteractiveSession.partial_run_setup(fetches, feeds=None)` {#InteractiveSession.partial_run_setup}
-
-Sets up a graph with feeds and fetches for partial run.
-
-This is EXPERIMENTAL and subject to change.
-
-Note that unlike `run`, `feeds` only specifies the graph elements;
-the tensor values will be supplied by the subsequent `partial_run` calls.
-
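-For illustration, a minimal sketch (assuming `sess` is an open session;
-the placeholders are hypothetical):
-
-```python
-a = tf.placeholder(tf.float32, shape=[])
-b = tf.placeholder(tf.float32, shape=[])
-r = a + b
-# Only the graph elements are declared here; their values arrive later.
-h = sess.partial_run_setup(fetches=[r], feeds=[a, b])
-res = sess.partial_run(h, r, feed_dict={a: 1.0, b: 2.0})  # ==> 3.0
-```
-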
-##### Args:
-
-
-* <b>`fetches`</b>: A single graph element, or a list of graph elements.
-* <b>`feeds`</b>: A single graph element, or a list of graph elements.
-
-##### Returns:
-
- A handle for partial run.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
- closed).
-* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
-* <b>`tf.errors.OpError`</b>: Or one of its subclasses if a TensorFlow error happens.
-
-
-- - -
-
-#### `tf.InteractiveSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#InteractiveSession.run}
-
-Runs operations and evaluates tensors in `fetches`.
-
-This method runs one "step" of TensorFlow computation, by
-running the necessary graph fragment to execute every `Operation`
-and evaluate every `Tensor` in `fetches`, substituting the values in
-`feed_dict` for the corresponding input values.
-
-The `fetches` argument may be a single graph element, or an arbitrarily
-nested list, tuple, namedtuple, dict, or OrderedDict containing graph
-elements at its leaves. A graph element can be one of the following types:
-
-* An [`Operation`](../../api_docs/python/framework.md#Operation).
- The corresponding fetched value will be `None`.
-* A [`Tensor`](../../api_docs/python/framework.md#Tensor).
- The corresponding fetched value will be a numpy ndarray containing the
- value of that tensor.
-* A [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor).
- The corresponding fetched value will be a
- [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue)
- containing the value of that sparse tensor.
-* A `get_tensor_handle` op. The corresponding fetched value will be a
- numpy ndarray containing the handle of that tensor.
-* A `string` which is the name of a tensor or operation in the graph.
-
-The value returned by `run()` has the same shape as the `fetches` argument,
-where the leaves are replaced by the corresponding values returned by
-TensorFlow.
-
-Example:
-
-```python
- a = tf.constant([10, 20])
- b = tf.constant([1.0, 2.0])
- # 'fetches' can be a singleton
- v = session.run(a)
- # v is the numpy array [10, 20]
- # 'fetches' can be a list.
- v = session.run([a, b])
- # v is a Python list with 2 numpy arrays: the numpy array [10, 20] and the
- # 1-D array [1.0, 2.0]
- # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
- MyData = collections.namedtuple('MyData', ['a', 'b'])
- v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
- # v is a dict with
- # v['k1'] is a MyData namedtuple with 'a' the numpy array [10, 20] and
- # 'b' the numpy array [1.0, 2.0]
- # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
- # [10, 20].
-```
-
-The optional `feed_dict` argument allows the caller to override
-the value of tensors in the graph. Each key in `feed_dict` can be
-one of the following types:
-
-* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the
- value may be a Python scalar, string, list, or numpy ndarray
- that can be converted to the same `dtype` as that
- tensor. Additionally, if the key is a
- [placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of
- the value will be checked for compatibility with the placeholder.
-* If the key is a
- [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
- the value should be a
- [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue).
-* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value
- should be a nested tuple with the same structure that maps to their
- corresponding values as above.
-
-Each value in `feed_dict` must be convertible to a numpy array of the dtype
-of the corresponding key.
-
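-For example, a minimal sketch of feeding a placeholder (the names are
-illustrative):
-
-```python
-x = tf.placeholder(tf.float32, shape=[2])
-y = x * 2.0
-sess = tf.InteractiveSession()
-print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # ==> [ 2.  4.]
-sess.close()
-```
-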
-The optional `options` argument expects a [`RunOptions`] proto. The options
-allow controlling the behavior of this particular step (e.g. turning tracing
-on).
-
-The optional `run_metadata` argument expects a [`RunMetadata`] proto. When
-appropriate, the non-Tensor output of this step will be collected there. For
-example, when users turn on tracing in `options`, the profiled info will be
-collected into this argument and passed back.
-
-##### Args:
-
-
-* <b>`fetches`</b>: A single graph element, a list of graph elements,
- or a dictionary whose values are graph elements or lists of graph
- elements (described above).
-* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
- (described above).
-* <b>`options`</b>: A [`RunOptions`] protocol buffer
-* <b>`run_metadata`</b>: A [`RunMetadata`] protocol buffer
-
-##### Returns:
-
- Either a single value if `fetches` is a single graph element, or
- a list of values if `fetches` is a list, or a dictionary with the
- same keys as `fetches` if that is a dictionary (described above).
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
- closed).
-* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
-* <b>`ValueError`</b>: If `fetches` or `feed_dict` keys are invalid or refer to a
- `Tensor` that doesn't exist.
-
-
-- - -
-
-#### `tf.InteractiveSession.sess_str` {#InteractiveSession.sess_str}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.SparseFeature.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.SparseFeature.__new__.md
deleted file mode 100644
index 167611ebd5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.SparseFeature.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.SparseFeature.__new__(_cls, index_key, value_key, dtype, size, already_sorted=False)` {#SparseFeature.__new__}
-
-Create new instance of SparseFeature(index_key, value_key, dtype, size, already_sorted)
-
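-A minimal usage sketch with `tf.parse_example` (the key names, `size`,
-and the `serialized` batch of `tf.train.Example` protos are assumptions):
-
-```python
-features = {
-    'sparse': tf.SparseFeature(index_key='ix', value_key='val',
-                               dtype=tf.float32, size=100)
-}
-parsed = tf.parse_example(serialized, features)
-# parsed['sparse'] is a SparseTensor with dense_shape [batch_size, 100].
-```
-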
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.SparseTensorValue.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.SparseTensorValue.__new__.md
deleted file mode 100644
index cc3fb1c052..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.SparseTensorValue.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.SparseTensorValue.__new__(_cls, indices, values, dense_shape)` {#SparseTensorValue.__new__}
-
-Create new instance of SparseTensorValue(indices, values, dense_shape)
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TFRecordReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TFRecordReader.md
deleted file mode 100644
index dd8a5242da..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TFRecordReader.md
+++ /dev/null
@@ -1,173 +0,0 @@
-A Reader that outputs the records from a TFRecords file.
-
-See ReaderBase for supported methods.
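-
-For example, a minimal input-pipeline sketch (the filename
-`"data.tfrecords"` is a placeholder, and queue runners must be started
-before reading):
-
-```python
-filename_queue = tf.train.string_input_producer(["data.tfrecords"])
-reader = tf.TFRecordReader()
-# Each Read dequeues a filename if needed and returns one serialized
-# tf.train.Example proto as `value`.
-key, value = reader.read(filename_queue)
-```
-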
-- - -
-
-#### `tf.TFRecordReader.__init__(name=None, options=None)` {#TFRecordReader.__init__}
-
-Create a TFRecordReader.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`options`</b>: A TFRecordOptions object (optional).
-
-
-- - -
-
-#### `tf.TFRecordReader.num_records_produced(name=None)` {#TFRecordReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.num_work_units_completed(name=None)` {#TFRecordReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.read(queue, name=None)` {#TFRecordReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.read_up_to(queue, num_records, name=None)` {#TFRecordReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than `num_records` even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.reader_ref` {#TFRecordReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.TFRecordReader.reset(name=None)` {#TFRecordReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.TFRecordReader.restore_state(state, name=None)` {#TFRecordReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.TFRecordReader.serialize_state(name=None)` {#TFRecordReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.supports_serialize` {#TFRecordReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TextLineReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TextLineReader.md
deleted file mode 100644
index 9338435dde..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.TextLineReader.md
+++ /dev/null
@@ -1,175 +0,0 @@
-A Reader that outputs the lines of a file delimited by newlines.
-
-Newlines are stripped from the output.
-See ReaderBase for supported methods.
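-
-For example, a minimal sketch for reading CSV lines (the filename
-`"data.csv"` and the two-column `record_defaults` are assumptions):
-
-```python
-filename_queue = tf.train.string_input_producer(["data.csv"])
-reader = tf.TextLineReader(skip_header_lines=1)
-key, line = reader.read(filename_queue)
-# Decode each line into two float columns.
-col1, col2 = tf.decode_csv(line, record_defaults=[[0.0], [0.0]])
-```
-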
-- - -
-
-#### `tf.TextLineReader.__init__(skip_header_lines=None, name=None)` {#TextLineReader.__init__}
-
-Create a TextLineReader.
-
-##### Args:
-
-
-* <b>`skip_header_lines`</b>: An optional int. Defaults to 0. Number of lines
- to skip from the beginning of every file.
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.TextLineReader.num_records_produced(name=None)` {#TextLineReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.num_work_units_completed(name=None)` {#TextLineReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.read(queue, name=None)` {#TextLineReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.read_up_to(queue, num_records, name=None)` {#TextLineReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than `num_records` even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.reader_ref` {#TextLineReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.TextLineReader.reset(name=None)` {#TextLineReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.TextLineReader.restore_state(state, name=None)` {#TextLineReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.TextLineReader.serialize_state(name=None)` {#TextLineReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.supports_serialize` {#TextLineReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.WholeFileReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.WholeFileReader.md
deleted file mode 100644
index 0ae2d4e591..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.WholeFileReader.md
+++ /dev/null
@@ -1,175 +0,0 @@
-A Reader that outputs the entire contents of a file as a value.
-
-To use, enqueue filenames in a Queue. The output of Read will
-be a filename (key) and the contents of that file (value).
-
-See ReaderBase for supported methods.
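-
-For example, a minimal sketch for reading whole image files (the glob
-pattern is a placeholder):
-
-```python
-filename_queue = tf.train.string_input_producer(
-    tf.train.match_filenames_once("images/*.png"))
-reader = tf.WholeFileReader()
-key, contents = reader.read(filename_queue)
-image = tf.image.decode_png(contents)
-```
-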
-- - -
-
-#### `tf.WholeFileReader.__init__(name=None)` {#WholeFileReader.__init__}
-
-Create a WholeFileReader.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.WholeFileReader.num_records_produced(name=None)` {#WholeFileReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.num_work_units_completed(name=None)` {#WholeFileReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.read(queue, name=None)` {#WholeFileReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.read_up_to(queue, num_records, name=None)` {#WholeFileReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than `num_records` even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.reader_ref` {#WholeFileReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.WholeFileReader.reset(name=None)` {#WholeFileReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.WholeFileReader.restore_state(state, name=None)` {#WholeFileReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.WholeFileReader.serialize_state(name=None)` {#WholeFileReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.supports_serialize` {#WholeFileReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.assert_non_negative.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.assert_non_negative.md
deleted file mode 100644
index aa835e51cd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.assert_non_negative.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.assert_non_negative(x, data=None, summarize=None, message=None, name=None)` {#assert_non_negative}
-
-Assert the condition `x >= 0` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_non_negative(x)]):
-  output = tf.reduce_sum(x)
-```
-
-Non-negative means, for every element `x[i]` of `x`, we have `x[i] >= 0`.
-If `x` is empty this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional).
- Defaults to "assert_non_negative".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` is all non-negative.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.betainc.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.betainc.md
deleted file mode 100644
index 9da04a3642..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.betainc.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.betainc(a, b, x, name=None)` {#betainc}
-
-Compute the regularized incomplete beta integral \\(I_x(a, b)\\).
-
-The regularized incomplete beta integral is defined as:
-
-```
-I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}
-```
-where
-
-```
-B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt
-```
-
-is the incomplete beta function and \\(B(a, b)\\) is the *complete*
-beta function.
-
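-As a quick sanity check, `I_x(a, a)` at `x = 0.5` equals `0.5` by
-symmetry; a minimal sketch:
-
-```python
-a = tf.constant(0.5)
-b = tf.constant(0.5)
-x = tf.constant(0.5)
-result = tf.betainc(a, b, x)
-with tf.Session() as sess:
-  print(sess.run(result))  # ==> 0.5 (up to floating-point error)
-```
-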
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`b`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`x`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cholesky_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cholesky_solve.md
deleted file mode 100644
index cb0bdd7feb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.cholesky_solve.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.cholesky_solve(chol, rhs, name=None)` {#cholesky_solve}
-
-Solves systems of linear equations `A X = RHS`, given Cholesky factorizations.
-
-```python
-# Solve 10 separate 2x2 linear systems:
-A = ... # shape 10 x 2 x 2
-RHS = ... # shape 10 x 2 x 1
-chol = tf.cholesky(A) # shape 10 x 2 x 2
-X = tf.cholesky_solve(chol, RHS) # shape 10 x 2 x 1
-# tf.matmul(A, X) ~ RHS
-X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]
-
-# Solve five linear systems (K = 5) for every member of the length 10 batch.
-A = ... # shape 10 x 2 x 2
-RHS = ... # shape 10 x 2 x 5
-...
-X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`chol`</b>: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`.
- Cholesky factorization of `A`, e.g. `chol = tf.cholesky(A)`.
-  Since `chol` is a Cholesky factor, only the lower triangular parts
-  (including the diagonal) of the last two dimensions of `chol` are used;
-  the strictly upper part is assumed to be zero and never accessed.
-* <b>`rhs`</b>: A `Tensor`, same type as `chol`, shape is `[..., M, K]`.
-* <b>`name`</b>: A name to give this `Op`. Defaults to `cholesky_solve`.
-
-##### Returns:
-
- Solution to `A x = rhs`, shape `[..., M, K]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.constant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.constant.md
deleted file mode 100644
index 3cc1e1ac0a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.constant.md
+++ /dev/null
@@ -1,53 +0,0 @@
-### `tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)` {#constant}
-
-Creates a constant tensor.
-
- The resulting tensor is populated with values of type `dtype`, as
- specified by arguments `value` and (optionally) `shape` (see examples
- below).
-
- The argument `value` can be a constant value, or a list of values of type
- `dtype`. If `value` is a list, then the length of the list must be less
- than or equal to the number of elements implied by the `shape` argument (if
- specified). In the case where the list length is less than the number of
- elements specified by `shape`, the last element in the list will be used
- to fill the remaining entries.
-
- The argument `shape` is optional. If present, it specifies the dimensions of
- the resulting tensor. If not present, the shape of `value` is used.
-
- If the argument `dtype` is not specified, then the type is inferred from
- the type of `value`.
-
- For example:
-
- ```python
- # Constant 1-D Tensor populated with value list.
- tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7]
-
- # Constant 2-D tensor populated with scalar value -1.
- tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.]
- [-1. -1. -1.]]
- ```
-
-##### Args:
-
-
-* <b>`value`</b>: A constant value (or list) of output type `dtype`.
-
-
-* <b>`dtype`</b>: The type of the elements of the resulting tensor.
-
-
-* <b>`shape`</b>: Optional dimensions of resulting tensor.
-
-
-* <b>`name`</b>: Optional name for the tensor.
-
-
-* <b>`verify_shape`</b>: Boolean that enables verification of a shape of values.
-
-##### Returns:
-
- A Constant Tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.bayesflow.monte_carlo.expectation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.bayesflow.monte_carlo.expectation.md
deleted file mode 100644
index d8f9c5c462..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.bayesflow.monte_carlo.expectation.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.contrib.bayesflow.monte_carlo.expectation(f, p, z=None, n=None, seed=None, name='expectation')` {#expectation}
-
-Monte Carlo estimate of an expectation: `E_p[f(Z)]` with sample mean.
-
-This `Op` returns
-
-```
-n^{-1} sum_{i=1}^n f(z_i), where z_i ~ p
-\approx E_p[f(Z)]
-```
-
-The user supplies either a `Tensor` of samples `z`, or a number of samples
-`n` to draw.
-
-##### Args:
-
-
-* <b>`f`</b>: Callable mapping samples from `p` to `Tensors`.
-* <b>`p`</b>: `tf.contrib.distributions.Distribution`.
-* <b>`z`</b>: `Tensor` of samples from `p`, produced by `p.sample` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with the same `dtype` as `p`.
-
-
-##### Example:
-
-```python
-N_samples = 10000
-
-distributions = tf.contrib.distributions
-monte_carlo = tf.contrib.bayesflow.monte_carlo
-
-dist = distributions.Uniform([0.0, 0.0], [1.0, 2.0])
-elementwise_mean = lambda x: x
-mean_sum = lambda x: tf.reduce_sum(x, 1)
-
-estimate_elementwise_mean_tf = monte_carlo.expectation(elementwise_mean,
-                                                       dist,
-                                                       n=N_samples)
-estimate_mean_sum_tf = monte_carlo.expectation(mean_sum,
-                                               dist,
-                                               n=N_samples)
-
-with tf.Session() as sess:
-  estimate_elementwise_mean, estimate_mean_sum = (
-      sess.run([estimate_elementwise_mean_tf, estimate_mean_sum_tf]))
-
-print(estimate_elementwise_mean)
-# ==> [ 0.50018013  1.00097895]
-print(estimate_mean_sum)
-# ==> 1.49571
-```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.copy_graph.get_copied_op.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.copy_graph.get_copied_op.md
deleted file mode 100644
index 9e5a2118fd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.copy_graph.get_copied_op.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.copy_graph.get_copied_op(org_instance, graph, scope='')` {#get_copied_op}
-
-Given an `Operation` instance from some `Graph`, returns
-its namesake from `graph`, under the specified scope
-(default `""`).
-
-If a copy of `org_instance` is present in `graph` under the given
-`scope`, it will be returned.
-
-##### Args:
-
-
-* <b>`org_instance`</b>: An `Operation` from some `Graph`.
-* <b>`graph`</b>: The `Graph` to be searched for a copy of `org_instance`.
-* <b>`scope`</b>: The scope `org_instance` is present in.
-
-##### Returns:
-
- The `Operation` copy from `graph`.
-
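-A minimal usage sketch, assuming the op was first copied with
-`tf.contrib.copy_graph.copy_op_to_graph` (the names here are illustrative):
-
-```python
-g1 = tf.Graph()
-with g1.as_default():
-  c = tf.constant(1.0, name='c')
-
-g2 = tf.Graph()
-# Copy the op into g2, then look up the copy by name.
-tf.contrib.copy_graph.copy_op_to_graph(c.op, g2, [])
-copied_c = tf.contrib.copy_graph.get_copied_op(c.op, g2)
-```
-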
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.crf.CrfForwardRnnCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.crf.CrfForwardRnnCell.md
deleted file mode 100644
index a319e9bead..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.crf.CrfForwardRnnCell.md
+++ /dev/null
@@ -1,73 +0,0 @@
-Computes the alpha values in a linear-chain CRF.
-
-See http://www.cs.columbia.edu/~mcollins/fb.pdf for reference.
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.__call__(inputs, state, scope=None)` {#CrfForwardRnnCell.__call__}
-
-Build the CrfForwardRnnCell.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A [batch_size, num_tags] matrix of unary potentials.
-* <b>`state`</b>: A [batch_size, num_tags] matrix containing the previous alpha
- values.
-* <b>`scope`</b>: Unused variable scope of this cell.
-
-##### Returns:
-
- new_alphas, new_alphas: A pair of [batch_size, num_tags] matrices
-   containing the new alpha values.
-
-
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.__init__(transition_params)` {#CrfForwardRnnCell.__init__}
-
-Initialize the CrfForwardRnnCell.
-
-##### Args:
-
-
-* <b>`transition_params`</b>: A [num_tags, num_tags] matrix of binary potentials.
- This matrix is expanded into a [1, num_tags, num_tags] in preparation
- for the broadcast summation occurring within the cell.
-
-
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.output_size` {#CrfForwardRnnCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.state_size` {#CrfForwardRnnCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.crf.CrfForwardRnnCell.zero_state(batch_size, dtype)` {#CrfForwardRnnCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
- the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.crf.crf_binary_score.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.crf.crf_binary_score.md
deleted file mode 100644
index 956f52766d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.crf.crf_binary_score.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.contrib.crf.crf_binary_score(tag_indices, sequence_lengths, transition_params)` {#crf_binary_score}
-
-Computes the binary scores of tag sequences.
-
-##### Args:
-
-
-* <b>`tag_indices`</b>: A [batch_size, max_seq_len] matrix of tag indices.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`transition_params`</b>: A [num_tags, num_tags] matrix of binary potentials.
-
-##### Returns:
-
-
-* <b>`binary_scores`</b>: A [batch_size] vector of binary scores.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Categorical.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Categorical.md
deleted file mode 100644
index 6e2b72c7f3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Categorical.md
+++ /dev/null
@@ -1,629 +0,0 @@
-Categorical distribution.
-
-The categorical distribution is parameterized by the log-probabilities
-of a set of classes.
-
-#### Examples
-
-Creates a 3-class distribution with the 2nd class as the most likely to be
-drawn from.
-
-```python
-p = [0.1, 0.5, 0.4]
-dist = Categorical(probs=p)
-```
-
-Creates a 3-class distribution with the 2nd class as the most likely to be
-drawn from, using logits.
-
-```python
-logits = [-50, 400, 40]
-dist = Categorical(logits=logits)
-```
-
-Creates a 3-class distribution with the 3rd class as the most likely to be
-drawn. The distribution functions can be evaluated on counts.
-
-```python
-# counts is a scalar.
-p = [0.1, 0.4, 0.5]
-dist = Categorical(probs=p)
-dist.prob(0) # Shape []
-
-# p will be broadcast to [[0.1, 0.4, 0.5], [0.1, 0.4, 0.5]] to match counts.
-counts = [1, 0]
-dist.prob(counts) # Shape [2]
-
-# p will be broadcast to shape [3, 5, 7, 3] to match counts.
-counts = [[...]] # Shape [5, 7, 3]
-dist.prob(counts) # Shape [5, 7, 3]
-```
-- - -
-
-#### `tf.contrib.distributions.Categorical.__init__(logits=None, probs=None, dtype=tf.int32, validate_args=False, allow_nan_stats=True, name='Categorical')` {#Categorical.__init__}
-
-Initialize Categorical distributions using class log-probabilities.
-
-##### Args:
-
-
-* <b>`logits`</b>: An N-D `Tensor`, `N >= 1`, representing the log probabilities
- of a set of Categorical distributions. The first `N - 1` dimensions
- index into a batch of independent distributions and the last dimension
- represents a vector of logits for each class. Only one of `logits` or
- `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor`, `N >= 1`, representing the probabilities
- of a set of Categorical distributions. The first `N - 1` dimensions
- index into a batch of independent distributions and the last dimension
- represents a vector of probabilities for each class. Only one of
- `logits` or `probs` should be passed in.
-* <b>`dtype`</b>: The type of the event samples (default: int32).
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.allow_nan_stats` {#Categorical.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.batch_shape` {#Categorical.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.batch_shape_tensor(name='batch_shape_tensor')` {#Categorical.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.cdf(value, name='cdf')` {#Categorical.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.copy(**override_parameters_kwargs)` {#Categorical.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.covariance(name='covariance')` {#Categorical.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.dtype` {#Categorical.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.entropy(name='entropy')` {#Categorical.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.event_shape` {#Categorical.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.event_shape_tensor(name='event_shape_tensor')` {#Categorical.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.event_size` {#Categorical.event_size}
-
-Scalar `int32` tensor: the number of classes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.is_continuous` {#Categorical.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.is_scalar_batch(name='is_scalar_batch')` {#Categorical.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.is_scalar_event(name='is_scalar_event')` {#Categorical.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.log_cdf(value, name='log_cdf')` {#Categorical.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.log_prob(value, name='log_prob')` {#Categorical.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.log_survival_function(value, name='log_survival_function')` {#Categorical.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.logits` {#Categorical.logits}
-
-Vector of coordinatewise logits.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.mean(name='mean')` {#Categorical.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.mode(name='mode')` {#Categorical.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.name` {#Categorical.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Categorical.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.param_static_shapes(cls, sample_shape)` {#Categorical.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.parameters` {#Categorical.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.prob(value, name='prob')` {#Categorical.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.probs` {#Categorical.probs}
-
-Vector of coordinatewise probabilities.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.reparameterization_type` {#Categorical.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.sample(sample_shape=(), seed=None, name='sample')` {#Categorical.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.stddev(name='stddev')` {#Categorical.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.survival_function(value, name='survival_function')` {#Categorical.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.validate_args` {#Categorical.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Categorical.variance(name='variance')` {#Categorical.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Chi2.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Chi2.md
deleted file mode 100644
index 76a28f8d17..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Chi2.md
+++ /dev/null
@@ -1,612 +0,0 @@
-Chi2 distribution.
-
-The Chi2 distribution is defined over positive real numbers using a degrees of
-freedom ("df") parameter.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; df, x > 0) = x**(0.5 df - 1) exp(-0.5 x) / Z
-Z = 2**(0.5 df) Gamma(0.5 df)
-```
-
-where:
-
-* `df` denotes the degrees of freedom,
-* `Z` is the normalization constant, and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The Chi2 distribution is a special case of the Gamma distribution, i.e.,
-
-```python
-Chi2(df) = Gamma(concentration=0.5 * df, rate=0.5)
-```
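-
-For example, a minimal sketch of constructing and querying the
-distribution (the values are illustrative):
-
-```python
-ds = tf.contrib.distributions
-
-dist = ds.Chi2(df=3.0)
-pdf = dist.prob([1.0, 2.0])  # pdf evaluated at two points
-samples = dist.sample(5)     # five scalar samples
-```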
-- - -
-
-#### `tf.contrib.distributions.Chi2.__init__(df, validate_args=False, allow_nan_stats=True, name='Chi2')` {#Chi2.__init__}
-
-Construct Chi2 distributions with parameter `df`.
-
-##### Args:
-
-
-* <b>`df`</b>: Floating point tensor, the degrees of freedom of the
- distribution(s). `df` must contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.allow_nan_stats` {#Chi2.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.batch_shape` {#Chi2.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.batch_shape_tensor(name='batch_shape_tensor')` {#Chi2.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.cdf(value, name='cdf')` {#Chi2.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.concentration` {#Chi2.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.copy(**override_parameters_kwargs)` {#Chi2.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
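-A minimal usage sketch, overriding one initialization argument:
-
-```python
-chi2 = tf.contrib.distributions.Chi2(df=3.)
-chi2_alt = chi2.copy(df=5.)  # Same remaining options, new `df`.
-```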
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.covariance(name='covariance')` {#Chi2.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.df` {#Chi2.df}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.dtype` {#Chi2.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.entropy(name='entropy')` {#Chi2.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.event_shape` {#Chi2.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.event_shape_tensor(name='event_shape_tensor')` {#Chi2.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.is_continuous` {#Chi2.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.is_scalar_batch(name='is_scalar_batch')` {#Chi2.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.is_scalar_event(name='is_scalar_event')` {#Chi2.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.log_cdf(value, name='log_cdf')` {#Chi2.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.log_prob(value, name='log_prob')` {#Chi2.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.log_survival_function(value, name='log_survival_function')` {#Chi2.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.mean(name='mean')` {#Chi2.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.mode(name='mode')` {#Chi2.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise; since `Chi2(df) = Gamma(0.5 * df, 0.5)`,
-this reduces to `df - 2` when `df > 2`. If `self.allow_nan_stats` is
-`False`, an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.name` {#Chi2.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Chi2.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
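-As an illustrative sketch (using `Normal`, which overrides `_param_shapes`;
-the exact keys depend on the distribution's parameterization):
-
-```python
-ds = tf.contrib.distributions
-shapes = ds.Normal.param_shapes([100])
-# ==> {'loc': <shape Tensor [100]>, 'scale': <shape Tensor [100]>}
-```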
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.param_static_shapes(cls, sample_shape)` {#Chi2.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.parameters` {#Chi2.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.prob(value, name='prob')` {#Chi2.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.rate` {#Chi2.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.reparameterization_type` {#Chi2.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.sample(sample_shape=(), seed=None, name='sample')` {#Chi2.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.stddev(name='stddev')` {#Chi2.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.survival_function(value, name='survival_function')` {#Chi2.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.validate_args` {#Chi2.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Chi2.variance(name='variance')` {#Chi2.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.ConditionalDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.ConditionalDistribution.md
deleted file mode 100644
index 97d31bb273..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.ConditionalDistribution.md
+++ /dev/null
@@ -1,476 +0,0 @@
-Distribution that supports intrinsic parameters (local latents).
-
-Subclasses of this distribution may have additional keyword arguments passed
-to their sample-based methods (e.g., `sample`, `log_prob`).
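-
-A purely hypothetical sketch (the subclass and its `z` keyword below are
-invented for illustration and are not part of the library):
-
-```python
-# `dist` is a ConditionalDistribution subclass whose `_log_prob`
-# signature accepts an extra condition kwarg, e.g. `_log_prob(x, z=None)`.
-log_p = dist.log_prob(x, z=local_latent)  # `z` is forwarded to `_log_prob`.
-```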
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.__init__(dtype, is_continuous, reparameterization_type, validate_args, allow_nan_stats, parameters=None, graph_parents=None, name=None)` {#ConditionalDistribution.__init__}
-
-Constructs the `Distribution`.
-
-**This is a private method for subclass use.**
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of the event samples. `None` implies no type-enforcement.
-* <b>`is_continuous`</b>: Python `bool`. If `True` this `Distribution` is continuous
- over its supported domain.
-* <b>`reparameterization_type`</b>: Instance of `ReparameterizationType`.
- If `distributions.FULLY_REPARAMETERIZED`, this
- `Distribution` can be reparameterized in terms of some standard
- distribution with a function whose Jacobian is constant for the support
- of the standard distribution. If `distributions.NOT_REPARAMETERIZED`,
- then no such reparameterization is available.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`parameters`</b>: Python `dict` of parameters used to instantiate this
- `Distribution`.
-* <b>`graph_parents`</b>: Python `list` of graph prerequisites of this
- `Distribution`.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class. Default:
- subclass name.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any member of graph_parents is `None` or not a `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.allow_nan_stats` {#ConditionalDistribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.batch_shape` {#ConditionalDistribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.batch_shape_tensor(name='batch_shape_tensor')` {#ConditionalDistribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.cdf(*args, **kwargs)` {#ConditionalDistribution.cdf}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.copy(**override_parameters_kwargs)` {#ConditionalDistribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.covariance(name='covariance')` {#ConditionalDistribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.dtype` {#ConditionalDistribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.entropy(name='entropy')` {#ConditionalDistribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.event_shape` {#ConditionalDistribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.event_shape_tensor(name='event_shape_tensor')` {#ConditionalDistribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.is_continuous` {#ConditionalDistribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.is_scalar_batch(name='is_scalar_batch')` {#ConditionalDistribution.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.is_scalar_event(name='is_scalar_event')` {#ConditionalDistribution.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.log_cdf(*args, **kwargs)` {#ConditionalDistribution.log_cdf}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.log_prob(*args, **kwargs)` {#ConditionalDistribution.log_prob}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.log_survival_function(*args, **kwargs)` {#ConditionalDistribution.log_survival_function}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.mean(name='mean')` {#ConditionalDistribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.mode(name='mode')` {#ConditionalDistribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.name` {#ConditionalDistribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#ConditionalDistribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.param_static_shapes(cls, sample_shape)` {#ConditionalDistribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.parameters` {#ConditionalDistribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.prob(*args, **kwargs)` {#ConditionalDistribution.prob}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.reparameterization_type` {#ConditionalDistribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.sample(*args, **kwargs)` {#ConditionalDistribution.sample}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.stddev(name='stddev')` {#ConditionalDistribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.survival_function(*args, **kwargs)` {#ConditionalDistribution.survival_function}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.validate_args` {#ConditionalDistribution.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalDistribution.variance(name='variance')` {#ConditionalDistribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.ReparameterizationType.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.ReparameterizationType.md
deleted file mode 100644
index 35e5d87db8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.ReparameterizationType.md
+++ /dev/null
@@ -1,47 +0,0 @@
-Instances of this class represent how sampling is reparameterized.
-
-Two static instances exist in the distributions library, signifying
-one of two possible properties for samples from a distribution:
-
-`FULLY_REPARAMETERIZED`: Samples from the distribution are fully
- reparameterized, and straight-through gradients are supported.
-
-`NOT_REPARAMETERIZED`: Samples from the distribution are not fully
-  reparameterized, and straight-through gradients are either partially
-  unsupported or not supported at all. In this case, for purposes of
-  e.g. RL or variational inference, it is generally safest to wrap the
-  sample results in a `tf.stop_gradient` call and use policy gradients /
-  surrogate losses instead, as sketched below.
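-
-A minimal sketch of that pattern (assuming, as in this library version, that
-`Gamma` is `NOT_REPARAMETERIZED`):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-dist = ds.Gamma(concentration=2., rate=1.)
-
-samples = dist.sample(10)
-if dist.reparameterization_type is not ds.FULLY_REPARAMETERIZED:
-  samples = tf.stop_gradient(samples)  # Block pathwise gradients.
-```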
-- - -
-
-#### `tf.contrib.distributions.ReparameterizationType.__eq__(other)` {#ReparameterizationType.__eq__}
-
-Determine if this `ReparameterizationType` is equal to another.
-
-Since `ReparameterizationType` instances are constant static global
-instances, equality checks if two instances' `id()` values are equal.
-
-##### Args:
-
-
-* <b>`other`</b>: Object to compare against.
-
-##### Returns:
-
- `self is other`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ReparameterizationType.__init__(rep_type)` {#ReparameterizationType.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ReparameterizationType.__repr__()` {#ReparameterizationType.__repr__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Uniform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Uniform.md
deleted file mode 100644
index a3455aa9ea..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.Uniform.md
+++ /dev/null
@@ -1,625 +0,0 @@
-Uniform distribution with `low` and `high` parameters.
-
-### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; a, b) = I[a <= x < b] / Z
-Z = b - a
-```
-
-where:
-
-* `low = a`,
-* `high = b`,
-* `Z` is the normalizing constant, and,
-* `I[predicate]` is the [indicator function](
- https://en.wikipedia.org/wiki/Indicator_function) for `predicate`.
-
-The parameters `low` and `high` must be shaped in a way that supports
-broadcasting (e.g., `high - low` is a valid operation).
-
-### Examples
-
-```python
-# Without broadcasting:
-u1 = Uniform(low=3.0, high=4.0) # a single uniform distribution [3, 4]
-u2 = Uniform(low=[1.0, 2.0],
- high=[3.0, 4.0]) # 2 distributions [1, 3], [2, 4]
-u3 = Uniform(low=[[1.0, 2.0],
- [3.0, 4.0]],
- high=[[1.5, 2.5],
- [3.5, 4.5]]) # 4 distributions
-```
-
-```python
-# With broadcasting:
-u1 = Uniform(low=3.0, high=[5.0, 6.0, 7.0]) # 3 distributions
-```
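-
-A small additional sketch of sampling and densities (continuing the
-shorthand `Uniform` for the fully qualified class above):
-
-```python
-u = Uniform(low=0., high=10.)
-samples = u.sample(5)  # Shape [5], values in [0, 10).
-p = u.prob(3.0)        # ==> 0.1, i.e., 1 / (high - low).
-```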
-- - -
-
-#### `tf.contrib.distributions.Uniform.__init__(low=0.0, high=1.0, validate_args=False, allow_nan_stats=True, name='Uniform')` {#Uniform.__init__}
-
-Initialize a batch of Uniform distributions.
-
-##### Args:
-
-
-* <b>`low`</b>: Floating point tensor, lower boundary of the output interval. Must
- have `low < high`.
-* <b>`high`</b>: Floating point tensor, upper boundary of the output interval. Must
- have `low < high`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: if `low >= high` and `validate_args=False`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.allow_nan_stats` {#Uniform.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.batch_shape` {#Uniform.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.batch_shape_tensor(name='batch_shape_tensor')` {#Uniform.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.cdf(value, name='cdf')` {#Uniform.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
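-For `Uniform` the cdf has a simple closed form; a quick sketch:
-
-```python
-u = tf.contrib.distributions.Uniform(low=0., high=10.)
-c = u.cdf(4.)  # ==> 0.4, i.e., (x - low) / (high - low) on [low, high].
-```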
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.copy(**override_parameters_kwargs)` {#Uniform.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.covariance(name='covariance')` {#Uniform.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.dtype` {#Uniform.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.entropy(name='entropy')` {#Uniform.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.event_shape` {#Uniform.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.event_shape_tensor(name='event_shape_tensor')` {#Uniform.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.high` {#Uniform.high}
-
-Upper boundary of the output interval.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.is_continuous` {#Uniform.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.is_scalar_batch(name='is_scalar_batch')` {#Uniform.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.is_scalar_event(name='is_scalar_event')` {#Uniform.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.log_cdf(value, name='log_cdf')` {#Uniform.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.log_prob(value, name='log_prob')` {#Uniform.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.log_survival_function(value, name='log_survival_function')` {#Uniform.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.low` {#Uniform.low}
-
-Lower boundary of the output interval.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.mean(name='mean')` {#Uniform.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.mode(name='mode')` {#Uniform.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.name` {#Uniform.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Uniform.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.param_static_shapes(cls, sample_shape)` {#Uniform.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.parameters` {#Uniform.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.prob(value, name='prob')` {#Uniform.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.range(name='range')` {#Uniform.range}
-
-`high - low`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.reparameterization_type` {#Uniform.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.sample(sample_shape=(), seed=None, name='sample')` {#Uniform.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.stddev(name='stddev')` {#Uniform.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.survival_function(value, name='survival_function')` {#Uniform.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.validate_args` {#Uniform.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Uniform.variance(name='variance')` {#Uniform.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.WishartCholesky.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.WishartCholesky.md
deleted file mode 100644
index 156e009dd4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.WishartCholesky.md
+++ /dev/null
@@ -1,673 +0,0 @@
-The matrix Wishart distribution on positive definite matrices.
-
-This distribution is defined by a scalar degrees-of-freedom parameter `df`
-and a lower-triangular Cholesky factor which characterizes the scale matrix.
-
-Using WishartCholesky is a constant-factor improvement over WishartFull. It
-saves an O(nbk^3) operation, i.e., a matrix-product operation for sampling
-and a Cholesky factorization in log_prob. For most use cases it saves
-another O(nbk^3) operation since most uses of Wishart will also use the
-Cholesky factorization.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(X; df, scale) = det(X)**(0.5 (df-k-1)) exp(-0.5 tr[inv(scale) X]) / Z
-Z = 2**(0.5 df k) |det(scale)|**(0.5 df) Gamma_k(0.5 df)
-```
-
-where:
-
-* `df >= k` denotes the degrees of freedom,
-* `scale` is a symmetric, positive definite, `k x k` matrix,
-* `Z` is the normalizing constant, and,
-* `Gamma_k` is the [multivariate Gamma function](
- https://en.wikipedia.org/wiki/Multivariate_gamma_function).
-
-
-#### Examples
-
-```python
-# Initialize a single 3x3 Wishart with Cholesky factored scale matrix and 5
-# degrees-of-freedom.(*)
-df = 5
-chol_scale = tf.cholesky(...) # Shape is [3, 3].
-dist = tf.contrib.distributions.WishartCholesky(df=df, scale=chol_scale)
-
-# Evaluate this on an observation in R^3, returning a scalar.
-x = ... # A 3x3 positive definite matrix.
-dist.prob(x) # Shape is [], a scalar.
-
-# Evaluate this on two observations, each in R^{3x3}, returning a length-two
-# Tensor.
-x = [x0, x1] # Shape is [2, 3, 3].
-dist.prob(x) # Shape is [2].
-
-# Initialize two 3x3 Wisharts with Cholesky factored scale matrices.
-df = [5, 4]
-chol_scale = tf.cholesky(...) # Shape is [2, 3, 3].
-dist = tf.contrib.distributions.WishartCholesky(df=df, scale=chol_scale)
-
-# Evaluate this on four observations.
-x = [[x0, x1], [x2, x3]] # Shape is [2, 2, 3, 3].
-dist.prob(x) # Shape is [2, 2].
-
-# (*) - To efficiently create a trainable covariance matrix, see the example
-# in tf.contrib.distributions.matrix_diag_transform.
-```
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.__init__(df, scale, cholesky_input_output_matrices=False, validate_args=False, allow_nan_stats=True, name='WishartCholesky')` {#WishartCholesky.__init__}
-
-Construct Wishart distributions.
-
-##### Args:
-
-
-* <b>`df`</b>: `float` or `double` `Tensor`. Degrees of freedom, must be greater than
- or equal to dimension of the scale matrix.
-* <b>`scale`</b>: `float` or `double` `Tensor`. The Cholesky factorization of
- the symmetric positive definite scale matrix of the distribution.
-* <b>`cholesky_input_output_matrices`</b>: Python `bool`. Any function whose
-    input or output is a matrix assumes the input is Cholesky factored and
-    returns a Cholesky-factored matrix. For example, when
-    `cholesky_input_output_matrices=True`, `log_prob` takes a
-    Cholesky-factored input and `sample_n` returns a Cholesky factor (see
-    the sketch after this list).
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
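-A hedged sketch of the `cholesky_input_output_matrices` flag (values are
-illustrative):
-
-```python
-ds = tf.contrib.distributions
-chol = tf.cholesky(tf.constant([[2.0, 0.3], [0.3, 1.0]]))
-dist = ds.WishartCholesky(df=4., scale=chol,
-                          cholesky_input_output_matrices=True)
-x_chol = dist.sample()      # Returned as a Cholesky factor, shape [2, 2].
-lp = dist.log_prob(x_chol)  # Expects a Cholesky-factored input.
-```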
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.allow_nan_stats` {#WishartCholesky.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.batch_shape` {#WishartCholesky.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.batch_shape_tensor(name='batch_shape_tensor')` {#WishartCholesky.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.cdf(value, name='cdf')` {#WishartCholesky.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.cholesky_input_output_matrices` {#WishartCholesky.cholesky_input_output_matrices}
-
-Boolean indicating if `Tensor` input/outputs are Cholesky factorized.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.copy(**override_parameters_kwargs)` {#WishartCholesky.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.covariance(name='covariance')` {#WishartCholesky.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.df` {#WishartCholesky.df}
-
-Wishart distribution degree(s) of freedom.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.dimension` {#WishartCholesky.dimension}
-
-Dimension of underlying vector space. The `p` in `R^(p*p)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.dtype` {#WishartCholesky.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.entropy(name='entropy')` {#WishartCholesky.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.event_shape` {#WishartCholesky.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.event_shape_tensor(name='event_shape_tensor')` {#WishartCholesky.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.is_continuous` {#WishartCholesky.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.is_scalar_batch(name='is_scalar_batch')` {#WishartCholesky.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.is_scalar_event(name='is_scalar_event')` {#WishartCholesky.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.log_cdf(value, name='log_cdf')` {#WishartCholesky.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.log_normalization(name='log_normalization')` {#WishartCholesky.log_normalization}
-
-Computes the log normalizing constant, log(Z).
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.log_prob(value, name='log_prob')` {#WishartCholesky.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.log_survival_function(value, name='log_survival_function')` {#WishartCholesky.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.mean(name='mean')` {#WishartCholesky.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.mean_log_det(name='mean_log_det')` {#WishartCholesky.mean_log_det}
-
-Computes E[log(det(X))] under this Wishart distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.mode(name='mode')` {#WishartCholesky.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.name` {#WishartCholesky.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#WishartCholesky.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.param_static_shapes(cls, sample_shape)` {#WishartCholesky.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.parameters` {#WishartCholesky.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.prob(value, name='prob')` {#WishartCholesky.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.reparameterization_type` {#WishartCholesky.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.sample(sample_shape=(), seed=None, name='sample')` {#WishartCholesky.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.scale()` {#WishartCholesky.scale}
-
-Wishart distribution scale matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.scale_operator_pd` {#WishartCholesky.scale_operator_pd}
-
-Wishart distribution scale matrix as an OperatorPD.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.stddev(name='stddev')` {#WishartCholesky.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.survival_function(value, name='survival_function')` {#WishartCholesky.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.validate_args` {#WishartCholesky.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartCholesky.variance(name='variance')` {#WishartCholesky.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.bijector.Bijector.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.bijector.Bijector.md
deleted file mode 100644
index bc383ea122..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.distributions.bijector.Bijector.md
+++ /dev/null
@@ -1,509 +0,0 @@
-Interface for transforming a `Distribution` sample.
-
-A `Bijector` implements a
-[diffeomorphism](https://en.wikipedia.org/wiki/Diffeomorphism), i.e., a
-bijective, differentiable function. A `Bijector` is used by
-`TransformedDistribution` but can be generally used for transforming a
-`Distribution` generated `Tensor`. A `Bijector` is characterized by three
-operations:
-
-1. Forward Evaluation
-
- Useful for turning one random outcome into another random outcome from a
- different distribution.
-
-2. Inverse Evaluation
-
- Useful for "reversing" a transformation to compute one probability in
- terms of another.
-
-3. (log o det o Jacobian o inverse)(x)
-
- "The log of the determinant of the matrix of all first-order partial
- derivatives of the inverse function."
- Useful for inverting a transformation to compute one probability in terms
- of another. Geometrically, the det(Jacobian) is the volume of the
- transformation and is used to scale the probability.
-
-By convention, transformations of random variables are named in terms of the
-forward transformation. The forward transformation creates samples, the
-inverse is useful for computing probabilities.
-
-Example Use:
-
- - Basic properties:
-
- ```python
- x = ... # A tensor.
- # Evaluate forward transformation.
- fwd_x = my_bijector.forward(x)
- x == my_bijector.inverse(fwd_x)
- x != my_bijector.forward(fwd_x) # Not equal because g(x) != g(g(x)).
- ```
-
- - Computing a log-likelihood:
-
- ```python
- def transformed_log_prob(bijector, log_prob, x):
- return (bijector.inverse_log_det_jacobian(x) +
- log_prob(bijector.inverse(x)))
- ```
-
- - Transforming a random outcome:
-
- ```python
- def transformed_sample(bijector, x):
- return bijector.forward(x)
- ```
-
-Example transformations:
-
- - "Exponential"
-
- ```
- Y = g(X) = exp(X)
- X ~ Normal(0, 1) # Univariate.
- ```
-
- Implies:
-
- ```
- g^{-1}(Y) = log(Y)
- |Jacobian(g^{-1})(y)| = 1 / y
- Y ~ LogNormal(0, 1), i.e.,
- prob(Y=y) = |Jacobian(g^{-1})(y)| * prob(X=g^{-1}(y))
- = (1 / y) Normal(log(y); 0, 1)
- ```
-
- Here is an example of how one might implement the `Exp` bijector:
-
- ```
- class Exp(Bijector):
- def __init__(self, event_ndims=0, validate_args=False, name="exp"):
- super(Exp, self).__init__(event_ndims=event_ndims,
- validate_args=validate_args, name=name)
- def _forward(self, x):
- return math_ops.exp(x)
- def _inverse_and_inverse_log_det_jacobian(self, y):
- x = math_ops.log(y)
- return x, -self._forward_log_det_jacobian(x)
- def _forward_log_det_jacobian(self, x):
- if self.event_ndims is None:
- raise ValueError("Jacobian requires known event_ndims.")
-    # Sum over the event dimensions, i.e., the last `event_ndims` axes.
-    event_dims = math_ops.range(array_ops.rank(x) - self.event_ndims,
-                                array_ops.rank(x))
-    return math_ops.reduce_sum(x, axis=event_dims)
- ```
-
- - "Affine"
-
- ```
- Y = g(X) = sqrtSigma * X + mu
- X ~ MultivariateNormal(0, I_d)
- ```
-
- Implies:
-
- ```
- g^{-1}(Y) = inv(sqrtSigma) * (Y - mu)
- |Jacobian(g^{-1})(y)| = det(inv(sqrtSigma))
- Y ~ MultivariateNormal(mu, sqrtSigma) , i.e.,
- prob(Y=y) = |Jacobian(g^{-1})(y)| * prob(X=g^{-1}(y))
- = det(sqrtSigma)^(-d) *
- MultivariateNormal(inv(sqrtSigma) * (y - mu); 0, I_d)
- ```
-
-Example of why a `Bijector` needs to understand sample, batch, event
-partitioning:
-
-- Consider the `Exp` `Bijector` applied to a `Tensor` which has sample, batch,
- and event (S, B, E) shape semantics. Suppose the `Tensor`'s
- partitioned-shape is `(S=[4], B=[2], E=[3, 3])`.
-
- For `Exp`, the shape of the `Tensor` returned by `forward` and `inverse` is
- unchanged, i.e., `[4, 2, 3, 3]`. However the shape returned by
- `inverse_log_det_jacobian` is `[4, 2]` because the Jacobian is a reduction
- over the event dimensions.
-
-Subclass Requirements:
-
-- Typically subclasses implement `_forward` and one or both of:
- - `_inverse`, `_inverse_log_det_jacobian`,
- - `_inverse_and_inverse_log_det_jacobian`.
-
-- If the `Bijector`'s use is limited to `TransformedDistribution` (or friends
- like `QuantizedDistribution`) then depending on your use, you may not need
- to implement all of `_forward` and `_inverse` functions. Examples:
- 1. Sampling (e.g., `sample`) only requires `_forward`.
- 2. Probability functions (e.g., `prob`, `cdf`, `survival`) only require
- `_inverse` (and related).
- 3. Only calling probability functions on the output of `sample` means
- `_inverse` can be implemented as a cache lookup.
-
- See `Example Use` [above] which shows how these functions are used to
- transform a distribution. (Note: `_forward` could theoretically be
- implemented as a cache lookup but this would require controlling the
- underlying sample generation mechanism.)
-
-- If computation can be shared among `_inverse` and
- `_inverse_log_det_jacobian` it is preferable to implement
- `_inverse_and_inverse_log_det_jacobian`. This usually reduces
- graph-construction overhead because a `Distribution`'s implementation of
- `log_prob` will need to evaluate both the inverse Jacobian as well as the
- inverse function.
-
-- If an additional use case needs just `inverse` or just
-  `inverse_log_det_jacobian` then one may also wish to implement these
-  functions separately, to avoid computing the `inverse_log_det_jacobian` or
-  the `inverse`, respectively.
-
-- Subclasses should implement `_forward_event_shape`,
- `_forward_event_shape_tensor` (and `inverse` counterparts) if the
- transformation is shape-changing. By default the event-shape is assumed
- unchanged from input.
-
-Tips for implementing `_inverse` and `_inverse_log_det_jacobian`:
-
-- As case 3 [above] indicates, under some circumstances the inverse function
- can be implemented as a cache lookup.
-
-- The inverse `log o det o Jacobian` can be implemented as the negative of the
- forward `log o det o Jacobian`. This is useful if the `inverse` is
- implemented as a cache or the inverse Jacobian is computationally more
- expensive (e.g., `CholeskyOuterProduct` `Bijector`). The following
- demonstrates the suggested implementation.
-
- ```python
-  def _inverse_and_inverse_log_det_jacobian(self, y):
-     x = ...  # Implement inverse, possibly via cache.
-     return x, -self._forward_log_det_jacobian(x)  # Note negation.
-  ```
-
-  By overriding the `_inverse_and_inverse_log_det_jacobian` function we have
-  access to the inverse in one call.
-
- The correctness of this approach can be seen from the following claim.
-
- - Claim:
-
- Assume `Y=g(X)` is a bijection whose derivative exists and is nonzero
- for its domain, i.e., `d/dX g(X)!=0`. Then:
-
- ```none
- (log o det o jacobian o g^{-1})(Y) = -(log o det o jacobian o g)(X)
- ```
-
- - Proof:
-
- From the bijective, nonzero differentiability of `g`, the
- [inverse function theorem](
- https://en.wikipedia.org/wiki/Inverse_function_theorem)
- implies `g^{-1}` is differentiable in the image of `g`.
- Applying the chain rule to `y = g(x) = g(g^{-1}(y))` yields
- `I = g'(g^{-1}(y))*g^{-1}'(y)`.
-      The same theorem also implies `g^{-1}'` is non-singular therefore:
- `inv[ g'(g^{-1}(y)) ] = g^{-1}'(y)`.
- The claim follows from [properties of determinant](
-https://en.wikipedia.org/wiki/Determinant#Multiplicativity_and_matrix_groups).
-
-- If possible, prefer a direct implementation of the inverse Jacobian. This
- should have superior numerical stability and will often share subgraphs with
- the `_inverse` implementation.
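-
-A quick numeric check of the claim for `g(x) = exp(x)` with scalar events
-(a hedged sketch, independent of this class's API):
-
-```python
-import numpy as np
-
-x = 1.5
-y = np.exp(x)
-fldj = x           # log|g'(x)| = log(exp(x)) = x.
-ildj = -np.log(y)  # log|dg^{-1}/dy| = log(1 / y) = -log(y).
-assert np.isclose(ildj, -fldj)  # (log o det o Jacobian o g^{-1})(y) == -fldj(x).
-```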
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.__init__(event_ndims=None, graph_parents=None, is_constant_jacobian=False, validate_args=False, dtype=None, name=None)` {#Bijector.__init__}
-
-Constructs Bijector.
-
-A `Bijector` transforms random variables into new random variables.
-
-Examples:
-
-```python
-# Create the Y = g(X) = X transform which operates on vector events.
-identity = Identity(event_ndims=1)
-
-# Create the Y = g(X) = exp(X) transform which operates on matrices.
-exp = Exp(event_ndims=2)
-```
-
-See `Bijector` subclass docstring for more details and specific examples.
-
-##### Args:
-
-
-* <b>`event_ndims`</b>: number of dimensions associated with event coordinates.
-* <b>`graph_parents`</b>: Python list of graph prerequisites of this `Bijector`.
-* <b>`is_constant_jacobian`</b>: Python `bool` indicating that the Jacobian is not a
- function of the input.
-* <b>`validate_args`</b>: Python `bool`, default `False`. Whether to validate input
- with asserts. If `validate_args` is `False`, and the inputs are invalid,
- correct behavior is not guaranteed.
-* <b>`dtype`</b>: `tf.dtype` supported by this `Bijector`. `None` means dtype is not
- enforced.
-* <b>`name`</b>: The name to give Ops created by the initializer.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.dtype` {#Bijector.dtype}
-
-dtype of `Tensor`s transformable by this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.event_ndims` {#Bijector.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.forward(x, name='forward')` {#Bijector.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.forward_event_shape(input_shape)` {#Bijector.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Bijector.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Bijector.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.graph_parents` {#Bijector.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse(y, name='inverse')` {#Bijector.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Bijector.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-  A tuple of `Tensor`s: `(inverse(y), inverse_log_det_jacobian(y))`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse_event_shape(output_shape)` {#Bijector.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Bijector.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Bijector.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function,
-evaluated at `x = g^{-1}(y)`.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.is_constant_jacobian` {#Bijector.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.name` {#Bijector.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Bijector.validate_args` {#Bijector.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.arg_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.arg_scope.md
deleted file mode 100644
index 1b3ea08e64..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.arg_scope.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.contrib.framework.arg_scope(list_ops_or_scope, **kwargs)` {#arg_scope}
-
-Stores the default arguments for the given set of list_ops.
-
-For usage, see the examples at the top of the file, or the sketch below.
-
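-A minimal usage sketch (assuming a 4-D image `Tensor`; `tf.contrib.layers.conv2d`
-is decorated with `@add_arg_scope`):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import framework, layers
-
-inputs = tf.placeholder(tf.float32, [None, 28, 28, 3])
-with framework.arg_scope([layers.conv2d],
-                         padding='SAME', activation_fn=tf.nn.relu):
-  net = layers.conv2d(inputs, 64, [3, 3])  # Inherits padding/activation_fn.
-  net = layers.conv2d(net, 128, [3, 3], padding='VALID')  # Overrides padding.
-```
-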
-##### Args:
-
-
-* <b>`list_ops_or_scope`</b>: List or tuple of operations to set argument scope for or
- a dictionary containing the current scope. When list_ops_or_scope is a
- dict, kwargs must be empty. When list_ops_or_scope is a list or tuple,
-    then every op in it needs to be decorated with @add_arg_scope to work.
-* <b>`**kwargs`</b>: keyword=value that will define the defaults for each op in
- list_ops. All the ops need to accept the given set of arguments.
-
-##### Yields:
-
- the current_scope, which is a dictionary of {op: {arg: value}}
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if list_ops is not a list or a tuple.
-* <b>`ValueError`</b>: if any op in list_ops has not been decorated with @add_arg_scope.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.assert_scalar_int.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.assert_scalar_int.md
deleted file mode 100644
index 469566f7b8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.assert_scalar_int.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.framework.assert_scalar_int(tensor, name=None)` {#assert_scalar_int}
-
-Assert `tensor` is 0-D, of type `tf.int32` or `tf.int64`.
-
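-A minimal sketch:
-
-```python
-import tensorflow as tf
-
-step = tf.placeholder(tf.int32, shape=[])
-step = tf.contrib.framework.assert_scalar_int(step)  # Returned for chaining.
-```
-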
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` to test.
-* <b>`name`</b>: Name of the op and of the new `Tensor` if one is created.
-
-##### Returns:
-
- `tensor`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `tensor` is not 0-D, of type `tf.int32` or `tf.int64`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.deprecated_arg_values.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.deprecated_arg_values.md
deleted file mode 100644
index 285ea14f96..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.deprecated_arg_values.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.contrib.framework.deprecated_arg_values(date, instructions, **deprecated_kwargs)` {#deprecated_arg_values}
-
-Decorator for marking specific function argument values as deprecated.
-
-This decorator logs a deprecation warning whenever the decorated function is
-called with the deprecated argument values. It has the following format:
-
- Calling <function> (from <module>) with <arg>=<value> is deprecated and
- will be removed after <date>. Instructions for updating:
- <instructions>
-
-<function> will include the class name if it is a method.
-
-It also edits the docstring of the function: ' (deprecated arguments)' is
-appended to the first line of the docstring and a deprecation notice is
-prepended to the rest of the docstring.
-
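-A minimal sketch (the decorated function and its argument are hypothetical):
-
-```python
-from tensorflow.contrib.framework import deprecated_arg_values
-
-@deprecated_arg_values(
-    "2017-06-01",
-    "Pass an explicit scale; the old default scale=2.0 is going away.",
-    scale=2.0)
-def rescale(values, scale=2.0):
-  return [v * scale for v in values]
-
-rescale([1.0, 2.0], scale=2.0)  # Logs the deprecation warning.
-rescale([1.0, 2.0], scale=1.0)  # No warning: deprecated value not used.
-```
-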
-##### Args:
-
-
-* <b>`date`</b>: String. The date the function is scheduled to be removed. Must be
- ISO 8601 (YYYY-MM-DD).
-* <b>`instructions`</b>: String. Instructions on how to update code using the
- deprecated function.
-* <b>`**deprecated_kwargs`</b>: The deprecated argument values.
-
-##### Returns:
-
- Decorated function or method.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If date is not in ISO 8601 format, or instructions are empty.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.get_unique_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.get_unique_variable.md
deleted file mode 100644
index 39ef6a1453..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.get_unique_variable.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.framework.get_unique_variable(var_op_name)` {#get_unique_variable}
-
-Gets the variable uniquely identified by that var_op_name.
-
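-A minimal sketch (the scope and variable names are hypothetical):
-
-```python
-import tensorflow as tf
-
-with tf.variable_scope("my_scope"):
-  tf.get_variable("my_var", shape=[2])
-
-v = tf.contrib.framework.get_unique_variable("my_scope/my_var")
-```
-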
-##### Args:
-
-
-* <b>`var_op_name`</b>: the full name of the variable op, including the scope.
-
-##### Returns:
-
- a tensorflow variable.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if no variable uniquely identified by the name exists.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.get_variables_to_restore.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.get_variables_to_restore.md
deleted file mode 100644
index c9cde43c39..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.get_variables_to_restore.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.framework.get_variables_to_restore(include=None, exclude=None)` {#get_variables_to_restore}
-
-Gets the list of the variables to restore.
-
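-For example (a hedged sketch; the scope names are hypothetical):
-
-```python
-import tensorflow as tf
-
-with tf.variable_scope("v1"):
-  tf.get_variable("weights", shape=[10])
-with tf.variable_scope("v2"):
-  tf.get_variable("weights", shape=[10])
-
-# All variables under "v1", minus anything under "v2".
-to_restore = tf.contrib.framework.get_variables_to_restore(
-    include=["v1"], exclude=["v2"])
-```
-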
-##### Args:
-
-
-* <b>`include`</b>: an optional list/tuple of scope strings for filtering which
-    variables from the VARIABLES collection to include. If `None`, all
-    variables are included.
-* <b>`exclude`</b>: an optional list/tuple of scope strings for filtering which
-    variables from the VARIABLES collection to exclude. If `None`, no
-    variables are excluded.
-
-##### Returns:
-
- a list of variables to restore.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: include or exclude is provided but is not a list or a tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.load_checkpoint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.load_checkpoint.md
deleted file mode 100644
index d8a1c94f6f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.load_checkpoint.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.framework.load_checkpoint(filepattern)` {#load_checkpoint}
-
-Returns CheckpointReader for latest checkpoint.
-
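-A minimal sketch (the checkpoint directory is hypothetical):
-
-```python
-import tensorflow as tf
-
-reader = tf.contrib.framework.load_checkpoint("/tmp/my_model")
-for name, shape in reader.get_variable_to_shape_map().items():
-  print(name, shape)  # Variable names and shapes stored in the checkpoint.
-```
-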
-##### Args:
-
-
-* <b>`filepattern`</b>: Directory with checkpoints file or path to checkpoint.
-
-##### Returns:
-
- `CheckpointReader` object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if checkpoint_dir doesn't have 'checkpoint' file or checkpoints.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.with_same_shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.with_same_shape.md
deleted file mode 100644
index a0d85c425e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.framework.with_same_shape.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.framework.with_same_shape(expected_tensor, tensor)` {#with_same_shape}
-
-Assert tensors are the same shape, from the same graph.
-
-##### Args:
-
-
-* <b>`expected_tensor`</b>: Tensor with expected shape.
-* <b>`tensor`</b>: Tensor of actual values.
-
-##### Returns:
-
-  The original `tensor` argument, possibly with assert ops added.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.SubGraphView.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.SubGraphView.md
deleted file mode 100644
index 07338f7185..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.SubGraphView.md
+++ /dev/null
@@ -1,472 +0,0 @@
-A subgraph view on an existing `tf.Graph`.
-
-An instance of this class is a subgraph view on an existing `tf.Graph`.
-"subgraph" means that it can represent part of the whole `tf.Graph`.
-"view" means that it only provides a passive observation and do not to act
-on the `tf.Graph`. Note that in this documentation, the term "subgraph" is
-often used as substitute to "subgraph view".
-
-A subgraph contains:
-
-* a list of input tensors, accessible via the `inputs` property.
-* a list of output tensors, accessible via the `outputs` property.
-* and the operations in between, accessible via the "ops" property.
-
-A subgraph can be seen as a function F(i0, i1, ...) -> o0, o1, ... It is a
-function which takes as input some input tensors and returns as output some
-output tensors. The computation that the function performs is encoded in the
-operations of the subgraph.
-
-The tensors (input or output) can be of two kinds:
-
-- connected: a connected tensor connects to at least one operation contained
-in the subgraph. One example is a subgraph representing a single operation
-and its inputs and outputs: all the input and output tensors of the op
-are "connected".
-- passthrough: a passthrough tensor does not connect to any operation
-contained in the subgraph. One example is a subgraph representing a
-single tensor: this tensor is passthrough. By default a passthrough tensor is
-present both in the input and output tensors of the subgraph. It can however
-be remapped to appear as an input (or an output) only.
-
-The input and output tensors can be remapped. For instance, some input tensor
-can be omitted: a subgraph representing an operation with two
-inputs can be remapped to only take one input. Note that this does not change
-the underlying `tf.Graph` at all (remember, it is a view). It means that
-the other input is being ignored, or is being treated as "given".
-The analogy with functions can be extended like this: F(x,y) is the original
-function. Remapping the inputs from [x, y] to just [x] means that the subgraph
-now represents the function F_y(x) (y is "given").
-
-The output tensors can also be remapped. For instance, some output tensor can
-be omitted, and other output tensors can be duplicated. As mentioned
-before, this does not change the underlying `tf.Graph` at all.
-The analogy with functions can be extended like this: F(...)->x,y is the
-original function. Remapping the outputs from [x, y] to just [y,y] means that
-the subgraph now represents the function M(F(...)) where M is the function
-M(a,b)->b,b.
-
-It is useful to describe a few other kinds of tensors:
-
-* internal: an internal tensor is a tensor connecting operations contained
-  in the subgraph. For example, in the subgraph representing the two
-  operations A and B connected sequentially, -> A -> B ->, the middle arrow
-  is an internal tensor.
-* actual input: an input tensor of the subgraph, regardless of whether it is
- listed in "inputs" or not (masked-out).
-* actual output: an output tensor of the subgraph, regardless of whether it is
- listed in "outputs" or not (masked-out).
-* hidden input: an actual input which has been masked-out using an
-  input remapping. In other words, a hidden input is a non-internal tensor
-  not listed as an input tensor and one of whose consumers belongs to
-  the subgraph.
-* hidden output: an actual output which has been masked-out using an output
-  remapping. In other words, a hidden output is a non-internal tensor
- not listed as an output and one of whose generating operations belongs to
- the subgraph.
-
-Here are some useful guarantees about an instance of a SubGraphView:
-
-* the input (or output) tensors are not internal.
-* the input (or output) tensors are either "connected" or "passthrough".
-* the passthrough tensors are not connected to any of the operation of
-the subgraph.
-
-Note that there is no guarantee that an operation in a subgraph contributes
-at all to its inputs or outputs. For instance, remapping both the inputs and
-outputs to empty lists will produce a subgraph which still contains all the
-original operations. However, the remove_unused_ops function can be used to
-make a new subgraph view whose operations are connected to at least one of
-the input or output tensors.
-
-An instance of this class is meant to be a lightweight object which is not
-modified in-place by the user. Rather, the user can create new modified
-instances of a given subgraph. In that sense, the class SubGraphView is meant
-to be used like an immutable python object.
-
-A common problem when using views is that they can get out-of-sync with the
-data they observe (in this case, a `tf.Graph`). It is up to the user to
-ensure that this doesn't happen. To stay on the safe side, it is recommended
-that the lifetime of subgraph views be kept very short. One way to achieve
-this is to use subgraphs within a "with make_sgv(...) as sgv:" Python context.
-
-To alleviate the out-of-sync problem, some functions are granted the right to
-modify subgraphs in place. This is typically the case for graph manipulation
-functions which, given some subgraphs as arguments, can modify the underlying
-`tf.Graph`. Since this modification is likely to render the subgraph view
-invalid, those functions can modify the argument in place to reflect the
-change. For instance, calling the function swap_inputs(sgv0, sgv1) will modify
-sgv0 and sgv1 in place to reflect the fact that their inputs have now been
-swapped.
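-
-As a minimal sketch (assuming `op_a` and `op_b` are existing, sequentially
-connected `tf.Operation`s in the default graph):
-
-```python
-from tensorflow.contrib import graph_editor as ge
-
-sgv = ge.sgv(op_a, op_b)       # A view over both ops.
-sgv2 = sgv.remap_inputs([0])   # New view; the other inputs are now "given".
-```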
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__bool__()` {#SubGraphView.__bool__}
-
-Allows for implicit boolean conversion.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__copy__()` {#SubGraphView.__copy__}
-
-Create a copy of this subgraph.
-
-Note that this class is a "view", copying it only create another view and
-does not copy the underlying part of the `tf.Graph`.
-
-##### Returns:
-
- A new identical instance of the original subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__enter__()` {#SubGraphView.__enter__}
-
-Allow a Python context to minimize the lifetime of a subgraph view.
-
-A subgraph view is meant to be a lightweight and transient object. A short
-lifetime will alleviate the "out-of-sync" issue mentioned earlier. For that
-reason, a SubGraphView instance can be used within a Python context. For
-example:
-
-```python
-from tensorflow.contrib import graph_editor as ge
-with ge.make_sgv(...) as sgv:
-  print(sgv)
-```
-
-##### Returns:
-
- Itself.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__exit__(exc_type, exc_value, traceback)` {#SubGraphView.__exit__}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__init__(inside_ops=(), passthrough_ts=())` {#SubGraphView.__init__}
-
-Create a subgraph containing the given ops and the "passthrough" tensors.
-
-##### Args:
-
-
-* <b>`inside_ops`</b>: an object convertible to a list of `tf.Operation`. This list
- defines all the operations in the subgraph.
-* <b>`passthrough_ts`</b>: an object convertible to a list of `tf.Tensor`. This list
- define all the "passthrough" tensors. A passthrough tensor is a tensor
- which goes directly from the input of the subgraph to it output, without
- any intermediate operations. All the non passthrough tensors are
- silently ignored.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if inside_ops cannot be converted to a list of `tf.Operation`
- or if `passthrough_ts` cannot be converted to a list of `tf.Tensor`.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__nonzero__()` {#SubGraphView.__nonzero__}
-
-Allows for implicit boolean conversion.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.__str__()` {#SubGraphView.__str__}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.connected_inputs` {#SubGraphView.connected_inputs}
-
-The connected input tensors of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.connected_outputs` {#SubGraphView.connected_outputs}
-
-The connected output tensors of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.consumers()` {#SubGraphView.consumers}
-
-Return a list of all the consumers of this subgraph view.
-
-A consumer of a subgraph view is a tf.Operation which is a consumer
-of one of the output tensors and is not in the subgraph.
-
-##### Returns:
-
- A list of `tf.Operation` which are the consumers of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.copy()` {#SubGraphView.copy}
-
-Return a copy of itself.
-
-Note that this class is a "view", copying it only create another view and
-does not copy the underlying part of the tf.Graph.
-
-##### Returns:
-
- A new instance identical to the original one.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.find_op_by_name(op_name)` {#SubGraphView.find_op_by_name}
-
-Return the op named op_name.
-
-##### Args:
-
-
-* <b>`op_name`</b>: the name to search for
-
-##### Returns:
-
- The op named op_name.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the op_name could not be found.
-* <b>`AssertionError`</b>: if the name was found multiple times.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.graph` {#SubGraphView.graph}
-
-The underlying `tf.Graph`.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.input_index(t)` {#SubGraphView.input_index}
-
-Find the input index corresponding to the given input tensor t.
-
-##### Args:
-
-
-* <b>`t`</b>: the input tensor of this subgraph view.
-
-##### Returns:
-
- The index in the self.inputs list.
-
-##### Raises:
-
-
-* <b>`Error`</b>: if t is not an input tensor.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.inputs` {#SubGraphView.inputs}
-
-The input tensors of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.is_passthrough(t)` {#SubGraphView.is_passthrough}
-
-Check whether a tensor is passthrough.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.op(op_id)` {#SubGraphView.op}
-
-Get an op by its index.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.ops` {#SubGraphView.ops}
-
-The operations in this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.output_index(t)` {#SubGraphView.output_index}
-
-Find the output index corresponding to the given output tensor t.
-
-##### Args:
-
-
-* <b>`t`</b>: the output tensor of this subgraph view.
-
-##### Returns:
-
- The index in the self.outputs list.
-
-##### Raises:
-
-
-* <b>`Error`</b>: if t is not an output tensor.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.outputs` {#SubGraphView.outputs}
-
-The output tensors of this subgraph view.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.passthroughs` {#SubGraphView.passthroughs}
-
-The passthrough tensors, going straight from input to output.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap(new_input_indices=None, new_output_indices=None)` {#SubGraphView.remap}
-
-Remap the inputs and outputs of the subgraph.
-
-Note that this is only modifying the view: the underlying tf.Graph is not
-affected.
-
-##### Args:
-
-
-* <b>`new_input_indices`</b>: an iterable of integers or tf.Tensors
- representing a mapping between the old inputs and the new ones.
-    Integers must be non-negative and smaller than the number of old inputs.
- tf.Tensors must belong to the old list of inputs.
- This mapping can be under-complete and must be without repetitions.
-* <b>`new_output_indices`</b>: an iterable of integers or tf.Tensors
- representing a mapping between the old outputs and the new ones.
-    Integers must be non-negative and smaller than the number of old outputs.
- tf.Tensors must belong to the old list of outputs.
- This mapping can be under-complete and can have repetitions.
-
-##### Returns:
-
- A new modified instance of the original subgraph view with remapped
- inputs and outputs.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_default(remove_input_map=True, remove_output_map=True)` {#SubGraphView.remap_default}
-
-Remap the inputs and/or outputs to the default mapping.
-
-##### Args:
-
-
-* <b>`remove_input_map`</b>: if True the input map is reset to the default one.
-* <b>`remove_output_map`</b>: if True the output map is reset to the default one.
-
-##### Returns:
-
- A new modified instance of the original subgraph view with its
- input and/or output mapping reset to the default one.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_inputs(new_input_indices)` {#SubGraphView.remap_inputs}
-
-Remap the inputs of the subgraph.
-
-If the inputs of the original subgraph are [t0, t1, t2], remapping to [2,0]
-will create a new instance whose inputs is [t2, t0].
-
-Note that this is only modifying the view: the underlying `tf.Graph` is not
-affected.
-
-##### Args:
-
-
-* <b>`new_input_indices`</b>: an iterable of integers or tf.Tensors
- representing a mapping between the old inputs and the new ones.
-    Integers must be non-negative and smaller than the number of old inputs.
- tf.Tensors must belong to the old list of inputs.
- This mapping can be under-complete and must be without repetitions.
-
-##### Returns:
-
- A new modified instance of the original subgraph view with remapped
- inputs.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_outputs(new_output_indices)` {#SubGraphView.remap_outputs}
-
-Remap the outputs of the subgraph.
-
-If the outputs of the original subgraph are [t0, t1, t2], remapping to
-[1,1,0] will create a new instance whose outputs is [t1, t1, t0].
-
-Note that this is only modifying the view: the underlying tf.Graph is not
-affected.
-
-##### Args:
-
-
-* <b>`new_output_indices`</b>: an iterable of integers or tf.Tensors
- representing a mapping between the old outputs and the new ones.
-    Integers must be non-negative and smaller than the number of old outputs.
- tf.Tensors must belong to the old list of outputs.
- This mapping can be under-complete and can have repetitions.
-
-##### Returns:
-
- A new modified instance of the original subgraph view with remapped
- outputs.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_outputs_make_unique()` {#SubGraphView.remap_outputs_make_unique}
-
-Remap the outputs so that each tensor appears only once.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remap_outputs_to_consumers()` {#SubGraphView.remap_outputs_to_consumers}
-
-Remap the outputs to match the number of consumers.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.SubGraphView.remove_unused_ops(control_inputs=True)` {#SubGraphView.remove_unused_ops}
-
-Remove unused ops.
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: if True, control inputs are used to detect used ops.
-
-##### Returns:
-
- A new subgraph view which only contains used operations.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.TransformerInfo.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.TransformerInfo.md
deleted file mode 100644
index 34489b5305..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.TransformerInfo.md
+++ /dev/null
@@ -1,67 +0,0 @@
-"Contains information about the result of a transform operation.
-- - -
-
-#### `tf.contrib.graph_editor.TransformerInfo.__init__(info)` {#TransformerInfo.__init__}
-
-Constructor.
-
-##### Args:
-
-
-* <b>`info`</b>: an instance of Transformer._TmpInfo containing various internal
- information about the transform operation.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.TransformerInfo.__str__()` {#TransformerInfo.__str__}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.TransformerInfo.original(transformed, missing_fn=None)` {#TransformerInfo.original}
-
-Return the original op/tensor corresponding to the transformed one.
-
-Note that the output of this function mimics the hierarchy
-of its input argument `transformed`.
-Given an iterable, it returns a list. Given an operation or a tensor,
-it will return an operation or a tensor.
-
-##### Args:
-
-
-* <b>`transformed`</b>: the transformed tensor/operation.
-* <b>`missing_fn`</b>: function handling the case where the counterpart
- cannot be found. By default, None is returned.
-
-##### Returns:
-
- the original tensor/operation (or None if no match is found).
-
-
-- - -
-
-#### `tf.contrib.graph_editor.TransformerInfo.transformed(original, missing_fn=None)` {#TransformerInfo.transformed}
-
-Return the transformed op/tensor corresponding to the original one.
-
-Note that the output of this function mimics the hierarchy
-of its input argument `original`.
-Given an iterable, it returns a list. Given an operation or a tensor,
-it will return an operation or a tensor.
-
-##### Args:
-
-
-* <b>`original`</b>: the original tensor/operation.
-* <b>`missing_fn`</b>: function handling the case where the counterpart
- cannot be found. By default, None is returned.
-
-##### Returns:
-
- the transformed tensor/operation (or None if no match is found).
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.copy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.copy.md
deleted file mode 100644
index 008fe66686..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.copy.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.graph_editor.copy(sgv, dst_graph=None, dst_scope='', src_scope='', reuse_dst_scope=False)` {#copy}
-
-Copy a subgraph.
-
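-A minimal sketch (assuming `sgv` is an existing subgraph view and `t` one of
-its tensors):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-copied_sgv, info = ge.copy(sgv, dst_graph=tf.Graph(), dst_scope="clone")
-t_copy = info.transformed(t)  # Counterpart of `t` in the copied subgraph.
-```
-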
-##### Args:
-
-
-* <b>`sgv`</b>: the source subgraph-view. This argument is converted to a subgraph
-    using the same rules as the function subgraph.make_view.
-* <b>`dst_graph`</b>: the destination graph.
-* <b>`dst_scope`</b>: the destination scope.
-* <b>`src_scope`</b>: the source scope.
-* <b>`reuse_dst_scope`</b>: if True the dst_scope is re-used if it already exists.
-    Otherwise, a unique scope name is formed from the given one by appending
-    an underscore followed by a digit (default).
-
-##### Returns:
-
- A tuple `(sgv, info)` where:
- `sgv` is the transformed subgraph view;
- `info` is an instance of TransformerInfo containing
- information about the transform, including mapping between
- original and transformed tensors and operations.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `dst_graph` is not a `tf.Graph`.
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-    the same rules as the function subgraph.make_view.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.filter_ts_from_regex.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.filter_ts_from_regex.md
deleted file mode 100644
index 469a458b4b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.filter_ts_from_regex.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.contrib.graph_editor.filter_ts_from_regex(ops, regex)` {#filter_ts_from_regex}
-
-Get all the tensors linked to ops that match the given regex.
-
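-For example, to collect every tensor in the "foo" scope of the default graph
-(a hedged sketch):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-ts = ge.filter_ts_from_regex(tf.get_default_graph().get_operations(),
-                             r"^foo(/.*)?:\d+$")
-```
-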
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of tf.Operation.
-* <b>`regex`</b>: a regular expression matching the tensors' name.
- For example, "^foo(/.*)?:\d+$" will match all the tensors in the "foo"
- scope.
-
-##### Returns:
-
- A list of tf.Tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of tf.Operation.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.keep_t_if_possible_handler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.keep_t_if_possible_handler.md
deleted file mode 100644
index 97d3977124..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.graph_editor.keep_t_if_possible_handler.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.graph_editor.keep_t_if_possible_handler(info, t)` {#keep_t_if_possible_handler}
-
-Transform a tensor into itself (identity) if possible.
-
-This handler transforms a tensor into itself if the source and destination
-graphs are the same. Otherwise it will create a placeholder.
-This handler is typically used to transform hidden input tensors.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`t`</b>: tensor whose input must be transformed into a placeholder.
-
-##### Returns:
-
-  The tensor generated by the newly created placeholder.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.infer_real_valued_columns.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.infer_real_valued_columns.md
deleted file mode 100644
index 92c0e584f2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.infer_real_valued_columns.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.layers.infer_real_valued_columns(features)` {#infer_real_valued_columns}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md
deleted file mode 100644
index a3a7a2989f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.optimize_loss.md
+++ /dev/null
@@ -1,83 +0,0 @@
-### `tf.contrib.layers.optimize_loss(loss, global_step, learning_rate, optimizer, gradient_noise_scale=None, gradient_multipliers=None, clip_gradients=None, learning_rate_decay_fn=None, update_ops=None, variables=None, name=None, summaries=None, colocate_gradients_with_ops=False)` {#optimize_loss}
-
-Given loss and parameters for optimizer, returns a training op.
-
-Various ways of passing optimizers include:
-
-- string, name of the optimizer like 'SGD', 'Adam', see OPTIMIZER_CLS_NAMES
- for full list. E.g. `optimize_loss(..., optimizer='Adam')`.
-- function, takes learning rate `Tensor` as argument and must return
- `Optimizer` instance. E.g. `optimize_loss(...,
- optimizer=lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5))`.
- Alternatively, if `learning_rate` is `None`, the function takes no
- arguments. E.g. `optimize_loss(..., learning_rate=None,
- optimizer=lambda: tf.train.MomentumOptimizer(0.5, momentum=0.5))`.
-- class, subclass of `Optimizer` that takes only one required argument -
- learning rate, such as AdamOptimizer, AdagradOptimizer.
- E.g. `optimize_loss(..., optimizer=tf.train.AdagradOptimizer)`.
-- object, instance of subclass of `Optimizer`.
-  E.g., `optimize_loss(..., optimizer=tf.train.AdagradOptimizer(0.5))`.
-
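-A minimal usage sketch (assuming `loss` is an existing scalar `Tensor`):
-
-```python
-train_op = tf.contrib.layers.optimize_loss(
-    loss=loss,
-    global_step=tf.contrib.framework.get_global_step(),
-    learning_rate=0.01,
-    optimizer='SGD')
-```
-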
-##### Args:
-
-
-* <b>`loss`</b>: Scalar `Tensor`.
-* <b>`global_step`</b>: Scalar int `Tensor`, step counter for each update. If not
- supplied, it will be fetched from the default graph (see
- `tf.contrib.framework.get_global_step` for details). If it's
- not been created, no step will be incremented with each weight
- update. `learning_rate_decay_fn` requires `global_step`.
-* <b>`learning_rate`</b>: float or `Tensor`, magnitude of update per each training
- step. Can be `None`.
-* <b>`optimizer`</b>: string, class or optimizer instance, used as trainer.
- string should be name of optimizer, like 'SGD',
- 'Adam', 'Adagrad'. Full list in OPTIMIZER_CLS_NAMES constant.
- class should be sub-class of `tf.Optimizer` that implements
- `compute_gradients` and `apply_gradients` functions.
- optimizer instance should be instantiation of `tf.Optimizer`
- sub-class and have `compute_gradients` and `apply_gradients`
- functions.
-* <b>`gradient_noise_scale`</b>: float or None, adds 0-mean normal noise scaled by this
- value.
-* <b>`gradient_multipliers`</b>: dict of variables or variable names to floats.
- If present, gradients for specified
- variables will be multiplied by given constant.
-* <b>`clip_gradients`</b>: float, callable or `None`. If a float is provided, global
-  clipping is applied to prevent the norm of the gradients from exceeding this
-  value. Alternatively, a callable can be provided, e.g., adaptive_clipping.
- This callable takes a `list` of `(gradients, variables)` `tuple`s and
- returns the same thing with the gradients modified.
-* <b>`learning_rate_decay_fn`</b>: function, takes `learning_rate` and `global_step`
- `Tensor`s, returns `Tensor`.
- Can be used to implement any learning rate decay
- functions.
- For example: `tf.train.exponential_decay`.
- Ignored if `learning_rate` is not supplied.
-* <b>`update_ops`</b>: list of update `Operation`s to execute at each step. If `None`,
- uses elements of UPDATE_OPS collection. The order of execution
- between `update_ops` and `loss` is non-deterministic.
-* <b>`variables`</b>: list of variables to optimize or
- `None` to use all trainable variables.
-* <b>`name`</b>: The name for this operation, used to scope operations and summaries.
-* <b>`summaries`</b>: List of internal quantities to visualize on tensorboard. If not
-  set, only the loss and the learning rate will be reported. The
- complete list is in OPTIMIZER_SUMMARIES.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with the
- corresponding op.
-
-##### Returns:
-
- Training op.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if:
- * `loss` is an invalid type or shape.
- * `global_step` is an invalid type or shape.
- * `learning_rate` is an invalid type or value.
- * `optimizer` is wrong type.
- * `clip_gradients` is not float or callable.
- * `learning_rate` and `learning_rate_decay_fn` are supplied, but no
- `global_step` is available.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.repeat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.repeat.md
deleted file mode 100644
index 47672d30bd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.repeat.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.contrib.layers.repeat(inputs, repetitions, layer, *args, **kwargs)` {#repeat}
-
-Applies the same layer with the same arguments repeatedly.
-
-```python
- y = repeat(x, 3, conv2d, 64, [3, 3], scope='conv1')
- # It is equivalent to:
-
- x = conv2d(x, 64, [3, 3], scope='conv1/conv1_1')
- x = conv2d(x, 64, [3, 3], scope='conv1/conv1_2')
- y = conv2d(x, 64, [3, 3], scope='conv1/conv1_3')
-```
-
-If the `scope` argument is not given in `kwargs`, it is set to
-`layer.__name__`, or `layer.func.__name__` (for `functools.partial`
-objects). If neither `__name__` nor `func.__name__` is available, the
-layers are called with `scope='stack'`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` suitable for layer.
-* <b>`repetitions`</b>: Int, number of repetitions.
-* <b>`layer`</b>: A layer with arguments `(inputs, *args, **kwargs)`
-* <b>`*args`</b>: Extra args for the layer.
-* <b>`**kwargs`</b>: Extra kwargs for the layer.
-
-##### Returns:
-
-  A tensor: the result of applying the layer `repetitions` times.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the op is unknown or wrong.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.safe_embedding_lookup_sparse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.safe_embedding_lookup_sparse.md
deleted file mode 100644
index faa0bb0351..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.safe_embedding_lookup_sparse.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.contrib.layers.safe_embedding_lookup_sparse(embedding_weights, sparse_ids, sparse_weights=None, combiner=None, default_id=None, name=None, partition_strategy='div', max_norm=None)` {#safe_embedding_lookup_sparse}
-
-Lookup embedding results, accounting for invalid IDs and empty features.
-
-The partitioned embeddings in `embedding_weights` must all be the same shape
-except for the first dimension. The first dimension is allowed to vary as the
-vocabulary size is not necessarily a multiple of `P`. `embedding_weights`
-may be a `PartitionedVariable` as returned by using `tf.get_variable()` with a
-partitioner.
-
-Invalid IDs (< 0) are pruned from input IDs and weights, as well as any IDs
-with non-positive weight. For an entry with no features, the embedding vector
-for `default_id` is returned, or the 0-vector if `default_id` is not supplied.
-
-The ids and weights may be multi-dimensional. Embeddings are always aggregated
-along the last dimension.
-
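-A minimal sketch (the vocabulary size, embedding dimension, partition count,
-and ids are hypothetical):
-
-```python
-import tensorflow as tf
-
-embedding_weights = tf.get_variable(
-    "embeddings", [1000, 16],
-    partitioner=tf.fixed_size_partitioner(3))
-sparse_ids = tf.SparseTensor(
-    indices=[[0, 0], [1, 0]],
-    values=tf.constant([3, -1], dtype=tf.int64),  # -1 is pruned as invalid.
-    dense_shape=[2, 2])
-embedded = tf.contrib.layers.safe_embedding_lookup_sparse(
-    embedding_weights, sparse_ids, combiner="mean", default_id=0)
-```
-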
-##### Args:
-
-
-* <b>`embedding_weights`</b>: A list of `P` float tensors or values representing
- partitioned embedding tensors. Alternatively, a `PartitionedVariable`,
- created by partitioning along dimension 0. The total unpartitioned
- shape should be `[e_0, e_1, ..., e_m]`, where `e_0` represents the
- vocab size and `e_1, ..., e_m` are the embedding dimensions.
-* <b>`sparse_ids`</b>: `SparseTensor` of shape `[d_0, d_1, ..., d_n]` containing the
- ids. `d_0` is typically batch size.
-* <b>`sparse_weights`</b>: `SparseTensor` of same shape as `sparse_ids`, containing
- float weights corresponding to `sparse_ids`, or `None` if all weights
-    are assumed to be 1.0.
-* <b>`combiner`</b>: A string specifying how to combine embedding results for each
- entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean"
- the default.
-* <b>`default_id`</b>: The id to use for an entry with no features.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy.
- Currently `"div"` and `"mod"` are supported. Default is `"div"`.
-* <b>`max_norm`</b>: If not None, all embeddings are l2-normalized to max_norm before
- combining.
-
-
-##### Returns:
-
- Dense tensor of shape `[d_0, d_1, ..., d_{n-1}, e_1, ..., e_m]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `embedding_weights` is empty.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.shared_embedding_columns.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.shared_embedding_columns.md
deleted file mode 100644
index 29611d833a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.shared_embedding_columns.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.contrib.layers.shared_embedding_columns(sparse_id_columns, dimension, combiner='mean', shared_embedding_name=None, initializer=None, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None)` {#shared_embedding_columns}
-
-Creates a list of `_EmbeddingColumn` sharing the same embedding.
-
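-A minimal sketch (the column names and bucket sizes are hypothetical):
-
-```python
-import tensorflow as tf
-
-watched = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "watched_video_id", hash_bucket_size=1000)
-impression = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "impression_video_id", hash_bucket_size=1000)
-watched_emb, impression_emb = tf.contrib.layers.shared_embedding_columns(
-    [watched, impression], dimension=16)
-```
-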
-##### Args:
-
-
-* <b>`sparse_id_columns`</b>: An iterable of `_SparseColumn`, such as those created by
- `sparse_column_with_*` or crossed_column functions. Note that `combiner`
- defined in each sparse_id_column is ignored.
-* <b>`dimension`</b>: An integer specifying dimension of the embedding.
-* <b>`combiner`</b>: A string specifying how to reduce if there are multiple entries
- in a single row. Currently "mean", "sqrtn" and "sum" are supported, with
- "mean" the default. "sqrtn" often achieves good accuracy, in particular
- with bag-of-words columns. Each of these can be thought of as an
- example-level normalization on the column:
- * "sum": do not normalize
- * "mean": do l1 normalization
- * "sqrtn": do l2 normalization
- For more information: `tf.nn.embedding_lookup_sparse`.
-* <b>`shared_embedding_name`</b>: (Optional). A string specifying the name of shared
- embedding weights. This will be needed if you want to reference the shared
- embedding separately from the generated `_EmbeddingColumn`.
-* <b>`initializer`</b>: A variable initializer function to be used in embedding
- variable initialization. If not specified, defaults to
- `tf.truncated_normal_initializer` with mean 0.0 and standard deviation
- 1/sqrt(sparse_id_columns[0].length).
-* <b>`ckpt_to_load_from`</b>: (Optional). String representing checkpoint name/pattern
- to restore the column weights. Required if `tensor_name_in_ckpt` is not
- None.
-* <b>`tensor_name_in_ckpt`</b>: (Optional). Name of the `Tensor` in the provided
- checkpoint from which to restore the column weights. Required if
- `ckpt_to_load_from` is not None.
-* <b>`max_norm`</b>: (Optional). If not None, embedding values are l2-normalized to
- the value of max_norm.
-
-##### Returns:
-
- A tuple of `_EmbeddingColumn` with shared embedding space.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if sparse_id_columns is empty, or its elements are not
- compatible with each other.
-* <b>`TypeError`</b>: if `sparse_id_columns` is not a sequence or is a string. If at
- least one element of `sparse_id_columns` is not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.weighted_sum_from_feature_columns.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.weighted_sum_from_feature_columns.md
deleted file mode 100644
index 39ae2754f9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.layers.weighted_sum_from_feature_columns.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.contrib.layers.weighted_sum_from_feature_columns(columns_to_tensors, feature_columns, num_outputs, weight_collections=None, trainable=True, scope=None)` {#weighted_sum_from_feature_columns}
-
-A tf.contrib.layers-style linear prediction builder based on FeatureColumns.
-
-Generally, a single example in the training data is described with feature
-columns. This function generates a weighted sum for each of `num_outputs`.
-The weighted sum corresponds to logits in classification problems and to the
-prediction itself in linear regression problems.
-
-Example:
-
- ```
- # Building model for training
- feature_columns = (
- real_valued_column("my_feature1"),
- ...
- )
- columns_to_tensor = tf.parse_example(...)
- logits = weighted_sum_from_feature_columns(
- columns_to_tensors=columns_to_tensor,
- feature_columns=feature_columns,
- num_outputs=1)
- loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels,
- logits=logits)
- ```
-
-##### Args:
-
-
-* <b>`columns_to_tensors`</b>: A mapping from feature column to tensors. A string
- key denotes a base (untransformed) feature. A FeatureColumn key indicates
- that the feature has already been transformed by the input pipeline. For
- example, `inflow` may have handled transformations.
-* <b>`feature_columns`</b>: A set containing all the feature columns. All items in the
- set should be instances of classes derived from FeatureColumn.
-* <b>`num_outputs`</b>: An integer specifying number of outputs. Default value is 1.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A tuple containing:
-
- * A Tensor which represents predictions of a linear model.
- * A dictionary which maps feature_column to corresponding Variable.
- * A Variable which is used for bias.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if FeatureColumn cannot be used for linear predictions.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.BaseEstimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.BaseEstimator.md
deleted file mode 100644
index 740be32d9b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.BaseEstimator.md
+++ /dev/null
@@ -1,305 +0,0 @@
-Abstract BaseEstimator class to train and evaluate TensorFlow models.
-
-Users should not instantiate or subclass this class. Instead, use `Estimator`.
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.__init__(model_dir=None, config=None)` {#BaseEstimator.__init__}
-
-Initializes a BaseEstimator instance.
-
-##### Args:
-
-
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
- also be used to load checkpoints from the directory into an estimator to
- continue training a previously saved model.
-* <b>`config`</b>: A RunConfig instance.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.__repr__()` {#BaseEstimator.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.config` {#BaseEstimator.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.evaluate(*args, **kwargs)` {#BaseEstimator.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
- `input_fn` or `feed_fn` is provided.
- Or if `metrics` is neither `None` nor a `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.export(*args, **kwargs)` {#BaseEstimator.export}
-
-Exports inference graph into given dir. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
-Instructions for updating:
-The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will become required args, and use_deprecated_input_fn will default to False and be removed altogether.
-
-##### Args:
-
-
-* <b>`export_dir`</b>: A string containing a directory to write the exported graph
- and checkpoints.
-* <b>`input_fn`</b>: If `use_deprecated_input_fn` is true, then a function that given
- `Tensor` of `Example` strings, parses it into features that are then
- passed to the model. Otherwise, a function that takes no argument and
- returns a tuple of (features, labels), where features is a dict of
- string key to `Tensor` and labels is a `Tensor` that's currently not
- used (and so can be `None`).
-* <b>`input_feature_key`</b>: Only used if `use_deprecated_input_fn` is false. String
- key into the features dict returned by `input_fn` that corresponds to
- the raw `Example` strings `Tensor` that the exported model will take as
- input. Can only be `None` if you're using a custom `signature_fn` that
- does not use the first arg (examples).
-* <b>`use_deprecated_input_fn`</b>: Determines the signature format of `input_fn`.
-* <b>`signature_fn`</b>: Function that returns a default signature and a named
- signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
- for features and `Tensor` or `dict` of `Tensor`s for predictions.
-* <b>`prediction_key`</b>: The key for a tensor in the `predictions` dict (output
- from the `model_fn`) to use as the `predictions` input to the
- `signature_fn`. Optional. If `None`, predictions will pass to
- `signature_fn` without filtering.
-* <b>`default_batch_size`</b>: Default batch size of the `Example` placeholder.
-* <b>`exports_to_keep`</b>: Number of exports to keep.
-* <b>`checkpoint_path`</b>: the checkpoint path of the model to be exported. If it is
- `None` (which is default), will use the latest checkpoint in
- export_dir.
-
-##### Returns:
-
- The string path to the exported directory. NB: this functionality was
- added ca. 2016/09/25; clients that depend on the return value may need
- to handle the case where this function returns None because subclasses
- are not returning a value.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.fit(*args, **kwargs)` {#BaseEstimator.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.get_params(deep=True)` {#BaseEstimator.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.get_variable_names()` {#BaseEstimator.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.get_variable_value(name)` {#BaseEstimator.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.model_dir` {#BaseEstimator.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.partial_fit(*args, **kwargs)` {#BaseEstimator.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model takes a long time
-to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
- iterator that returns array of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.predict(*args, **kwargs)` {#BaseEstimator.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x` and `batch_size` must be `None`.
-* <b>`batch_size`</b>: Override default batch size. If set, `input_fn` must be
- `None`.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns all.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- A numpy array of predicted classes or regression values if the
- constructor's `model_fn` returns a `Tensor` for `predictions` or a `dict`
- of numpy arrays if `model_fn` returns a `dict`. Returns an iterable of
- predictions if as_iterable is True.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If x and input_fn are both provided or both `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.BaseEstimator.set_params(**params)` {#BaseEstimator.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The latter have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
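-
-Example (a sketch; `est` stands for an existing estimator instance, and
-the parameter names are illustrative):
-
-```python
-# Update a top-level parameter and a nested one using the
-# <component>__<parameter> convention.
-est.set_params(learning_rate=0.01, optimizer__momentum=0.9)
-```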
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ModeKeys.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ModeKeys.md
deleted file mode 100644
index 83e0bd4119..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ModeKeys.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Standard names for model modes.
-
-The following standard keys are defined:
-
-* `TRAIN`: training mode.
-* `EVAL`: evaluation mode.
-* `INFER`: inference mode.
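-
-Example (a sketch of the typical pattern in a model function):
-
-```python
-import tensorflow as tf
-
-def model_fn(features, labels, mode):
-  # Branch on the standard mode keys to build mode-specific graphs.
-  if mode == tf.contrib.learn.ModeKeys.TRAIN:
-    pass  # build the training graph
-  elif mode == tf.contrib.learn.ModeKeys.EVAL:
-    pass  # build the evaluation graph
-  else:  # tf.contrib.learn.ModeKeys.INFER
-    pass  # build the inference graph
-```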
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ModelFnOps.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ModelFnOps.__new__.md
deleted file mode 100644
index 10dec55e35..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ModelFnOps.__new__.md
+++ /dev/null
@@ -1,54 +0,0 @@
-#### `tf.contrib.learn.ModelFnOps.__new__(cls, mode, predictions=None, loss=None, train_op=None, eval_metric_ops=None, output_alternatives=None, training_chief_hooks=None, training_hooks=None, scaffold=None)` {#ModelFnOps.__new__}
-
-Creates a validated `ModelFnOps` instance.
-
-For a multi-headed model, the predictions dict here will contain the outputs
-of all of the heads. However, at serving time, requests will be made
-specifically for one or more heads, and the RPCs used for these requests may
-differ by problem type (i.e., regression, classification, other). The
-purpose of the output_alternatives dict is to aid in exporting a SavedModel
-from which such head-specific queries can be served. These
-output_alternatives will be combined with input_alternatives (see
-`saved_model_export_utils`) to produce a set of `SignatureDef`s specifying
-the valid requests that can be served from this model.
-
-For a single-headed model, it is still advisable to provide
-output_alternatives with a single entry, because this is how the problem
-type is communicated for export and serving. If output_alternatives is not
-given, the resulting SavedModel will support only one head of unspecified
-type.
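-
-Example (a sketch of a single-entry output_alternatives dict for a
-classification head; the names and shapes are illustrative):
-
-```python
-import tensorflow as tf
-
-probabilities = tf.placeholder(tf.float32, [None, 3])
-classes = tf.argmax(probabilities, 1)
-
-# {submodel_name: (problem_type, {tensor_name: Tensor})}
-output_alternatives = {
-    "default_head": (tf.contrib.learn.ProblemType.CLASSIFICATION,
-                     {"classes": classes, "probabilities": probabilities}),
-}
-```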
-
-##### Args:
-
-
-* <b>`mode`</b>: One of `ModeKeys`. Specifies if this is training, evaluation, or
- prediction.
-* <b>`predictions`</b>: Predictions `Tensor` or dict of `Tensor`.
-* <b>`loss`</b>: Training loss `Tensor`.
-* <b>`train_op`</b>: Op for the training step.
-* <b>`eval_metric_ops`</b>: Dict of metric results keyed by name. The values of the
- dict are the results of calling a metric function, such as `Tensor`.
-* <b>`output_alternatives`</b>: a dict of
- `{submodel_name: (problem_type, {tensor_name: Tensor})}`, where
- `submodel_name` is a submodel identifier that should be consistent
- across the pipeline (here likely taken from the name of each `Head`,
- for models that use them), `problem_type` is a `ProblemType`,
- `tensor_name` is a symbolic name for an output Tensor possibly but not
- necessarily taken from `PredictionKey`, and `Tensor` is the
- corresponding output Tensor itself.
-* <b>`training_chief_hooks`</b>: A list of `SessionRunHook` objects that will be
- run on the chief worker during training.
-* <b>`training_hooks`</b>: A list of `SessionRunHook` objects that will be run on
- all workers during training.
-* <b>`scaffold`</b>: A `tf.train.Scaffold` object that can be used to set
- initialization, saver, and more to be used in training.
-
-##### Returns:
-
- A validated `ModelFnOps` object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If validation fails.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ProblemType.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ProblemType.md
deleted file mode 100644
index 20be4db791..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.ProblemType.md
+++ /dev/null
@@ -1,10 +0,0 @@
-Enum-like values for the type of problem that the model solves.
-
-These values are used when exporting the model to produce the appropriate
-signature function for serving.
-
-The following values are supported:
- UNSPECIFIED: Produces a predict signature_fn.
- CLASSIFICATION: Produces a classify signature_fn.
- LINEAR_REGRESSION: Produces a regression signature_fn.
- LOGISTIC_REGRESSION: Produces a classify signature_fn.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.CaptureVariable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.CaptureVariable.md
deleted file mode 100644
index 4160ed5ec4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.CaptureVariable.md
+++ /dev/null
@@ -1,199 +0,0 @@
-Captures a variable's values into a collection.
-
-This monitor is useful for unit testing. You should exercise caution when
-using this monitor in production, since it never discards values.
-
-This is an `EveryN` monitor and has consistent semantics for `every_n`
-and `first_n`.
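-
-Example (a sketch; the variable name and step counts are illustrative):
-
-```python
-import tensorflow as tf
-
-# Capture "my_var:0" every 10 steps, plus the first step.
-capture = tf.contrib.learn.monitors.CaptureVariable(
-    var_name="my_var:0", every_n=10, first_n=1)
-# Pass it to fit(), e.g. estimator.fit(input_fn=..., monitors=[capture]);
-# afterwards, capture.values maps step numbers to captured values.
-```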
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.__init__(var_name, every_n=100, first_n=1)` {#CaptureVariable.__init__}
-
-Initializes a CaptureVariable monitor.
-
-##### Args:
-
-
-* <b>`var_name`</b>: `string`. The variable name, including suffix (typically ":0").
-* <b>`every_n`</b>: `int`, print every N steps. See `PrintN`.
-* <b>`first_n`</b>: `int`, also print the first N steps. See `PrintN`.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.begin(max_steps=None)` {#CaptureVariable.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.end(session=None)` {#CaptureVariable.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.epoch_begin(epoch)` {#CaptureVariable.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.epoch_end(epoch)` {#CaptureVariable.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.every_n_post_step(step, session)` {#CaptureVariable.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.every_n_step_begin(step)` {#CaptureVariable.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.every_n_step_end(step, outputs)` {#CaptureVariable.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.post_step(step, session)` {#CaptureVariable.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.run_on_all_workers` {#CaptureVariable.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.set_estimator(estimator)` {#CaptureVariable.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.step_begin(step)` {#CaptureVariable.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.step_end(step, output)` {#CaptureVariable.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
- the value that resulted from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CaptureVariable.values` {#CaptureVariable.values}
-
-Returns the values captured so far.
-
-##### Returns:
-
- `dict` mapping `int` step numbers to the values of the variable at the
- respective step.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.ExportMonitor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.ExportMonitor.md
deleted file mode 100644
index bf3fa842a3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.ExportMonitor.md
+++ /dev/null
@@ -1,248 +0,0 @@
-Monitor that exports Estimator every N steps.
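-
-Example (a sketch; the directory and input function are illustrative):
-
-```python
-import tensorflow as tf
-
-def export_input_fn():
-  # Returns (features, labels); labels are unused at serving time.
-  examples = tf.placeholder(tf.string, [None])
-  return {"examples": examples}, None
-
-export_monitor = tf.contrib.learn.monitors.ExportMonitor(
-    every_n_steps=1000,
-    export_dir="/tmp/model_exports",
-    input_fn=export_input_fn,
-    input_feature_key="examples",
-    exports_to_keep=5)
-```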
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.__init__(*args, **kwargs)` {#ExportMonitor.__init__}
-
-Initializes ExportMonitor. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
-Instructions for updating:
-The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will both become required args.
-
-##### Args:
-
-
-* <b>`every_n_steps`</b>: Run monitor every N steps.
-* <b>`export_dir`</b>: str, folder to export to.
-* <b>`input_fn`</b>: A function that takes no argument and returns a tuple of
- (features, labels), where features is a dict of string key to `Tensor`
- and labels is a `Tensor` that's currently not used (and so can be
- `None`).
-* <b>`input_feature_key`</b>: String key into the features dict returned by
- `input_fn` that corresponds to the raw `Example` strings `Tensor` that
- the exported model will take as input. Should be `None` if and only if
- you're passing in a `signature_fn` that does not use the first arg
- (`Tensor` of `Example` strings).
-* <b>`exports_to_keep`</b>: int, number of exports to keep.
-* <b>`signature_fn`</b>: Function that returns a default signature and a named
- signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
- for features and `dict` of `Tensor`s for predictions.
-* <b>`default_batch_size`</b>: Default batch size of the `Example` placeholder.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `input_fn` and `input_feature_key` are not both defined or
- are not both `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.begin(max_steps=None)` {#ExportMonitor.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.end(session=None)` {#ExportMonitor.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.epoch_begin(epoch)` {#ExportMonitor.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.epoch_end(epoch)` {#ExportMonitor.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.every_n_post_step(step, session)` {#ExportMonitor.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.every_n_step_begin(step)` {#ExportMonitor.every_n_step_begin}
-
-Callback before every n'th step begins.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list` of tensors that will be evaluated at this step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.every_n_step_end(step, outputs)` {#ExportMonitor.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.export_dir` {#ExportMonitor.export_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.exports_to_keep` {#ExportMonitor.exports_to_keep}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.last_export_dir` {#ExportMonitor.last_export_dir}
-
-Returns the directory containing the last completed export.
-
-##### Returns:
-
- The string path to the exported directory. NB: this functionality was
- added on 2016/09/25; clients that depend on the return value may need
- to handle the case where this function returns None because the
- estimator being fitted does not yet return a value during export.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.post_step(step, session)` {#ExportMonitor.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.run_on_all_workers` {#ExportMonitor.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.set_estimator(estimator)` {#ExportMonitor.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.signature_fn` {#ExportMonitor.signature_fn}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.step_begin(step)` {#ExportMonitor.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.ExportMonitor.step_end(step, output)` {#ExportMonitor.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
- the value that resulted from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.GraphDump.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.GraphDump.md
deleted file mode 100644
index 8e1fed54c1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.GraphDump.md
+++ /dev/null
@@ -1,163 +0,0 @@
-Dumps almost all tensors in the graph at every step.
-
-Note: this is very expensive; prefer `PrintTensor` in production.
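-
-Example (a sketch comparing two dumps from separate runs; the step and
-tolerance values are illustrative):
-
-```python
-import tensorflow as tf
-
-dump_a = tf.contrib.learn.monitors.GraphDump()
-dump_b = tf.contrib.learn.monitors.GraphDump()
-# ... attach each monitor to a training run via fit(monitors=[...]) ...
-matched, non_matched = dump_a.compare(dump_b, step=100, atol=1e-6)
-```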
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.__init__(ignore_ops=None)` {#GraphDump.__init__}
-
-Initializes GraphDump monitor.
-
-##### Args:
-
-
-* <b>`ignore_ops`</b>: `list` of `string`. Names of ops to ignore.
- If None, `GraphDump.IGNORE_OPS` is used.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.begin(max_steps=None)` {#GraphDump.begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.compare(other_dump, step, atol=1e-06)` {#GraphDump.compare}
-
-Compares two `GraphDump` monitors and returns differences.
-
-##### Args:
-
-
-* <b>`other_dump`</b>: Another `GraphDump` monitor.
-* <b>`step`</b>: `int`, step to compare on.
-* <b>`atol`</b>: `float`, absolute tolerance in comparison of floating arrays.
-
-##### Returns:
-
- Returns tuple:
-
-* <b>`matched`</b>: `list` of keys that matched.
-* <b>`non_matched`</b>: `dict` of keys to tuple of 2 mismatched values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if a key in `data` is missing from `other_dump` at `step`.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.data` {#GraphDump.data}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.end(session=None)` {#GraphDump.end}
-
-Callback at the end of training/evaluation.
-
-##### Args:
-
-
-* <b>`session`</b>: A `tf.Session` object that can be used to run ops.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.epoch_begin(epoch)` {#GraphDump.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.epoch_end(epoch)` {#GraphDump.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.post_step(step, session)` {#GraphDump.post_step}
-
-Callback after the step is finished.
-
-Called after `step_end` and receives a session to perform extra
-`session.run` calls. It is also called if a failure occurred during the step.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, global step of the model.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.run_on_all_workers` {#GraphDump.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.set_estimator(estimator)` {#GraphDump.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.step_begin(step)` {#GraphDump.step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.GraphDump.step_end(step, output)` {#GraphDump.step_end}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.NanLoss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.NanLoss.md
deleted file mode 100644
index d5fb341690..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.NanLoss.md
+++ /dev/null
@@ -1,184 +0,0 @@
-NaN Loss monitor.
-
-Monitors the loss and stops training if it becomes NaN.
-Can either fail with an exception or just stop training.
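-
-Example (a sketch; the loss tensor is illustrative):
-
-```python
-import tensorflow as tf
-
-loss = tf.reduce_mean(tf.square(tf.random_normal([8])))
-nan_monitor = tf.contrib.learn.monitors.NanLoss(
-    loss_tensor=loss, every_n_steps=100, fail_on_nan_loss=True)
-# Pass via fit(monitors=[nan_monitor]); raises if the loss becomes NaN.
-```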
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.__init__(loss_tensor, every_n_steps=100, fail_on_nan_loss=True)` {#NanLoss.__init__}
-
-Initializes NanLoss monitor.
-
-##### Args:
-
-
-* <b>`loss_tensor`</b>: `Tensor`, the loss tensor.
-* <b>`every_n_steps`</b>: `int`, run check every this many steps.
-* <b>`fail_on_nan_loss`</b>: `bool`, whether to raise exception when loss is NaN.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.begin(max_steps=None)` {#NanLoss.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.end(session=None)` {#NanLoss.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.epoch_begin(epoch)` {#NanLoss.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.epoch_end(epoch)` {#NanLoss.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.every_n_post_step(step, session)` {#NanLoss.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.every_n_step_begin(step)` {#NanLoss.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.every_n_step_end(step, outputs)` {#NanLoss.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.post_step(step, session)` {#NanLoss.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.run_on_all_workers` {#NanLoss.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.set_estimator(estimator)` {#NanLoss.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.step_begin(step)` {#NanLoss.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.NanLoss.step_end(step, output)` {#NanLoss.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
- the value that resulted from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.StepCounter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.StepCounter.md
deleted file mode 100644
index 13278b4fb2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.learn.monitors.StepCounter.md
+++ /dev/null
@@ -1,171 +0,0 @@
-Steps per second monitor.
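-
-Example (a sketch; the output directory is illustrative):
-
-```python
-import tensorflow as tf
-
-# Writes steps/sec summaries to output_dir every 100 steps.
-step_counter = tf.contrib.learn.monitors.StepCounter(
-    every_n_steps=100, output_dir="/tmp/train_logs")
-```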
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.__init__(every_n_steps=100, output_dir=None, summary_writer=None)` {#StepCounter.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.begin(max_steps=None)` {#StepCounter.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.end(session=None)` {#StepCounter.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.epoch_begin(epoch)` {#StepCounter.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.epoch_end(epoch)` {#StepCounter.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.every_n_post_step(step, session)` {#StepCounter.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.every_n_step_begin(step)` {#StepCounter.every_n_step_begin}
-
-Callback before every n'th step begins.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list` of tensors that will be evaluated at this step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.every_n_step_end(current_step, outputs)` {#StepCounter.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.post_step(step, session)` {#StepCounter.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.run_on_all_workers` {#StepCounter.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.set_estimator(estimator)` {#StepCounter.set_estimator}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.step_begin(step)` {#StepCounter.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StepCounter.step_end(step, output)` {#StepCounter.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
- the value that resulted from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.embedding_attention_decoder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.embedding_attention_decoder.md
deleted file mode 100644
index 2ad87bab35..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.embedding_attention_decoder.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.contrib.legacy_seq2seq.embedding_attention_decoder(decoder_inputs, initial_state, attention_states, cell, num_symbols, embedding_size, num_heads=1, output_size=None, output_projection=None, feed_previous=False, update_embedding_for_previous=True, dtype=None, scope=None, initial_state_attention=False)` {#embedding_attention_decoder}
-
-RNN decoder with embedding and attention and a pure-decoding option.
-
-##### Args:
-
-
-* <b>`decoder_inputs`</b>: A list of 1D batch-sized int32 Tensors (decoder inputs).
-* <b>`initial_state`</b>: 2D Tensor [batch_size x cell.state_size].
-* <b>`attention_states`</b>: 3D Tensor [batch_size x attn_length x attn_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function.
-* <b>`num_symbols`</b>: Integer, how many symbols come into the embedding.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`num_heads`</b>: Number of attention heads that read from attention_states.
-* <b>`output_size`</b>: Size of the output vectors; if None, use cell.output_size.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
- biases; W has shape [output_size x num_symbols] and B has shape
- [num_symbols]; if provided and feed_previous=True, each fed previous
- output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean; if True, only the first of decoder_inputs will be
- used (the "GO" symbol), and all other decoder inputs will be generated by:
- next = embedding_lookup(embedding, argmax(previous_output)).
- In effect, this implements a greedy decoder. It can also be used
- during training to emulate http://arxiv.org/abs/1506.03099.
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`update_embedding_for_previous`</b>: Boolean; if False and feed_previous=True,
- only the embedding for the first symbol of decoder_inputs (the "GO"
- symbol) will be updated by back propagation. Embeddings for the symbols
- generated from the decoder itself remain unchanged. This parameter has
- no effect if feed_previous=False.
-* <b>`dtype`</b>: The dtype to use for the RNN initial states (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_attention_decoder".
-* <b>`initial_state_attention`</b>: If False (default), initial attentions are zero.
- If True, initialize the attentions from the initial state and attention
- states -- useful when we wish to resume decoding from a previously
- stored decoder state and attention states.
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_size] containing the generated outputs.
-* <b>`state`</b>: The state of each decoder cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When output_projection has the wrong shape.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.embedding_attention_seq2seq.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.embedding_attention_seq2seq.md
deleted file mode 100644
index 6055727edd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.embedding_attention_seq2seq.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.contrib.legacy_seq2seq.embedding_attention_seq2seq(encoder_inputs, decoder_inputs, cell, num_encoder_symbols, num_decoder_symbols, embedding_size, num_heads=1, output_projection=None, feed_previous=False, dtype=None, scope=None, initial_state_attention=False)` {#embedding_attention_seq2seq}
-
-Embedding sequence-to-sequence model with attention.
-
-This model first embeds encoder_inputs by a newly created embedding (of shape
-[num_encoder_symbols x input_size]). Then it runs an RNN to encode
-embedded encoder_inputs into a state vector. It keeps the outputs of this
-RNN at every step to use for attention later. Next, it embeds decoder_inputs
-by another newly created embedding (of shape [num_decoder_symbols x
-input_size]). Then it runs an attention decoder, initialized with the last
-encoder state, on embedded decoder_inputs and attending to encoder outputs.
-
-Warning: when output_projection is None, the size of the attention vectors
-and variables will be made proportional to num_decoder_symbols, which can
-be large.
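-
-Example (a minimal sketch; the sequence lengths, sizes, and the GRU cell
-choice are illustrative):
-
-```python
-import tensorflow as tf
-
-batch_size = 32
-encoder_inputs = [tf.placeholder(tf.int32, [batch_size])
-                  for _ in range(8)]
-decoder_inputs = [tf.placeholder(tf.int32, [batch_size])
-                  for _ in range(10)]
-
-outputs, state = tf.contrib.legacy_seq2seq.embedding_attention_seq2seq(
-    encoder_inputs, decoder_inputs, tf.contrib.rnn.GRUCell(64),
-    num_encoder_symbols=5000, num_decoder_symbols=5000,
-    embedding_size=64, feed_previous=False)
-```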
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`decoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`num_encoder_symbols`</b>: Integer; number of symbols on the encoder side.
-* <b>`num_decoder_symbols`</b>: Integer; number of symbols on the decoder side.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`num_heads`</b>: Number of attention heads that read from attention_states.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
- biases; W has shape [output_size x num_decoder_symbols] and B has
- shape [num_decoder_symbols]; if provided and feed_previous=True, each
- fed previous output will first be multiplied by W and have B added.
-* <b>`feed_previous`</b>: Boolean or scalar Boolean Tensor; if True, only the first
- of decoder_inputs will be used (the "GO" symbol), and all other decoder
- inputs will be taken from previous outputs (as in embedding_rnn_decoder).
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`dtype`</b>: The dtype of the initial RNN state (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_attention_seq2seq".
-* <b>`initial_state_attention`</b>: If False (default), initial attentions are zero.
- If True, initialize the attentions from the initial state and attention
- states.
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x num_decoder_symbols] containing the generated
- outputs.
-* <b>`state`</b>: The state of each decoder cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.rnn_decoder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.rnn_decoder.md
deleted file mode 100644
index c5eb781d62..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.legacy_seq2seq.rnn_decoder.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.legacy_seq2seq.rnn_decoder(decoder_inputs, initial_state, cell, loop_function=None, scope=None)` {#rnn_decoder}
-
-RNN decoder for the sequence-to-sequence model.
-
-##### Args:
-
-
-* <b>`decoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`initial_state`</b>: 2D Tensor with shape [batch_size x cell.state_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`loop_function`</b>: If not None, this function will be applied to the i-th output
- in order to generate the (i+1)-st input, and decoder_inputs will be ignored,
- except for the first element ("GO" symbol). This can be used for decoding,
- but also for training to emulate http://arxiv.org/abs/1506.03099
- (see the sketch after this argument list).
- Signature -- loop_function(prev, i) = next
- * prev is a 2D Tensor of shape [batch_size x output_size],
- * i is an integer, the step number (when advanced control is needed),
- * next is a 2D Tensor of shape [batch_size x input_size].
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn_decoder".
-
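-Example of a loop_function implementing the greedy decoding described
-above (a sketch; the projection and embedding variables are illustrative):
-
-```python
-import tensorflow as tf
-
-vocab_size, input_size, output_size = 10, 4, 4
-embedding = tf.get_variable("embedding", [vocab_size, input_size])
-w = tf.get_variable("proj_w", [output_size, vocab_size])
-b = tf.get_variable("proj_b", [vocab_size])
-
-def loop_function(prev, i):
-  # prev: [batch_size x output_size] output of step i-1. Project it to
-  # the vocabulary, take the argmax, and embed that symbol as the next
-  # [batch_size x input_size] input.
-  prev_symbol = tf.argmax(tf.matmul(prev, w) + b, 1)
-  return tf.nn.embedding_lookup(embedding, prev_symbol)
-```
-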
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_size] containing generated outputs.
-* <b>`state`</b>: The state of each cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
- (Note that in some cases, like basic RNN cell or GRU cell, outputs and
- states can be the same. They are different for LSTM cells though.)
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.set_difference.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.set_difference.md
deleted file mode 100644
index f656fb5e42..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.set_difference.md
+++ /dev/null
@@ -1,64 +0,0 @@
-### `tf.contrib.metrics.set_difference(a, b, aminusb=True, validate_indices=True)` {#set_difference}
-
-Compute set difference of elements in last dimension of `a` and `b`.
-
-All but the last dimension of `a` and `b` must match.
-
-Example:
-
-```python
- a = [
- [
- [
- [1, 2],
- [3],
- ],
- [
- [4],
- [5, 6],
- ],
- ],
- ]
- b = [
- [
- [
- [1, 3],
- [2],
- ],
- [
- [4, 5],
- [5, 6, 7, 8],
- ],
- ],
- ]
- set_difference(a, b, aminusb=True) = [
- [
- [
- [2],
- [3],
- ],
- [
- [],
- [],
- ],
- ],
- ]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices
- must be sorted in row-major order.
-* <b>`b`</b>: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices
- must be sorted in row-major order.
-* <b>`aminusb`</b>: Whether to compute `a - b` (if `True`) or `b - a` (if `False`).
-* <b>`validate_indices`</b>: Whether to validate the order and range of sparse indices
- in `a` and `b`.
-
-##### Returns:
-
- A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but
- the last dimension the same. Elements along the last dimension contain the
- differences.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_auc.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_auc.md
deleted file mode 100644
index 9f405ebd5d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_auc.md
+++ /dev/null
@@ -1,64 +0,0 @@
-### `tf.contrib.metrics.streaming_auc(predictions, labels, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, curve='ROC', name=None)` {#streaming_auc}
-
-Computes the approximate AUC via a Riemann sum.
-
-The `streaming_auc` function creates four local variables, `true_positives`,
-`true_negatives`, `false_positives` and `false_negatives` that are used to
-compute the AUC. To discretize the AUC curve, a linearly spaced set of
-thresholds is used to compute pairs of recall and precision values. The area
-under the ROC-curve is therefore computed using the height of the recall
-values by the false positive rate, while the area under the PR-curve is the
-computed using the height of the precision values by the recall.
-
-This value is ultimately returned as `auc`, an idempotent operation that
-computes the area under a discretized curve of precision versus recall values
-(computed using the aforementioned variables). The `num_thresholds` variable
-controls the degree of discretization with larger numbers of thresholds more
-closely approximating the true AUC. The quality of the approximation may vary
-dramatically depending on `num_thresholds`.
-
-For best results, `predictions` should be distributed approximately uniformly
-in the range [0, 1] and not peaked around 0 or 1. The quality of the AUC
-approximation may be poor if this is not the case.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `auc`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
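-
-Example (a sketch of the typical streaming-metric pattern; the data feed
-is illustrative):
-
-```python
-import tensorflow as tf
-
-predictions = tf.placeholder(tf.float32, [None])
-labels = tf.placeholder(tf.bool, [None])
-auc, update_op = tf.contrib.metrics.streaming_auc(predictions, labels)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())  # metric variables are local
-  for preds, labs in batches:  # `batches` is an illustrative data source
-    sess.run(update_op, {predictions: preds, labels: labs})
-  print(sess.run(auc))
-```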
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`num_thresholds`</b>: The number of thresholds to use when discretizing the roc
- curve.
-* <b>`metrics_collections`</b>: An optional list of collections that `auc` should be
- added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`curve`</b>: Specifies the name of the curve to be computed, 'ROC' [default] or
- 'PR' for the Precision-Recall curve.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`auc`</b>: A scalar `Tensor` representing the current area-under-curve.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables
- appropriately and whose value matches `auc`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_covariance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_covariance.md
deleted file mode 100644
index 6136a4ebf1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_covariance.md
+++ /dev/null
@@ -1,55 +0,0 @@
-### `tf.contrib.metrics.streaming_covariance(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_covariance}
-
-Computes the unbiased sample covariance between `predictions` and `labels`.
-
-The `streaming_covariance` function creates four local variables,
-`comoment`, `mean_prediction`, `mean_label`, and `count`, which are used to
-compute the sample covariance between predictions and labels across multiple
-batches of data. The covariance is ultimately returned as an idempotent
-operation that simply divides `comoment` by `count` - 1. We use `count` - 1
-in order to get an unbiased estimate.
-
-The algorithm used for this online computation is described in
-https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance.
-Specifically, the formula used to combine two sample comoments is
-`C_AB = C_A + C_B + (E[x_A] - E[x_B]) * (E[y_A] - E[y_B]) * n_A * n_B / n_AB`.
-The comoment for a single batch of data is simply
-`sum((x - E[x]) * (y - E[y]))`, optionally weighted.
-
-If `weights` is not None, then it is used to compute weighted comoments,
-means, and count. NOTE: these weights are treated as "frequency weights", as
-opposed to "reliability weights". See discussion of the difference on
-https://wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance
-
-To facilitate the computation of covariance across multiple batches of data,
-the function creates an `update_op` operation, which updates underlying
-variables and returns the updated covariance.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary size.
-* <b>`labels`</b>: A `Tensor` of the same size as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: A `Tensor` representing the current unbiased sample covariance,
- `comoment` / (`count` - 1).
-* <b>`update_op`</b>: An operation that updates the local variables appropriately.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If labels and predictions are of different sizes or if either
- `metrics_collections` or `updates_collections` are not a list or tuple.
-
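-The pairwise combination rule above can be sketched in plain NumPy (an
-illustration of the math, not the op's implementation):
-
-```python
-import numpy as np
-
-def batch_stats(x, y):
-  # Comoment sum((x - E[x]) * (y - E[y])), per-batch means, and count.
-  return np.sum((x - x.mean()) * (y - y.mean())), x.mean(), y.mean(), len(x)
-
-def combine(c_a, mx_a, my_a, n_a, c_b, mx_b, my_b, n_b):
-  n_ab = n_a + n_b
-  return c_a + c_b + (mx_a - mx_b) * (my_a - my_b) * n_a * n_b / n_ab, n_ab
-
-x, y = np.random.randn(100), np.random.randn(100)
-c_ab, n_ab = combine(*batch_stats(x[:60], y[:60]),
-                     *batch_stats(x[60:], y[60:]))
-# Matches the unbiased sample covariance over the full data:
-assert np.isclose(c_ab / (n_ab - 1), np.cov(x, y)[0, 1])
-```
-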
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_false_negatives_at_thresholds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_false_negatives_at_thresholds.md
deleted file mode 100644
index e1b1c77293..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_false_negatives_at_thresholds.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.metrics.streaming_false_negatives_at_thresholds(predictions, labels, thresholds, weights=None)` {#streaming_false_negatives_at_thresholds}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean.md
deleted file mode 100644
index 3919eed9be..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.contrib.metrics.streaming_mean(values, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean}
-
-Computes the (weighted) mean of the given values.
-
-The `streaming_mean` function creates two local variables, `total` and `count`,
-that are used to compute the average of `values`. This average is ultimately
-returned as `mean` which is an idempotent operation that simply divides
-`total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `mean`.
-`update_op` increments `total` with the reduced sum of the product of `values`
-and `weights`, and it increments `count` with the reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`values`</b>: A `Tensor` of arbitrary dimensions.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `values`, and
- must be broadcastable to `values` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `values` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `mean`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op`
- should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean`</b>: A `Tensor` representing the current mean, the value of `total` divided
- by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
-  appropriately and whose value matches `mean`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match `values`,
- or if either `metrics_collections` or `updates_collections` are not a list
- or tuple.
-
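-For example (a minimal sketch assuming the TF 1.x API):
-
-```python
-import tensorflow as tf
-
-values = tf.placeholder(tf.float32, shape=[None])
-mean, update_op = tf.contrib.metrics.streaming_mean(values)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op, {values: [1., 2., 3.]})  # total=6,  count=3
-  sess.run(update_op, {values: [4., 5.]})      # total=15, count=5
-  print(sess.run(mean))                        # 3.0
-```
-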
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean_relative_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean_relative_error.md
deleted file mode 100644
index 1270d60a13..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_mean_relative_error.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.contrib.metrics.streaming_mean_relative_error(predictions, labels, normalizer, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_relative_error}
-
-Computes the mean relative error by normalizing with the given values.
-
-The `streaming_mean_relative_error` function creates two local variables,
-`total` and `count`, that are used to compute the mean relative error.
-This average is weighted by `weights`, and it is ultimately returned as
-`mean_relative_error`: an idempotent operation that simply divides `total` by
-`count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`mean_relative_error`. Internally, a `relative_errors` operation divides the
-absolute value of the differences between `predictions` and `labels` by the
-`normalizer`. Then `update_op` increments `total` with the reduced sum of the
-product of `weights` and `relative_errors`, and it increments `count` with the
-reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`normalizer`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that
- `mean_relative_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_relative_error`</b>: A `Tensor` representing the current mean, the value of
- `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `mean_relative_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
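-The per-batch update described above amounts to the following NumPy sketch
-(an illustration, not the op's implementation):
-
-```python
-import numpy as np
-
-def mean_relative_error_np(predictions, labels, normalizer, weights=None):
-  relative_errors = np.abs(predictions - labels) / normalizer
-  if weights is None:
-    weights = np.ones_like(relative_errors)
-  total = np.sum(weights * relative_errors)  # what `update_op` adds to total
-  count = np.sum(weights)                    # what `update_op` adds to count
-  return total / count
-
-print(mean_relative_error_np(np.array([2., 4.]),
-                             np.array([1., 5.]),
-                             np.array([1., 5.])))  # (1/1 + 1/5) / 2 = 0.6
-```
-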
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_sparse_precision_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_sparse_precision_at_k.md
deleted file mode 100644
index d68243c573..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.metrics.streaming_sparse_precision_at_k.md
+++ /dev/null
@@ -1,77 +0,0 @@
-### `tf.contrib.metrics.streaming_sparse_precision_at_k(predictions, labels, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_precision_at_k}
-
-Computes precision@k of the predictions with respect to sparse labels.
-
-If `class_id` is not specified, we calculate precision as the ratio of true
- positives (i.e., correct predictions, items in the top `k` highest
- `predictions` that are found in the corresponding row in `labels`) to
- positives (all top `k` `predictions`).
-If `class_id` is specified, we calculate precision by considering only the
- rows in the batch for which `class_id` is in the top `k` highest
- `predictions`, and computing the fraction of them for which `class_id` is
- in the corresponding row in `labels`.
-
-We expect precision to decrease as `k` increases.
-
-`streaming_sparse_precision_at_k` creates two local variables,
-`true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute
-the precision@k frequency. This frequency is ultimately returned as
-`precision_at_<k>`: an idempotent operation that simply divides
-`true_positive_at_<k>` by total (`true_positive_at_<k>` +
-`false_positive_at_<k>`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor`
-indicating the top `k` `predictions`. Set operations applied to `top_k` and
-`labels` calculate the true positives and false positives weighted by
-`weights`. Then `update_op` increments `true_positive_at_<k>` and
-`false_positive_at_<k>` using these values.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Float `Tensor` with shape [D1, ... DN, num_classes] where
-  N >= 1. Commonly, N=1 and predictions has shape [batch_size, num_classes].
- The final dimension contains the logit values for each class. [D1, ... DN]
- must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match
- `predictions`. Values should be in range [0, num_classes), where
- num_classes is the last dimension of `predictions`. Values outside this
- range are ignored.
-* <b>`k`</b>: Integer, k for @k metric.
-* <b>`class_id`</b>: Integer class ID for which we want binary metrics. This should be
-  in range [0, num_classes), where num_classes is the last dimension of
- `predictions`. If `class_id` is outside this range, the method returns
- NAN.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or n-1, where n is the rank of
- `labels`. If the latter, it must be broadcastable to `labels` (i.e., all
- dimensions must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependent ops.
-
-##### Returns:
-
-
-* <b>`precision`</b>: Scalar `float64` `Tensor` with the value of `true_positives`
- divided by the sum of `true_positives` and `false_positives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_positives` variables appropriately, and whose value matches
- `precision`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match
- `predictions`, or if either `metrics_collections` or `updates_collections`
- are not a list or tuple.
-
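-A minimal sketch (assuming the TF 1.x API):
-
-```python
-import tensorflow as tf
-
-# Two examples, three classes; labels hold sparse class ids.
-predictions = tf.constant([[0.1, 0.6, 0.3],
-                           [0.8, 0.1, 0.1]])
-labels = tf.constant([[1], [2]], dtype=tf.int64)
-
-precision, update_op = tf.contrib.metrics.streaming_sparse_precision_at_k(
-    predictions, labels, k=1)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)
-  print(sess.run(precision))  # 0.5: the top-1 prediction is correct for
-                              # row 0 (class 1) but not for row 1.
-```
-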
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.opt.ScipyOptimizerInterface.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.opt.ScipyOptimizerInterface.md
deleted file mode 100644
index 63bf919f5c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.opt.ScipyOptimizerInterface.md
+++ /dev/null
@@ -1,87 +0,0 @@
-Wrapper allowing `scipy.optimize.minimize` to operate on a `tf.Session`.
-
-Example:
-
-```python
-vector = tf.Variable([7., 7.], name='vector')
-
-# Make vector norm as small as possible.
-loss = tf.reduce_sum(tf.square(vector))
-
-optimizer = ScipyOptimizerInterface(loss, options={'maxiter': 100})
-
-with tf.Session() as session:
- optimizer.minimize(session)
-
-# The value of vector should now be [0., 0.].
-```
-
-Example with constraints:
-
-```python
-vector = tf.Variable([7., 7.], name='vector')
-
-# Make vector norm as small as possible.
-loss = tf.reduce_sum(tf.square(vector))
-# Ensure the vector's y component is = 1.
-equalities = [vector[1] - 1.]
-# Ensure the vector's x component is >= 1.
-inequalities = [vector[0] - 1.]
-
-# Our default SciPy optimization algorithm, L-BFGS-B, does not support
-# general constraints. Thus we use SLSQP instead.
-optimizer = ScipyOptimizerInterface(
- loss, equalities=equalities, inequalities=inequalities, method='SLSQP')
-
-with tf.Session() as session:
- optimizer.minimize(session)
-
-# The value of vector should now be [1., 1.].
-```
-- - -
-
-#### `tf.contrib.opt.ScipyOptimizerInterface.__init__(loss, var_list=None, equalities=None, inequalities=None, **optimizer_kwargs)` {#ScipyOptimizerInterface.__init__}
-
-Initialize a new interface instance.
-
-##### Args:
-
-
-* <b>`loss`</b>: A scalar `Tensor` to be minimized.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`equalities`</b>: Optional list of equality constraint scalar `Tensor`s to be
- held equal to zero.
-* <b>`inequalities`</b>: Optional list of inequality constraint scalar `Tensor`s
- to be kept nonnegative.
-* <b>`**optimizer_kwargs`</b>: Other subclass-specific keyword arguments.
-
-
-- - -
-
-#### `tf.contrib.opt.ScipyOptimizerInterface.minimize(session=None, feed_dict=None, fetches=None, step_callback=None, loss_callback=None)` {#ScipyOptimizerInterface.minimize}
-
-Minimize a scalar `Tensor`.
-
-Variables subject to optimization are updated in-place at the end of
-optimization.
-
-Note that this method does *not* just return a minimization `Op`, unlike
-`Optimizer.minimize()`; instead it actually performs minimization by
-executing commands to control a `Session`.
-
-##### Args:
-
-
-* <b>`session`</b>: A `Session` instance.
-* <b>`feed_dict`</b>: A feed dict to be passed to calls to `session.run`.
-* <b>`fetches`</b>: A list of `Tensor`s to fetch and supply to `loss_callback`
- as positional arguments.
-* <b>`step_callback`</b>: A function to be called at each optimization step;
- arguments are the current values of all optimization variables
- flattened into a single vector.
-* <b>`loss_callback`</b>: A function to be called every time the loss and gradients
- are computed, with evaluated fetches supplied as positional arguments.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.rnn.GRUBlockCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.rnn.GRUBlockCell.md
deleted file mode 100644
index e6b8d4fc8b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.rnn.GRUBlockCell.md
+++ /dev/null
@@ -1,84 +0,0 @@
-Block GRU cell implementation.
-
-The implementation is based on: http://arxiv.org/abs/1406.1078
-Computes the GRU cell forward propagation for 1 time step.
-
-This kernel op implements the following mathematical equations:
-
-Biases are initialized with:
-
-* `b_ru` - constant_initializer(1.0)
-* `b_c` - constant_initializer(0.0)
-
-```
-x_h_prev = [x, h_prev]
-
-[r_bar u_bar] = x_h_prev * w_ru + b_ru
-
-r = sigmoid(r_bar)
-u = sigmoid(u_bar)
-
-h_prevr = h_prev \circ r
-
-x_h_prevr = [x h_prevr]
-
-c_bar = x_h_prevr * w_c + b_c
-c = tanh(c_bar)
-
-h = (1-u) \circ c + u \circ h_prev
-```
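-
-A NumPy sketch of one forward step following these equations (an illustration
-of the math, not the fused kernel):
-
-```python
-import numpy as np
-
-def sigmoid(z):
-  return 1. / (1. + np.exp(-z))
-
-def gru_step(x, h_prev, w_ru, b_ru, w_c, b_c):
-  x_h_prev = np.concatenate([x, h_prev], axis=1)
-  r, u = np.split(sigmoid(x_h_prev.dot(w_ru) + b_ru), 2, axis=1)
-  x_h_prevr = np.concatenate([x, h_prev * r], axis=1)
-  c = np.tanh(x_h_prevr.dot(w_c) + b_c)
-  return (1. - u) * c + u * h_prev
-
-batch, input_size, cell_size = 2, 3, 4
-x = np.random.randn(batch, input_size)
-h_prev = np.zeros((batch, cell_size))
-w_ru = np.random.randn(input_size + cell_size, 2 * cell_size)
-b_ru = np.ones(2 * cell_size)   # b_ru starts at 1.0, as noted above
-w_c = np.random.randn(input_size + cell_size, cell_size)
-b_c = np.zeros(cell_size)       # b_c starts at 0.0
-h = gru_step(x, h_prev, w_ru, b_ru, w_c, b_c)
-```
-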
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.__call__(x, h_prev, scope=None)` {#GRUBlockCell.__call__}
-
-GRU cell.
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.__init__(cell_size)` {#GRUBlockCell.__init__}
-
-Initialize the Block GRU cell.
-
-##### Args:
-
-
-* <b>`cell_size`</b>: int, GRU cell size.
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.output_size` {#GRUBlockCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.state_size` {#GRUBlockCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.GRUBlockCell.zero_state(batch_size, dtype)` {#GRUBlockCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is an
-  `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.rnn.static_bidirectional_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.rnn.static_bidirectional_rnn.md
deleted file mode 100644
index b4cc966e32..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.contrib.rnn.static_bidirectional_rnn.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.contrib.rnn.static_bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None)` {#static_bidirectional_rnn}
-
-Creates a bidirectional recurrent neural network.
-
-Similar to the unidirectional case (`rnn`), but builds independent forward and
-backward RNNs whose final forward and backward outputs are depth-concatenated,
-so the output has the format
-[time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of
-forward and backward cell must match. The initial state for both directions
-is zero by default (but can be set optionally) and no intermediate states are
-ever returned -- the network is fully unrolled for the given (passed in)
-length(s) of the sequence(s) or completely unrolled if length(s) is not given.
-
-##### Args:
-
-
-* <b>`cell_fw`</b>: An instance of RNNCell, to be used for forward direction.
-* <b>`cell_bw`</b>: An instance of RNNCell, to be used for backward direction.
-* <b>`inputs`</b>: A length T list of inputs, each a tensor of shape
- [batch_size, input_size], or a nested tuple of such elements.
-* <b>`initial_state_fw`</b>: (optional) An initial state for the forward RNN.
- This must be a tensor of appropriate type and shape
- `[batch_size, cell_fw.state_size]`.
- If `cell_fw.state_size` is a tuple, this should be a tuple of
- tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
-* <b>`initial_state_bw`</b>: (optional) Same as for `initial_state_fw`, but using
- the corresponding properties of `cell_bw`.
-* <b>`dtype`</b>: (optional) The data type for the initial state. Required if
- either of the initial states are not provided.
-* <b>`sequence_length`</b>: (optional) An int32/int64 vector, size `[batch_size]`,
- containing the actual lengths for each of the sequences.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "bidirectional_rnn"
-
-##### Returns:
-
- A tuple (outputs, output_state_fw, output_state_bw) where:
- outputs is a length `T` list of outputs (one for each input), which
- are depth-concatenated forward and backward outputs.
- output_state_fw is the final state of the forward rnn.
- output_state_bw is the final state of the backward rnn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
-* <b>`ValueError`</b>: If inputs is None or an empty list.
-
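-A minimal sketch (assuming TF 1.x and `tf.contrib.rnn.BasicLSTMCell`):
-
-```python
-import tensorflow as tf
-
-cell_fw = tf.contrib.rnn.BasicLSTMCell(8)
-cell_bw = tf.contrib.rnn.BasicLSTMCell(8)
-# A length-5 sequence of [batch_size, input_size] tensors.
-inputs = [tf.placeholder(tf.float32, [None, 4]) for _ in range(5)]
-
-outputs, state_fw, state_bw = tf.contrib.rnn.static_bidirectional_rnn(
-    cell_fw, cell_bw, inputs, dtype=tf.float32)
-# Each element of `outputs` has shape [batch_size, 16]: the forward and
-# backward outputs, depth-concatenated.
-```
-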
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.digamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.digamma.md
deleted file mode 100644
index 8729e7ecfe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.digamma.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.digamma(x, name=None)` {#digamma}
-
-Computes Psi, the derivative of Lgamma (the log of the absolute value of
-`Gamma(x)`), element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.edit_distance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.edit_distance.md
deleted file mode 100644
index e5f6471817..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.edit_distance.md
+++ /dev/null
@@ -1,65 +0,0 @@
-### `tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')` {#edit_distance}
-
-Computes the Levenshtein distance between sequences.
-
-This operation takes variable-length sequences (`hypothesis` and `truth`),
-each provided as a `SparseTensor`, and computes the Levenshtein distance.
-You can normalize the edit distance by length of `truth` by setting
-`normalize` to true.
-
-For example, given the following input:
-
-```python
-# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
-# (0,0) = ["a"]
-# (1,0) = ["b"]
-hypothesis = tf.SparseTensor(
- [[0, 0, 0],
- [1, 0, 0]],
- ["a", "b"]
- (2, 1, 1))
-
-# 'truth' is a tensor of shape `[2, 2]` with variable-length values:
-# (0,0) = []
-# (0,1) = ["a"]
-# (1,0) = ["b", "c"]
-# (1,1) = ["a"]
-truth = tf.SparseTensor(
- [[0, 1, 0],
- [1, 0, 0],
- [1, 0, 1],
-     [1, 1, 0]],
- ["a", "b", "c", "a"],
- (2, 2, 2))
-
-normalize = True
-```
-
-This operation would return the following:
-
-```python
-# 'output' is a tensor of shape `[2, 2]` with edit distances normalized
-# by 'truth' lengths.
-output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis
- [0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis
-```
-
-##### Args:
-
-
-* <b>`hypothesis`</b>: A `SparseTensor` containing hypothesis sequences.
-* <b>`truth`</b>: A `SparseTensor` containing truth sequences.
-* <b>`normalize`</b>: A `bool`. If `True`, normalizes the Levenshtein distance by
-  the length of `truth`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A dense `Tensor` with rank `R - 1`, where R is the rank of the
- `SparseTensor` inputs `hypothesis` and `truth`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If either `hypothesis` or `truth` are not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.encode_base64.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.encode_base64.md
deleted file mode 100644
index 20fef36bcb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.encode_base64.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.encode_base64(input, pad=None, name=None)` {#encode_base64}
-
-Encode strings into web-safe base64 format.
-
-Refer to the following article for more information on base64 format:
-en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the
-end so that the encoded string has a length that is a multiple of 4. See the
-Padding section of the link above.
-
-Web-safe means that the encoder uses - and _ instead of + and /.
-
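-For example (the exact encoded bytes shown are illustrative; they match
-Python's `base64.urlsafe_b64encode` minus the `=` padding):
-
-```python
-import tensorflow as tf
-
-encoded = tf.encode_base64(tf.constant(["hello??>"]))  # pad defaults to False
-
-with tf.Session() as sess:
-  print(sess.run(encoded))  # [b'aGVsbG8_Pz4']: '_' where standard base64
-                            # would emit '/'
-```
-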
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. Strings to be encoded.
-* <b>`pad`</b>: An optional `bool`. Defaults to `False`.
- Bool whether padding is applied at the ends.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. Input strings encoded in base64.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.ResourceExhaustedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.ResourceExhaustedError.md
deleted file mode 100644
index a01e255be5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.errors.ResourceExhaustedError.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Some resource has been exhausted.
-
-For example, this error might be raised if a per-user quota is
-exhausted, or perhaps the entire file system is out of space.
-
-- - -
-
-#### `tf.errors.ResourceExhaustedError.__init__(node_def, op, message)` {#ResourceExhaustedError.__init__}
-
-Creates a `ResourceExhaustedError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.expand_dims.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.expand_dims.md
deleted file mode 100644
index 53272b295f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.expand_dims.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.expand_dims(input, axis=None, name=None, dim=None)` {#expand_dims}
-
-Inserts a dimension of 1 into a tensor's shape.
-
-Given a tensor `input`, this operation inserts a dimension of 1 at the
-dimension index `axis` of `input`'s shape. The dimension index `axis` starts
-at zero; if you specify a negative number for `axis` it is counted backward
-from the end.
-
-This operation is useful if you want to add a batch dimension to a single
-element. For example, if you have a single image of shape `[height, width,
-channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`,
-which will make the shape `[1, height, width, channels]`.
-
-Other examples:
-
-```python
-# 't' is a tensor of shape [2]
-shape(expand_dims(t, 0)) ==> [1, 2]
-shape(expand_dims(t, 1)) ==> [2, 1]
-shape(expand_dims(t, -1)) ==> [2, 1]
-
-# 't2' is a tensor of shape [2, 3, 5]
-shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
-shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
-shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
-```
-
-This operation requires that:
-
-`-1-input.dims() <= dim <= input.dims()`
-
-This operation is related to `squeeze()`, which removes dimensions of
-size 1.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`axis`</b>: 0-D (scalar). Specifies the dimension index at which to
- expand the shape of `input`.
-* <b>`name`</b>: The name of the output `Tensor`.
-* <b>`dim`</b>: 0-D (scalar). Equivalent to `axis`, to be deprecated.
-
-##### Returns:
-
- A `Tensor` with the same data as `input`, but its shape has an additional
- dimension of size 1 added.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if both `dim` and `axis` are specified.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floor_div.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floor_div.md
deleted file mode 100644
index da18338be6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.floor_div.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.floor_div(x, y, name=None)` {#floor_div}
-
-Returns x // y element-wise.
-
-*NOTE*: `FloorDiv` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.gather_nd.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.gather_nd.md
deleted file mode 100644
index 22cccc9d9a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.gather_nd.md
+++ /dev/null
@@ -1,110 +0,0 @@
-### `tf.gather_nd(params, indices, name=None)` {#gather_nd}
-
-Gather values or slices from `params` according to `indices`.
-
-`params` is a Tensor of rank `P` and `indices` is a Tensor of rank `Q`.
-
-`indices` must be an integer tensor, containing indices into `params`.
-It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
-dimension of `params`.
-
-Produces an output tensor with shape
-
-```
-[d_0, ..., d_{Q-2}, params.shape[K], ..., params.shape[P-1]].
-```
-
-Some examples follow.
-
-Simple indexing into a matrix:
-
-```python
- indices = [[0, 0], [1, 1]]
- params = [['a', 'b'], ['c', 'd']]
- output = ['a', 'd']
-```
-
-Slice indexing into a matrix:
-
-```python
- indices = [[1], [0]]
- params = [['a', 'b'], ['c', 'd']]
- output = [['c', 'd'], ['a', 'b']]
-```
-
-Indexing into a 3-tensor:
-
-```python
- indices = [[1]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [[['a1', 'b1'], ['c1', 'd1']]]
-
-
- indices = [[0, 1], [1, 0]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [['c0', 'd0'], ['a1', 'b1']]
-
-
- indices = [[0, 0, 1], [1, 0, 1]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = ['b0', 'b1']
-```
-
-Batched indexing into a matrix:
-
-```python
- indices = [[[0, 0]], [[0, 1]]]
- params = [['a', 'b'], ['c', 'd']]
- output = [['a'], ['b']]
-```
-
-Batched slice indexing into a matrix:
-
-```python
- indices = [[[1]], [[0]]]
- params = [['a', 'b'], ['c', 'd']]
- output = [[['c', 'd']], [['a', 'b']]]
-```
-
-Batched indexing into a 3-tensor:
-
-```python
- indices = [[[1]], [[0]]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [[[['a1', 'b1'], ['c1', 'd1']]],
- [[['a0', 'b0'], ['c0', 'd0']]]]
-
- indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [[['c0', 'd0'], ['a1', 'b1']],
- [['a0', 'b0'], ['c1', 'd1']]]
-
-
- indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
- params = [[['a0', 'b0'], ['c0', 'd0']],
- [['a1', 'b1'], ['c1', 'd1']]]
- output = [['b0', 'b1'], ['d0', 'c1']]
-```
-
-##### Args:
-
-
-* <b>`params`</b>: A `Tensor`. `P-D`. The tensor from which to gather values.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- `Q-D`. Index tensor having shape `[d_0, ..., d_{Q-2}, K]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `params`.
- `(P+Q-K-1)-D`. Values from `params` gathered from indices given by
- `indices`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_variable_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_variable_scope.md
deleted file mode 100644
index 4a0d3bc775..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.get_variable_scope.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.get_variable_scope()` {#get_variable_scope}
-
-Returns the current variable scope.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.global_variables_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.global_variables_initializer.md
deleted file mode 100644
index b1ebdcc327..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.global_variables_initializer.md
+++ /dev/null
@@ -1,10 +0,0 @@
-### `tf.global_variables_initializer()` {#global_variables_initializer}
-
-Returns an Op that initializes global variables.
-
-This is just a shortcut for `variables_initializer(global_variables())`.
-
-##### Returns:
-
- An Op that initializes global variables in the graph.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.identity.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.identity.md
deleted file mode 100644
index 13f1318601..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.identity.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.identity(input, name=None)` {#identity}
-
-Return a tensor with the same shape and contents as the input tensor or value.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.imag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.imag.md
deleted file mode 100644
index e6a0ed1a39..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.imag.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.imag(input, name=None)` {#imag}
-
-Returns the imaginary part of a complex number.
-
-Given a tensor `input` of complex numbers, this operation returns a tensor of
-type `float32` or `float64` that is the imaginary part of each element in
-`input`. All elements in `input` must be complex numbers of the form \(a +
-bj\), where *a* is the real part and *b* is the imaginary part, which this
-operation returns.
-
-For example:
-
-```
-# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
-tf.imag(input) ==> [4.75, 5.75]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `complex64`,
- `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32` or `float64`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.crop_and_resize.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.crop_and_resize.md
deleted file mode 100644
index aace65153a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.crop_and_resize.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.image.crop_and_resize(image, boxes, box_ind, crop_size, method=None, extrapolation_value=None, name=None)` {#crop_and_resize}
-
-Extracts crops from the input image tensor and bilinearly resizes them
-(possibly with aspect ratio change) to a common output size specified by
-`crop_size`. This is more general than the `crop_to_bounding_box` op which
-extracts a fixed size slice from the input image and does not allow resizing
-or aspect ratio change.
-
-Returns a tensor with `crops` from the input `image` at positions defined at the
-bounding box locations in `boxes`. The cropped boxes are all resized (with
-bilinear interpolation) to a fixed `size = [crop_height, crop_width]`. The
-result is a 4-D tensor `[num_boxes, crop_height, crop_width, depth]`.
-
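-A minimal sketch (the box coordinates and sizes here are illustrative):
-
-```python
-import tensorflow as tf
-
-image = tf.placeholder(tf.float32, [None, 256, 256, 3])
-# One box covering the top-left quadrant of image 0, as [y1, x1, y2, x2]
-# in normalized coordinates.
-boxes = tf.constant([[0.0, 0.0, 0.5, 0.5]])
-box_ind = tf.constant([0])
-crops = tf.image.crop_and_resize(image, boxes, box_ind, crop_size=[64, 64])
-# `crops` has shape [1, 64, 64, 3].
-```
-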
-##### Args:
-
-
-* <b>`image`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- A 4-D tensor of shape `[batch, image_height, image_width, depth]`.
- Both `image_height` and `image_width` need to be positive.
-* <b>`boxes`</b>: A `Tensor` of type `float32`.
- A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor
- specifies the coordinates of a box in the `box_ind[i]` image and is specified
- in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of
-  `y` is mapped to the image coordinate at `y * (image_height - 1)`, so the
-  `[0, 1]` interval of normalized image height is mapped to
-  `[0, image_height - 1]` in image height coordinates. We do allow y1 > y2, in
- which case the sampled crop is an up-down flipped version of the original
- image. The width dimension is treated similarly. Normalized coordinates
- outside the `[0, 1]` range are allowed, in which case we use
- `extrapolation_value` to extrapolate the input image values.
-* <b>`box_ind`</b>: A `Tensor` of type `int32`.
- A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.
- The value of `box_ind[i]` specifies the image that the `i`-th box refers to.
-* <b>`crop_size`</b>: A `Tensor` of type `int32`.
- A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All
- cropped image patches are resized to this size. The aspect ratio of the image
- content is not preserved. Both `crop_height` and `crop_width` need to be
- positive.
-* <b>`method`</b>: An optional `string` from: `"bilinear"`. Defaults to `"bilinear"`.
- A string specifying the interpolation method. Only 'bilinear' is
- supported for now.
-* <b>`extrapolation_value`</b>: An optional `float`. Defaults to `0`.
- Value used for extrapolation, when applicable.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
- A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.encode_jpeg.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.encode_jpeg.md
deleted file mode 100644
index 24b1886c10..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.image.encode_jpeg.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None)` {#encode_jpeg}
-
-JPEG-encode an image.
-
-`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.
-
-The attr `format` can be used to override the color format of the encoded
-output. Values can be:
-
-* `''`: Use a default format based on the number of channels in the image.
-* `grayscale`: Output a grayscale JPEG image. The `channels` dimension
- of `image` must be 1.
-* `rgb`: Output an RGB JPEG image. The `channels` dimension
- of `image` must be 3.
-
-If `format` is not specified or is the empty string, a default format is
-picked based on the number of channels in `image`:
-
-* 1: Output a grayscale image.
-* 3: Output an RGB image.
-
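-For example (the output path is illustrative):
-
-```python
-import numpy as np
-import tensorflow as tf
-
-image = tf.placeholder(tf.uint8, [None, None, 3])
-jpeg = tf.image.encode_jpeg(image, quality=90, progressive=True)
-
-with tf.Session() as sess:
-  data = sess.run(jpeg, {image: np.zeros([32, 32, 3], np.uint8)})
-  with open('/tmp/out.jpg', 'wb') as f:
-    f.write(data)
-```
-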
-##### Args:
-
-
-* <b>`image`</b>: A `Tensor` of type `uint8`.
- 3-D with shape `[height, width, channels]`.
-* <b>`format`</b>: An optional `string` from: `"", "grayscale", "rgb"`. Defaults to `""`.
- Per pixel image format.
-* <b>`quality`</b>: An optional `int`. Defaults to `95`.
- Quality of the compression from 0 to 100 (higher is better and slower).
-* <b>`progressive`</b>: An optional `bool`. Defaults to `False`.
- If True, create a JPEG that loads progressively (coarse to fine).
-* <b>`optimize_size`</b>: An optional `bool`. Defaults to `False`.
- If True, spend CPU/RAM to reduce size with no quality change.
-* <b>`chroma_downsampling`</b>: An optional `bool`. Defaults to `True`.
- See http://en.wikipedia.org/wiki/Chroma_subsampling.
-* <b>`density_unit`</b>: An optional `string` from: `"in", "cm"`. Defaults to `"in"`.
- Unit used to specify `x_density` and `y_density`:
- pixels per inch (`'in'`) or centimeter (`'cm'`).
-* <b>`x_density`</b>: An optional `int`. Defaults to `300`.
- Horizontal pixels per density unit.
-* <b>`y_density`</b>: An optional `int`. Defaults to `300`.
- Vertical pixels per density unit.
-* <b>`xmp_metadata`</b>: An optional `string`. Defaults to `""`.
- If not empty, embed this XMP metadata in the image header.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. 0-D. JPEG-encoded image.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_all_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_all_variables.md
deleted file mode 100644
index ec240fc608..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_all_variables.md
+++ /dev/null
@@ -1,8 +0,0 @@
-### `tf.initialize_all_variables(*args, **kwargs)` {#initialize_all_variables}
-
-See `tf.global_variables_initializer`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Use `tf.global_variables_initializer` instead.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_local_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_local_variables.md
deleted file mode 100644
index a6c1395e91..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.initialize_local_variables.md
+++ /dev/null
@@ -1,8 +0,0 @@
-### `tf.initialize_local_variables(*args, **kwargs)` {#initialize_local_variables}
-
-See `tf.local_variables_initializer`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Use `tf.local_variables_initializer` instead.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.is_variable_initialized.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.is_variable_initialized.md
deleted file mode 100644
index d8383439ab..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.is_variable_initialized.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.is_variable_initialized(variable)` {#is_variable_initialized}
-
-Tests if a variable has been initialized.
-
-##### Args:
-
-
-* <b>`variable`</b>: A `Variable`.
-
-##### Returns:
-
- Returns a scalar boolean Tensor, `True` if the variable has been
- initialized, `False` otherwise.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.local_variables_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.local_variables_initializer.md
deleted file mode 100644
index 3f726bdf7a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.local_variables_initializer.md
+++ /dev/null
@@ -1,10 +0,0 @@
-### `tf.local_variables_initializer()` {#local_variables_initializer}
-
-Returns an Op that initializes all local variables.
-
-This is just a shortcut for `variables_initializer(local_variables())`.
-
-##### Returns:
-
- An Op that initializes all local variables in the graph.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matmul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matmul.md
deleted file mode 100644
index 69079ecabe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.matmul.md
+++ /dev/null
@@ -1,90 +0,0 @@
-### `tf.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)` {#matmul}
-
-Multiplies matrix `a` by matrix `b`, producing `a` * `b`.
-
-The inputs must be matrices (or tensors of rank > 2, representing batches of
-matrices), with matching inner dimensions, possibly after transposition.
-
-Both matrices must be of the same type. The supported types are:
-`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.
-
-Either matrix can be transposed or adjointed (conjugated and transposed) on
-the fly by setting one of the corresponding flag to `True`. These are `False`
-by default.
-
-If one or both of the matrices contain a lot of zeros, a more efficient
-multiplication algorithm can be used by setting the corresponding
-`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.
-This optimization is only available for plain matrices (rank-2 tensors) with
-datatypes `bfloat16` or `float32`.
-
-For example:
-
-```python
-# 2-D tensor `a`
-a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1 2 3]
-                                                      [4 5 6]]
-# 2-D tensor `b`
-b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7 8]
-                                                         [9 10]
-                                                         [11 12]]
-c = tf.matmul(a, b) => [[58 64]
-                        [139 154]]
-
-
-# 3-D tensor `a`
-a = tf.constant(np.arange(1, 13, dtype=np.int32),
-                shape=[2, 2, 3]) => [[[ 1  2  3]
-                                      [ 4  5  6]],
-                                     [[ 7  8  9]
-                                      [10 11 12]]]
-
-# 3-D tensor `b`
-b = tf.constant(np.arange(13, 25, dtype=np.int32),
-                shape=[2, 3, 2]) => [[[13 14]
-                                      [15 16]
-                                      [17 18]],
-                                     [[19 20]
-                                      [21 22]
-                                      [23 24]]]
-c = tf.matmul(a, b) => [[[ 94 100]
-                         [229 244]],
-                        [[508 532]
-                         [697 730]]]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`,
- `complex128` and rank > 1.
-* <b>`b`</b>: `Tensor` with same type and rank as `a`.
-* <b>`transpose_a`</b>: If `True`, `a` is transposed before multiplication.
-* <b>`transpose_b`</b>: If `True`, `b` is transposed before multiplication.
-* <b>`adjoint_a`</b>: If `True`, `a` is conjugated and transposed before
- multiplication.
-* <b>`adjoint_b`</b>: If `True`, `b` is conjugated and transposed before
- multiplication.
-* <b>`a_is_sparse`</b>: If `True`, `a` is treated as a sparse matrix.
-* <b>`b_is_sparse`</b>: If `True`, `b` is treated as a sparse matrix.
-* <b>`name`</b>: Name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same type as `a` and `b` where each inner-most matrix is
- the product of the corresponding matrices in `a` and `b`, e.g. if all
- transpose or adjoint attributes are `False`:
-
- `output`[..., i, j] = sum_k (`a`[..., i, k] * `b`[..., k, j]),
- for all indices i, j.
-
-
-* <b>`Note`</b>: This is matrix product, not element-wise product.
-
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If transpose_a and adjoint_a, or transpose_b and adjoint_b
- are both set to True.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.minimum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.minimum.md
deleted file mode 100644
index 9bcd03f6e7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.minimum.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.minimum(x, y, name=None)` {#minimum}
-
-Returns the min of x and y (i.e. x < y ? x : y) element-wise.
-
-*NOTE*: `Minimum` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.conv3d_backprop_filter_v2.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.conv3d_backprop_filter_v2.md
deleted file mode 100644
index 1a48a6f0e0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.conv3d_backprop_filter_v2.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.nn.conv3d_backprop_filter_v2(input, filter_sizes, out_backprop, strides, padding, name=None)` {#conv3d_backprop_filter_v2}
-
-Computes the gradients of 3-D convolution with respect to the filter.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Shape `[batch, depth, rows, cols, in_channels]`.
-* <b>`filter_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the tensor shape of `filter`,
- where `filter` is a 5-D
- `[filter_depth, filter_height, filter_width, in_channels, out_channels]`
- tensor.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `input`.
- Backprop signal of shape `[batch, out_depth, out_rows, out_cols,
- out_channels]`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The stride of the sliding window for each
- dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.depthwise_conv2d_native_backprop_input.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.depthwise_conv2d_native_backprop_input.md
deleted file mode 100644
index 26023f5f65..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.depthwise_conv2d_native_backprop_input.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.nn.depthwise_conv2d_native_backprop_input(input_sizes, filter, out_backprop, strides, padding, name=None)` {#depthwise_conv2d_native_backprop_input}
-
-Computes the gradients of depthwise convolution with respect to the input.
-
-##### Args:
-
-
-* <b>`input_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the shape of `input`,
- where `input` is a 4-D `[batch, height, width, channels]` tensor.
-* <b>`filter`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 4-D with shape
- `[filter_height, filter_width, in_channels, depthwise_multiplier]`.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `filter`.
- 4-D with shape `[batch, out_height, out_width, out_channels]`.
- Gradients w.r.t. the output of the convolution.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- of the convolution.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `filter`.
- 4-D with shape `[batch, in_height, in_width, in_channels]`. Gradient
- w.r.t. the input of the convolution.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.embedding_lookup.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.embedding_lookup.md
deleted file mode 100644
index a58bd7f728..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.embedding_lookup.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None, validate_indices=True, max_norm=None)` {#embedding_lookup}
-
-Looks up `ids` in a list of embedding tensors.
-
-This function is used to perform parallel lookups on the list of
-tensors in `params`. It is a generalization of
-[`tf.gather()`](../../api_docs/python/array_ops.md#gather), where `params` is
-interpreted as a partitioning of a large embedding tensor. `params` may be
-a `PartitionedVariable` as returned by using `tf.get_variable()` with a
-partitioner.
-
-If `len(params) > 1`, each element `id` of `ids` is partitioned between
-the elements of `params` according to the `partition_strategy`.
-In all strategies, if the id space does not evenly divide the number of
-partitions, each of the first `(max_id + 1) % len(params)` partitions will
-be assigned one more id.
-
-If `partition_strategy` is `"mod"`, we assign each id to partition
-`p = id % len(params)`. For instance,
-13 ids are split across 5 partitions as:
-`[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`
-
-If `partition_strategy` is `"div"`, we assign ids to partitions in a
-contiguous manner. In this case, 13 ids are split across 5 partitions as:
-`[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`
-
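-Both assignments can be sketched in plain Python (an illustration of the
-strategies described above, not the library's implementation):
-
-```python
-def partition(ids, num_partitions, max_id, strategy='mod'):
-  parts = [[] for _ in range(num_partitions)]
-  ids_per_part, extra = divmod(max_id + 1, num_partitions)
-  for i in ids:
-    if strategy == 'mod':
-      p = i % num_partitions
-    else:  # 'div': contiguous; the first `extra` partitions get one more id.
-      threshold = extra * (ids_per_part + 1)
-      if i < threshold:
-        p = i // (ids_per_part + 1)
-      else:
-        p = extra + (i - threshold) // ids_per_part
-    parts[p].append(i)
-  return parts
-
-print(partition(range(13), 5, 12, 'mod'))
-# [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]
-print(partition(range(13), 5, 12, 'div'))
-# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
-```
-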
-The results of the lookup are concatenated into a dense
-tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
-
-##### Args:
-
-
-* <b>`params`</b>: A single tensor representing the complete embedding tensor,
- or a list of P tensors all of same shape except for the first dimension,
- representing sharded embedding tensors. Alternatively, a
- `PartitionedVariable`, created by partitioning along dimension 0. Each
- element must be appropriately sized for the given `partition_strategy`.
-* <b>`ids`</b>: A `Tensor` with type `int32` or `int64` containing the ids to be looked
- up in `params`.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
- is `"mod"`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`validate_indices`</b>: Whether or not to validate gather indices.
-* <b>`max_norm`</b>: If not None, embedding values are l2-normalized to the value of
- max_norm.
-
-##### Returns:
-
- A `Tensor` with the same type as the tensors in `params`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `params` is empty.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.log_uniform_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.log_uniform_candidate_sampler.md
deleted file mode 100644
index baf9f9d421..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.log_uniform_candidate_sampler.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.nn.log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#log_uniform_candidate_sampler}
-
-Samples a set of classes using a log-uniform (Zipfian) base distribution.
-
-This operation randomly samples a tensor of sampled classes
-(`sampled_candidates`) from the range of integers `[0, range_max)`.
-
-The elements of `sampled_candidates` are drawn without replacement
-(if `unique=True`) or with replacement (if `unique=False`) from
-the base distribution.
-
-The base distribution for this operation is an approximately log-uniform
-or Zipfian distribution:
-
-`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`
-
-This sampler is useful when the target classes approximately follow such
-a distribution - for example, if the classes represent words in a lexicon
-sorted in decreasing order of frequency. If your classes are not ordered by
-decreasing frequency, do not use this op.
-
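-The base distribution sums to one by telescoping, which is easy to check
-numerically (a sketch, not the op's implementation):
-
-```python
-import numpy as np
-
-def log_uniform_prob(class_id, range_max):
-  # P(class) as defined above; classes are assumed sorted by decreasing
-  # frequency, so smaller ids get higher probability.
-  return ((np.log(class_id + 2.) - np.log(class_id + 1.)) /
-          np.log(range_max + 1.))
-
-probs = log_uniform_prob(np.arange(1000), 1000)
-assert np.isclose(probs.sum(), 1.0)  # sum telescopes to log(1001)/log(1001)
-```
-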
-In addition, this operation returns tensors `true_expected_count`
-and `sampled_expected_count` representing the number of times each
-of the target classes (`true_classes`) and the sampled
-classes (`sampled_candidates`) is expected to occur in an average
-tensor of sampled classes. These values correspond to `Q(y|x)`
-defined in [this
-document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-If `unique=True`, then these are post-rejection probabilities and we
-compute them approximately.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`unique`</b>: A `bool`. Determines whether all sampled classes in a batch are
- unique.
-* <b>`range_max`</b>: An `int`. The number of possible classes.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled classes.
-* <b>`true_expected_count`</b>: A tensor of type `float`. Same shape as
- `true_classes`. The expected counts under the sampling distribution
- of each of `true_classes`.
-* <b>`sampled_expected_count`</b>: A tensor of type `float`. Same shape as
- `sampled_candidates`. The expected counts under the sampling distribution
- of each of `sampled_candidates`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.relu.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.relu.md
deleted file mode 100644
index 5811a1da96..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.nn.relu.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.nn.relu(features, name=None)` {#relu}
-
-Computes rectified linear: `max(features, 0)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.parallel_stack.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.parallel_stack.md
deleted file mode 100644
index a9df823110..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.parallel_stack.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.parallel_stack(values, name='parallel_stack')` {#parallel_stack}
-
-Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor in parallel.
-
-Requires that the shape of inputs be known at graph construction time.
-
-Packs the list of tensors in `values` into a tensor with rank one higher than
-each tensor in `values`, by packing them along the first dimension.
-Given a list of length `N` of tensors of shape `(A, B, C)`; the `output`
-tensor will have the shape `(N, A, B, C)`.
-
-For example:
-
-```prettyprint
-# 'x' is [1, 4]
-# 'y' is [2, 5]
-# 'z' is [3, 6]
-parallel_stack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]
-```
-
-The difference between stack and parallel_stack is that stack requires all
-of the inputs be computed before the operation will begin, but doesn't require
-that the input shapes be known during graph construction. Parallel stack
-will copy pieces of the input into the output as they become available; in
-some situations this can provide a performance benefit.
-
-This is the opposite of unstack. The numpy equivalent is
-
- tf.parallel_stack([x, y, z]) = np.asarray([x, y, z])
-
-##### Args:
-
-
-* <b>`values`</b>: A list of `Tensor` objects with the same shape and type.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`output`</b>: A stacked `Tensor` with the same type as `values`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_uniform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_uniform.md
deleted file mode 100644
index 517bdd98c4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.random_uniform.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)` {#random_uniform}
-
-Outputs random values from a uniform distribution.
-
-The generated values follow a uniform distribution in the range
-`[minval, maxval)`. The lower bound `minval` is included in the range, while
-the upper bound `maxval` is excluded.
-
-For floats, the default range is `[0, 1)`. For ints, at least `maxval` must
-be specified explicitly.
-
-In the integer case, the random integers are slightly biased unless
-`maxval - minval` is an exact power of two. The bias is small for values of
-`maxval - minval` significantly smaller than the range of the output (either
-`2**32` or `2**64`).
-
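-A brief sketch of both cases (the shapes here are illustrative):
-
-```python
-floats = tf.random_uniform([2, 3])             # float32, drawn from [0, 1)
-ints = tf.random_uniform([2, 3], minval=0, maxval=10,
-                         dtype=tf.int32)       # int32, drawn from [0, 10)
-```
-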
-##### Args:
-
-
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
-* <b>`minval`</b>: A 0-D Tensor or Python value of type `dtype`. The lower bound on the
- range of random values to generate. Defaults to 0.
-* <b>`maxval`</b>: A 0-D Tensor or Python value of type `dtype`. The upper bound on
- the range of random values to generate. Defaults to 1 if `dtype` is
- floating point.
-* <b>`dtype`</b>: The type of the output: `float32`, `float64`, `int32`, or `int64`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tensor of the specified shape filled with random uniform values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `dtype` is integral and `maxval` is not specified.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.real.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.real.md
deleted file mode 100644
index 00ebad2676..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.real.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.real(input, name=None)` {#real}
-
-Returns the real part of a complex number.
-
-Given a tensor `input` of complex numbers, this operation returns a tensor of
-type `float32` or `float64` that is the real part of each element in `input`.
-All elements in `input` must be complex numbers of the form \\(a + bj\\),
-where *a* is the real part returned by this operation and *b* is the
-imaginary part.
-
-For example:
-
-```
-# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
-tf.real(input) ==> [-2.25, 3.25]
-```
-
-If `input` is already real, it is returned unchanged.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must have numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32` or `float64`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.report_uninitialized_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.report_uninitialized_variables.md
deleted file mode 100644
index e3ecdf7733..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.report_uninitialized_variables.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.report_uninitialized_variables(var_list=None, name='report_uninitialized_variables')` {#report_uninitialized_variables}
-
-Adds ops to list the names of uninitialized variables.
-
-When run, it returns a 1-D tensor containing the names of uninitialized
-variables if there are any, or an empty array if there are none.
-
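-A hedged usage sketch (`Variable` below is just the default name TensorFlow
-assigns to an unnamed variable):
-
-```python
-v = tf.Variable(tf.zeros([3]))
-uninit = tf.report_uninitialized_variables()
-with tf.Session() as sess:
-  print(sess.run(uninit))  # => [b'Variable'] before initialization
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(uninit))  # => [] afterwards
-```
-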
-##### Args:
-
-
-* <b>`var_list`</b>: List of `Variable` objects to check. Defaults to the
- value of `global_variables() + local_variables()`
-* <b>`name`</b>: Optional name of the `Operation`.
-
-##### Returns:
-
- A 1-D tensor containing names of the uninitialized variables, or an empty
- 1-D tensor if there are no variables or no uninitialized variables.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.required_space_to_batch_paddings.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.required_space_to_batch_paddings.md
deleted file mode 100644
index ac3bd931fb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.required_space_to_batch_paddings.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.required_space_to_batch_paddings(input_shape, block_shape, base_paddings=None, name=None)` {#required_space_to_batch_paddings}
-
-Calculate padding required to make block_shape divide input_shape.
-
-This function can be used to calculate a suitable paddings argument for use
-with space_to_batch_nd and batch_to_space_nd.
-
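-A small sketch under assumed shapes: for a 5x7 spatial grid and 2x2 blocks,
-the computed paddings make each spatial dimension divisible by its block size.
-
-```python
-paddings, crops = tf.required_space_to_batch_paddings(
-    input_shape=tf.constant([5, 7]),
-    block_shape=tf.constant([2, 2]))
-# paddings => [[0, 1], [0, 1]], crops => [[0, 1], [0, 1]]
-```
-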
-##### Args:
-
-
-* <b>`input_shape`</b>: int32 Tensor of shape [N].
-* <b>`block_shape`</b>: int32 Tensor of shape [N].
-* <b>`base_paddings`</b>: Optional int32 Tensor of shape [N, 2]. Specifies the minimum
- amount of padding to use. All elements must be >= 0. If not specified,
- defaults to 0.
-* <b>`name`</b>: string. Optional name prefix.
-
-##### Returns:
-
- (paddings, crops), where:
-
-    `paddings` and `crops` are int32 Tensors of rank 2 and shape [N, 2],
-    satisfying:
-
- paddings[i, 0] = base_paddings[i, 0].
- 0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i]
- (input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0
-
- crops[i, 0] = 0
- crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]
-
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If called with incompatible shapes.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scatter_nd_sub.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scatter_nd_sub.md
deleted file mode 100644
index 1d16c8e06c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scatter_nd_sub.md
+++ /dev/null
@@ -1,61 +0,0 @@
-### `tf.scatter_nd_sub(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_sub}
-
-Applies sparse subtraction between `updates` and individual values or slices
-within a given variable according to `indices`.
-
-`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
-
-`indices` must be an integer tensor, containing indices into `ref`.
-It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
-dimension of `ref`.
-
-`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
-
-```
-[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
-```
-
-For example, say we want to subtract 4 scattered elements from a rank-1 tensor
-with 8 elements. In Python, that subtraction would look like this:
-
- ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
- indices = tf.constant([[4], [3], [1], [7]])
- updates = tf.constant([9, 10, 11, 12])
- sub = tf.scatter_nd_sub(ref, indices, updates)
-    with tf.Session() as sess:
-      sess.run(tf.global_variables_initializer())
-      print(sess.run(sub))
-
-The resulting update to ref would look like this:
-
- [1, -9, 3, -6, -4, 6, 7, -4]
-
-See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to
-slices.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- A mutable Tensor. Should be from a Variable node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-    A tensor of indices into `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
-    A tensor of updated values to subtract from `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`. If `True`, the
-    subtraction will be protected by a lock; otherwise the behavior is
-    undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A mutable `Tensor`. Has the same type as `ref`.
- Same as ref. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scatter_update.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scatter_update.md
deleted file mode 100644
index 880b740b16..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.scatter_update.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.scatter_update(ref, indices, updates, use_locking=None, name=None)` {#scatter_update}
-
-Applies sparse updates to a variable reference.
-
-This operation computes
-
- # Scalar indices
- ref[indices, ...] = updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] = updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the reset value.
-
-If values in `ref` are to be updated more than once, because there are
-duplicate entries in `indices`, the order in which the updates happen
-for each value is undefined.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterUpdate.png" alt>
-</div>
-
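-A hedged sketch mirroring the vector-indices case above:
-
-```python
-ref = tf.Variable([1, 2, 3, 4, 5, 6])
-update = tf.scatter_update(ref, tf.constant([0, 2]), tf.constant([9, 10]))
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(update))  # => [9, 2, 10, 4, 5, 6]
-```
-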
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of updated values to store in `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `True`.
- If True, the assignment will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.setdiff1d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.setdiff1d.md
deleted file mode 100644
index 3bd95f13c5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.setdiff1d.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.setdiff1d(x, y, index_dtype=tf.int32, name=None)` {#setdiff1d}
-
-Computes the difference between two lists of numbers or strings.
-
-Given a list `x` and a list `y`, this operation returns a list `out` that
-represents all values that are in `x` but not in `y`. The returned list `out`
-is sorted in the same order that the numbers appear in `x` (duplicates are
-preserved). This operation also returns a list `idx` that represents the
-position of each `out` element in `x`. In other words:
-
-`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`
-
-For example, given this input:
-
-```prettyprint
-x = [1, 2, 3, 4, 5, 6]
-y = [1, 3, 5]
-```
-
-This operation would return:
-
-```prettyprint
-out ==> [2, 4, 6]
-idx ==> [1, 3, 5]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. 1-D. Values to keep.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.
-* <b>`index_dtype`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-    The dtype of the returned `idx` tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (out, idx).
-
-* <b>`out`</b>: A `Tensor`. Has the same type as `x`. 1-D. Values present in `x` but not in `y`.
-* <b>`idx`</b>: A `Tensor` of type `index_dtype`. 1-D. Positions of `x` values preserved in `out`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.shape_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.shape_n.md
deleted file mode 100644
index 5a5eca2762..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.shape_n.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.shape_n(input, out_type=None, name=None)` {#shape_n}
-
-Returns the shapes of tensors.
-
-This operation returns `N` 1-D integer tensors representing the shapes of the
-tensors in `input`.
-
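-A minimal sketch (the shapes are illustrative):
-
-```python
-a = tf.ones([2, 3])
-b = tf.ones([4])
-shape_a, shape_b = tf.shape_n([a, b])  # => [2, 3] and [4], both int32
-```
-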
-##### Args:
-
-
-* <b>`input`</b>: A list of at least 1 `Tensor` objects of the same type.
-* <b>`out_type`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-  A list, with the same length as `input`, of `Tensor` objects of type `out_type`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sin.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sin.md
deleted file mode 100644
index f69c58bee0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sin.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.sin(x, name=None)` {#sin}
-
-Computes sin of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.space_to_batch_nd.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.space_to_batch_nd.md
deleted file mode 100644
index 7ab9e70475..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.space_to_batch_nd.md
+++ /dev/null
@@ -1,137 +0,0 @@
-### `tf.space_to_batch_nd(input, block_shape, paddings, name=None)` {#space_to_batch_nd}
-
-SpaceToBatch for N-D tensors of type T.
-
-This operation divides "spatial" dimensions `[1, ..., M]` of the input into a
-grid of blocks of shape `block_shape`, and interleaves these blocks with the
-"batch" dimension (0) such that in the output, the spatial dimensions
-`[1, ..., M]` correspond to the position within the grid, and the batch
-dimension combines both the position within a spatial block and the original
-batch position. Prior to division into blocks, the spatial dimensions of the
-input are optionally zero padded according to `paddings`. See below for a
-precise description.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
- N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,
- where spatial_shape has `M` dimensions.
-* <b>`block_shape`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D with shape `[M]`, all values must be >= 1.
-* <b>`paddings`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 2-D with shape `[M, 2]`, all values must be >= 0.
- `paddings[i] = [pad_start, pad_end]` specifies the padding for input dimension
- `i + 1`, which corresponds to spatial dimension `i`. It is required that
- `block_shape[i]` divides `input_shape[i + 1] + pad_start + pad_end`.
-
- This operation is equivalent to the following steps:
-
- 1. Zero-pad the start and end of dimensions `[1, ..., M]` of the
- input according to `paddings` to produce `padded` of shape `padded_shape`.
-
- 2. Reshape `padded` to `reshaped_padded` of shape:
-
- [batch] +
- [padded_shape[1] / block_shape[0],
- block_shape[0],
- ...,
- padded_shape[M] / block_shape[M-1],
- block_shape[M-1]] +
- remaining_shape
-
- 3. Permute dimensions of `reshaped_padded` to produce
- `permuted_reshaped_padded` of shape:
-
- block_shape +
- [batch] +
- [padded_shape[1] / block_shape[0],
- ...,
- padded_shape[M] / block_shape[M-1]] +
- remaining_shape
-
- 4. Reshape `permuted_reshaped_padded` to flatten `block_shape` into the batch
- dimension, producing an output tensor of shape:
-
- [batch * prod(block_shape)] +
- [padded_shape[1] / block_shape[0],
- ...,
- padded_shape[M] / block_shape[M-1]] +
- remaining_shape
-
- Some examples:
-
- (1) For the following input of shape `[1, 2, 2, 1]`, `block_shape = [2, 2]`, and
- `paddings = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- x = [[[[1], [2]], [[3], [4]]]]
- ```
-
- The output tensor has shape `[4, 1, 1, 1]` and value:
-
- ```prettyprint
- [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
- ```
-
- (2) For the following input of shape `[1, 2, 2, 3]`, `block_shape = [2, 2]`, and
- `paddings = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
- ```
-
- The output tensor has shape `[4, 1, 1, 3]` and value:
-
- ```prettyprint
- [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
- ```
-
- (3) For the following input of shape `[1, 4, 4, 1]`, `block_shape = [2, 2]`, and
- `paddings = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]],
- [[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
- The output tensor has shape `[4, 2, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [3]], [[9], [11]]],
- [[[2], [4]], [[10], [12]]],
- [[[5], [7]], [[13], [15]]],
- [[[6], [8]], [[14], [16]]]]
- ```
-
- (4) For the following input of shape `[2, 2, 4, 1]`, block_shape = `[2, 2]`, and
- paddings = `[[0, 0], [2, 0]]`:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]]],
- [[[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
- The output tensor has shape `[8, 1, 3, 1]` and value:
-
- ```prettyprint
- x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
- [[[0], [2], [4]]], [[[0], [10], [12]]],
- [[[0], [5], [7]]], [[[0], [13], [15]]],
- [[[0], [6], [8]]], [[[0], [14], [16]]]]
- ```
-
- Among others, this operation is useful for reducing atrous convolution into
- regular convolution.
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sparse_placeholder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sparse_placeholder.md
deleted file mode 100644
index c1fa1d12e6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.sparse_placeholder.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.sparse_placeholder(dtype, shape=None, name=None)` {#sparse_placeholder}
-
-Inserts a placeholder for a sparse tensor that will always be fed.
-
-**Important**: This sparse tensor will produce an error if evaluated.
-Its value must be fed using the `feed_dict` optional argument to
-`Session.run()`, `Tensor.eval()`, or `Operation.run()`.
-
-For example:
-
-```python
-x = tf.sparse_placeholder(tf.float32)
-y = tf.sparse_reduce_sum(x)
-
-with tf.Session() as sess:
- print(sess.run(y)) # ERROR: will fail because x was not fed.
-
- indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)
- values = np.array([1.0, 2.0], dtype=np.float32)
- shape = np.array([7, 9, 2], dtype=np.int64)
- print(sess.run(y, feed_dict={
- x: tf.SparseTensorValue(indices, values, shape)})) # Will succeed.
- print(sess.run(y, feed_dict={
- x: (indices, values, shape)})) # Will succeed.
-
- sp = tf.SparseTensor(indices=indices, values=values, dense_shape=shape)
-  sp_value = sp.eval(session=sess)
- print(sess.run(y, feed_dict={x: sp_value})) # Will succeed.
-```
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of `values` elements in the tensor to be fed.
-* <b>`shape`</b>: The shape of the tensor to be fed (optional). If the shape is not
- specified, you can feed a sparse tensor of any shape.
-* <b>`name`</b>: A name for prefixing the operations (optional).
-
-##### Returns:
-
- A `SparseTensor` that may be used as a handle for feeding a value, but not
- evaluated directly.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.split.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.split.md
deleted file mode 100644
index 06c6461b83..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.split.md
+++ /dev/null
@@ -1,53 +0,0 @@
-### `tf.split(value, num_or_size_splits, axis=0, num=None, name='split')` {#split}
-
-Splits a tensor into sub-tensors.
-
-If `num_or_size_splits` is a scalar, `num_split`, then splits `value` along
-dimension `axis` into `num_split` smaller tensors.
-Requires that `num_split` evenly divides `value.shape[axis]`.
-
-If `num_or_size_splits` is a tensor, `size_splits`, then splits `value` into
-`len(size_splits)` pieces. The shape of the `i`-th piece has the same size as
-the `value` except along dimension `axis` where the size is `size_splits[i]`.
-
-For example:
-
-```python
-# 'value' is a tensor with shape [5, 30]
-# Split 'value' into 3 tensors with sizes [4, 15, 11] along dimension 1
-split0, split1, split2 = tf.split(value, [4, 15, 11], 1)
-tf.shape(split0) ==> [5, 4]
-tf.shape(split1) ==> [5, 15]
-tf.shape(split2) ==> [5, 11]
-# Split 'value' into 3 tensors along dimension 1
-split0, split1, split2 = tf.split(value, num_or_size_splits=3, axis=1)
-tf.shape(split0) ==> [5, 10]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: The `Tensor` to split.
-* <b>`num_or_size_splits`</b>: Either an integer indicating the number of splits along
-    `axis` or a 1-D Tensor containing the sizes of each output tensor
-    along `axis`. If an integer, then it must evenly divide
-    `value.shape[axis]`; otherwise the sum of sizes along the split
-    dimension must match that of the `value`.
-* <b>`axis`</b>: A 0-D `int32` `Tensor`. The dimension along which to split.
- Must be in the range `[0, rank(value))`. Defaults to 0.
-* <b>`num`</b>: Optional, used to specify the number of outputs when it cannot be
- inferred from the shape of `size_splits`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-  If `num_or_size_splits` is a scalar, returns `num_or_size_splits` `Tensor`
-  objects; if `num_or_size_splits` is a 1-D Tensor, returns
-  `num_or_size_splits.get_shape()[0]` `Tensor` objects resulting from splitting
-  `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `num` is unspecified and cannot be inferred.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.squeeze.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.squeeze.md
deleted file mode 100644
index 90a1b9af82..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.squeeze.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.squeeze(input, axis=None, name=None, squeeze_dims=None)` {#squeeze}
-
-Removes dimensions of size 1 from the shape of a tensor.
-
-Given a tensor `input`, this operation returns a tensor of the same type with
-all dimensions of size 1 removed. If you don't want to remove all size 1
-dimensions, you can remove specific size 1 dimensions by specifying
-`axis`.
-
-For example:
-
-```prettyprint
-# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
-shape(squeeze(t)) ==> [2, 3]
-```
-
-Or, to remove specific size 1 dimensions:
-
-```prettyprint
-# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
-shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. The `input` to squeeze.
-* <b>`axis`</b>: An optional list of `ints`. Defaults to `[]`.
- If specified, only squeezes the dimensions listed. The dimension
- index starts at 0. It is an error to squeeze a dimension that is not 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`squeeze_dims`</b>: Deprecated keyword argument that is now axis.
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- Contains the same data as `input`, but has one or more dimensions of
- size 1 removed.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When both `squeeze_dims` and `axis` are specified.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.string_split.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.string_split.md
deleted file mode 100644
index 08ccc5f104..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.string_split.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.string_split(source, delimiter=' ')` {#string_split}
-
-Split elements of `source` based on `delimiter` into a `SparseTensor`.
-
-Let N be the size of source (typically N will be the batch size). Split each
-element of `source` based on `delimiter` and return a `SparseTensor`
-containing the split tokens. Empty tokens are ignored.
-
-If `delimiter` is an empty string, each element of the `source` is split
-into individual strings, each containing one byte. (This includes splitting
-multibyte sequences of UTF-8.) If delimiter contains multiple bytes, it is
-treated as a set of delimiters with each considered a potential split point.
-
-For example, if N = 2, source[0] is 'hello world' and source[1] is 'a b c',
-then the output will be:
-
-    st.indices = [0, 0;
-                  0, 1;
-                  1, 0;
-                  1, 1;
-                  1, 2]
-    st.shape = [2, 3]
-    st.values = ['hello', 'world', 'a', 'b', 'c']
-
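-A hedged sketch of the same example:
-
-```python
-source = tf.constant(['hello world', 'a b c'])
-st = tf.string_split(source)  # the delimiter defaults to a single space
-```
-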
-##### Args:
-
-
-* <b>`source`</b>: `1-D` string `Tensor`, the strings to split.
-* <b>`delimiter`</b>: `0-D` string `Tensor`, the delimiter character; the string should
-    be of length 0 or 1.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If delimiter is not a string.
-
-##### Returns:
-
- A `SparseTensor` of rank `2`, the strings split according to the delimiter.
- The first column of the indices corresponds to the row in `source` and the
- second column corresponds to the index of the split component in this row.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.SummaryDescription.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.SummaryDescription.md
deleted file mode 100644
index bce704ef4f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.SummaryDescription.md
+++ /dev/null
@@ -1,245 +0,0 @@
-
-- - -
-
-#### `tf.summary.SummaryDescription.ByteSize()` {#SummaryDescription.ByteSize}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.Clear()` {#SummaryDescription.Clear}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ClearExtension(extension_handle)` {#SummaryDescription.ClearExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ClearField(field_name)` {#SummaryDescription.ClearField}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.CopyFrom(other_msg)` {#SummaryDescription.CopyFrom}
-
-Copies the content of the specified message into the current message.
-
-The method clears the current message and then merges the specified
-message using MergeFrom.
-
-##### Args:
-
-
-* <b>`other_msg`</b>: Message to copy into the current one.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.DiscardUnknownFields()` {#SummaryDescription.DiscardUnknownFields}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.FindInitializationErrors()` {#SummaryDescription.FindInitializationErrors}
-
-Finds required fields which are not initialized.
-
-##### Returns:
-
- A list of strings. Each string is a path to an uninitialized field from
- the top-level message, e.g. "foo.bar[5].baz".
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.HasExtension(extension_handle)` {#SummaryDescription.HasExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.HasField(field_name)` {#SummaryDescription.HasField}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.IsInitialized(errors=None)` {#SummaryDescription.IsInitialized}
-
-Checks if all required fields of a message are set.
-
-##### Args:
-
-
-* <b>`errors`</b>: A list which, if provided, will be populated with the field
- paths of all missing required fields.
-
-##### Returns:
-
- True iff the specified message has all required fields set.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ListFields()` {#SummaryDescription.ListFields}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.MergeFrom(msg)` {#SummaryDescription.MergeFrom}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.MergeFromString(serialized)` {#SummaryDescription.MergeFromString}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ParseFromString(serialized)` {#SummaryDescription.ParseFromString}
-
-Parse serialized protocol buffer data into this message.
-
-Like MergeFromString(), except we clear the object first and
-do not return the value that MergeFromString returns.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.SerializePartialToString()` {#SummaryDescription.SerializePartialToString}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.SerializeToString()` {#SummaryDescription.SerializeToString}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.SetInParent()` {#SummaryDescription.SetInParent}
-
-Sets the _cached_byte_size_dirty bit to true,
-and propagates this to our listener iff this was a state change.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.WhichOneof(oneof_name)` {#SummaryDescription.WhichOneof}
-
-Returns the name of the currently set field inside a oneof, or None.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__deepcopy__(memo=None)` {#SummaryDescription.__deepcopy__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__eq__(other)` {#SummaryDescription.__eq__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__getstate__()` {#SummaryDescription.__getstate__}
-
-Support the pickle protocol.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__hash__()` {#SummaryDescription.__hash__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__init__(**kwargs)` {#SummaryDescription.__init__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__ne__(other_msg)` {#SummaryDescription.__ne__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__repr__()` {#SummaryDescription.__repr__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__setstate__(state)` {#SummaryDescription.__setstate__}
-
-Support the pickle protocol.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__str__()` {#SummaryDescription.__str__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__unicode__()` {#SummaryDescription.__unicode__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.type_hint` {#SummaryDescription.type_hint}
-
-Magic attribute generated for "type_hint" proto field.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.audio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.audio.md
deleted file mode 100644
index c7edb74291..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.audio.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.summary.audio(name, tensor, sample_rate, max_outputs=3, collections=None)` {#audio}
-
-Outputs a `Summary` protocol buffer with audio.
-
-The summary has up to `max_outputs` summary values containing audio. The
-audio is built from `tensor` which must be 3-D with shape `[batch_size,
-frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are
-assumed to be in the range of `[-1.0, 1.0]` with a sample rate of
-`sample_rate`.
-
-The `tag` in the output Summary.Value protobufs is generated based on the
-name, with a suffix depending on the max_outputs setting:
-
-* If `max_outputs` is 1, the summary value tag is '*name*/audio'.
-* If `max_outputs` is greater than 1, the summary value tags are
-  generated sequentially as '*name*/audio/0', '*name*/audio/1', etc.
-
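-A hedged sketch; the `waveform` placeholder and the sample rate below are
-assumptions for illustration:
-
-```python
-waveform = tf.placeholder(tf.float32, shape=[None, 16000])  # [batch, frames]
-audio_summ = tf.summary.audio('speech', waveform, sample_rate=16000)
-```
-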
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as a series name in
- TensorBoard.
-* <b>`tensor`</b>: A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]`
- or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`.
-* <b>`sample_rate`</b>: A scalar `float32` `Tensor` indicating the sample rate of the
- signal in hertz.
-* <b>`max_outputs`</b>: Max number of batch elements to generate audio for.
-* <b>`collections`</b>: Optional list of ops.GraphKeys. The collections to add the
-    summary to. Defaults to `[GraphKeys.SUMMARIES]`.
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.tensor_summary.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.tensor_summary.md
deleted file mode 100644
index 3fb19c2601..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.summary.tensor_summary.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.summary.tensor_summary(name, tensor, summary_description=None, collections=None)` {#tensor_summary}
-
-Outputs a `Summary` protocol buffer with a serialized tensor.proto.
-
-The generated
-[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
-has one summary value containing the input tensor.
-
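-A minimal sketch:
-
-```python
-t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
-summ = tf.summary.tensor_summary('my_tensor', t)
-```
-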
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as the series name in
- TensorBoard.
-* <b>`tensor`</b>: A tensor of any type and shape to serialize.
-* <b>`summary_description`</b>: Optional summary_pb2.SummaryDescription()
-* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
- added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.tables_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.tables_initializer.md
deleted file mode 100644
index f278bd57e6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.tables_initializer.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.tables_initializer(name='init_all_tables')` {#tables_initializer}
-
-Returns an Op that initializes all tables of the default graph.
-
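-A minimal sketch:
-
-```python
-init_tables = tf.tables_initializer()
-with tf.Session() as sess:
-  sess.run(init_tables)  # a NoOp if the graph has no tables
-```
-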
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the initialization op.
-
-##### Returns:
-
- An Op that initializes all tables. Note that if there are
-  no tables the returned Op is a NoOp.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.TestCase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.TestCase.md
deleted file mode 100644
index 0e63e0d708..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.TestCase.md
+++ /dev/null
@@ -1,875 +0,0 @@
-Base class for tests that need to test TensorFlow.
-- - -
-
-#### `tf.test.TestCase.__call__(*args, **kwds)` {#TestCase.__call__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__eq__(other)` {#TestCase.__eq__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__hash__()` {#TestCase.__hash__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__init__(methodName='runTest')` {#TestCase.__init__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__ne__(other)` {#TestCase.__ne__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__repr__()` {#TestCase.__repr__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__str__()` {#TestCase.__str__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.addCleanup(function, *args, **kwargs)` {#TestCase.addCleanup}
-
-Add a function, with arguments, to be called when the test is
-completed. Functions added are called on a LIFO basis and are
-called after tearDown on test failure or success.
-
-Cleanup items are called even if setUp fails (unlike tearDown).
-
-
-- - -
-
-#### `tf.test.TestCase.addTypeEqualityFunc(typeobj, function)` {#TestCase.addTypeEqualityFunc}
-
-Add a type specific assertEqual style function to compare a type.
-
-This method is for use by TestCase subclasses that need to register
-their own type equality functions to provide nicer error messages.
-
-##### Args:
-
-
-* <b>`typeobj`</b>: The data type to call this function on when both values
- are of the same type in assertEqual().
-* <b>`function`</b>: The callable taking two arguments and an optional
- msg= argument that raises self.failureException with a
- useful error message when the two arguments are not equal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertAllClose(a, b, rtol=1e-06, atol=1e-06)` {#TestCase.assertAllClose}
-
-Asserts that two numpy arrays have near values.
-
-##### Args:
-
-
-* <b>`a`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`b`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`rtol`</b>: relative tolerance
-* <b>`atol`</b>: absolute tolerance
-
-
-- - -
-
-#### `tf.test.TestCase.assertAllCloseAccordingToType(a, b, rtol=1e-06, atol=1e-06, float_rtol=1e-06, float_atol=1e-06, half_rtol=0.001, half_atol=0.001)` {#TestCase.assertAllCloseAccordingToType}
-
-Like assertAllClose, but also suitable for comparing fp16 arrays.
-
-In particular, the tolerance is reduced to 1e-3 if at least
-one of the arguments is of type float16.
-
-##### Args:
-
-
-* <b>`a`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`b`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`rtol`</b>: relative tolerance
-* <b>`atol`</b>: absolute tolerance
-* <b>`float_rtol`</b>: relative tolerance for float32
-* <b>`float_atol`</b>: absolute tolerance for float32
-* <b>`half_rtol`</b>: relative tolerance for float16
-* <b>`half_atol`</b>: absolute tolerance for float16
-
-
-- - -
-
-#### `tf.test.TestCase.assertAllEqual(a, b)` {#TestCase.assertAllEqual}
-
-Asserts that two numpy arrays have the same values.
-
-##### Args:
-
-
-* <b>`a`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`b`</b>: a numpy ndarray or anything that can be converted to one.
-
-
-- - -
-
-#### `tf.test.TestCase.assertAlmostEqual(first, second, places=None, msg=None, delta=None)` {#TestCase.assertAlmostEqual}
-
-Fail if the two objects are unequal as determined by their
-difference rounded to the given number of decimal places
-(default 7) and comparing to zero, or by comparing that the
-difference between the two objects is more than the given delta.
-
-Note that decimal places (from zero) are usually not the same
-as significant digits (measured from the most significant digit).
-
-If the two objects compare equal then they will automatically
-compare almost equal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertAlmostEquals(first, second, places=None, msg=None, delta=None)` {#TestCase.assertAlmostEquals}
-
-Fail if the two objects are unequal as determined by their
-difference rounded to the given number of decimal places
-(default 7) and comparing to zero, or by comparing that the
-difference between the two objects is more than the given delta.
-
-Note that decimal places (from zero) are usually not the same
-as significant digits (measured from the most significant digit).
-
-If the two objects compare equal then they will automatically
-compare almost equal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertArrayNear(farray1, farray2, err)` {#TestCase.assertArrayNear}
-
-Asserts that two float arrays are near each other.
-
-Checks that for all elements of farray1 and farray2
-|f1 - f2| < err. Asserts a test failure if not.
-
-##### Args:
-
-
-* <b>`farray1`</b>: a list of float values.
-* <b>`farray2`</b>: a list of float values.
-* <b>`err`</b>: a float value.
-
-
-- - -
-
-#### `tf.test.TestCase.assertDeviceEqual(device1, device2)` {#TestCase.assertDeviceEqual}
-
-Asserts that the two given devices are the same.
-
-##### Args:
-
-
-* <b>`device1`</b>: A string device name or TensorFlow `DeviceSpec` object.
-* <b>`device2`</b>: A string device name or TensorFlow `DeviceSpec` object.
-
-
-- - -
-
-#### `tf.test.TestCase.assertDictContainsSubset(expected, actual, msg=None)` {#TestCase.assertDictContainsSubset}
-
-Checks whether actual is a superset of expected.
-
-
-- - -
-
-#### `tf.test.TestCase.assertDictEqual(d1, d2, msg=None)` {#TestCase.assertDictEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.assertEqual(first, second, msg=None)` {#TestCase.assertEqual}
-
-Fail if the two objects are unequal as determined by the '=='
-operator.
-
-
-- - -
-
-#### `tf.test.TestCase.assertEquals(first, second, msg=None)` {#TestCase.assertEquals}
-
-Fail if the two objects are unequal as determined by the '=='
-operator.
-
-
-- - -
-
-#### `tf.test.TestCase.assertFalse(expr, msg=None)` {#TestCase.assertFalse}
-
-Check that the expression is false.
-
-
-- - -
-
-#### `tf.test.TestCase.assertGreater(a, b, msg=None)` {#TestCase.assertGreater}
-
-Just like self.assertTrue(a > b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertGreaterEqual(a, b, msg=None)` {#TestCase.assertGreaterEqual}
-
-Just like self.assertTrue(a >= b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIn(member, container, msg=None)` {#TestCase.assertIn}
-
-Just like self.assertTrue(a in b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIs(expr1, expr2, msg=None)` {#TestCase.assertIs}
-
-Just like self.assertTrue(a is b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIsInstance(obj, cls, msg=None)` {#TestCase.assertIsInstance}
-
-Same as self.assertTrue(isinstance(obj, cls)), with a nicer
-default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIsNone(obj, msg=None)` {#TestCase.assertIsNone}
-
-Same as self.assertTrue(obj is None), with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIsNot(expr1, expr2, msg=None)` {#TestCase.assertIsNot}
-
-Just like self.assertTrue(a is not b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIsNotNone(obj, msg=None)` {#TestCase.assertIsNotNone}
-
-Included for symmetry with assertIsNone.
-
-
-- - -
-
-#### `tf.test.TestCase.assertItemsEqual(expected_seq, actual_seq, msg=None)` {#TestCase.assertItemsEqual}
-
-An unordered sequence specific comparison. It asserts that
-actual_seq and expected_seq have the same element counts.
-Equivalent to::
-
- self.assertEqual(Counter(iter(actual_seq)),
- Counter(iter(expected_seq)))
-
-Asserts that each element has the same count in both sequences.
-
-##### Example:
-
- - [0, 1, 1] and [1, 0, 1] compare equal.
- - [0, 0, 1] and [0, 1] compare unequal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertLess(a, b, msg=None)` {#TestCase.assertLess}
-
-Just like self.assertTrue(a < b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertLessEqual(a, b, msg=None)` {#TestCase.assertLessEqual}
-
-Just like self.assertTrue(a <= b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertListEqual(list1, list2, msg=None)` {#TestCase.assertListEqual}
-
-A list-specific equality assertion.
-
-##### Args:
-
-
-* <b>`list1`</b>: The first list to compare.
-* <b>`list2`</b>: The second list to compare.
-* <b>`msg`</b>: Optional message to use on failure instead of a list of
- differences.
-
-
-- - -
-
-#### `tf.test.TestCase.assertMultiLineEqual(first, second, msg=None)` {#TestCase.assertMultiLineEqual}
-
-Assert that two multi-line strings are equal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNDArrayNear(ndarray1, ndarray2, err)` {#TestCase.assertNDArrayNear}
-
-Asserts that two numpy arrays have near values.
-
-##### Args:
-
-
-* <b>`ndarray1`</b>: a numpy ndarray.
-* <b>`ndarray2`</b>: a numpy ndarray.
-* <b>`err`</b>: a float. The maximum absolute difference allowed.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNear(f1, f2, err, msg=None)` {#TestCase.assertNear}
-
-Asserts that two floats are near each other.
-
-Checks that |f1 - f2| < err and asserts a test failure
-if not.
-
-##### Args:
-
-
-* <b>`f1`</b>: A float value.
-* <b>`f2`</b>: A float value.
-* <b>`err`</b>: A float value.
-* <b>`msg`</b>: An optional string message to append to the failure message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)` {#TestCase.assertNotAlmostEqual}
-
-Fail if the two objects are equal as determined by their
-difference rounded to the given number of decimal places
-(default 7) and comparing to zero, or by comparing that the
-difference between the two objects is less than the given delta.
-
-Note that decimal places (from zero) are usually not the same
-as significant digits (measured from the most significant digit).
-
-Objects that are equal automatically fail.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotAlmostEquals(first, second, places=None, msg=None, delta=None)` {#TestCase.assertNotAlmostEquals}
-
-Fail if the two objects are equal as determined by their
-difference rounded to the given number of decimal places
-(default 7) and comparing to zero, or by comparing that the
-difference between the two objects is less than the given delta.
-
-Note that decimal places (from zero) are usually not the same
-as significant digits (measured from the most significant digit).
-
-Objects that are equal automatically fail.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotEqual(first, second, msg=None)` {#TestCase.assertNotEqual}
-
-Fail if the two objects are equal as determined by the '!='
-operator.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotEquals(first, second, msg=None)` {#TestCase.assertNotEquals}
-
-Fail if the two objects are equal as determined by the '!='
-operator.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotIn(member, container, msg=None)` {#TestCase.assertNotIn}
-
-Just like self.assertTrue(a not in b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotIsInstance(obj, cls, msg=None)` {#TestCase.assertNotIsInstance}
-
-Included for symmetry with assertIsInstance.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotRegexpMatches(text, unexpected_regexp, msg=None)` {#TestCase.assertNotRegexpMatches}
-
-Fail the test if the text matches the regular expression.
-
-
-- - -
-
-#### `tf.test.TestCase.assertProtoEquals(expected_message_maybe_ascii, message)` {#TestCase.assertProtoEquals}
-
-Asserts that `message` is the same as the parsed `expected_message_maybe_ascii`.
-
-Creates another prototype of message, reads the ascii message into it and
-then compares them using self._AssertProtoEqual().
-
-##### Args:
-
-
-* <b>`expected_message_maybe_ascii`</b>: proto message in original or ascii form
-* <b>`message`</b>: the message to validate
-
-
-- - -
-
-#### `tf.test.TestCase.assertProtoEqualsVersion(expected, actual, producer=21, min_consumer=0)` {#TestCase.assertProtoEqualsVersion}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.assertRaises(excClass, callableObj=None, *args, **kwargs)` {#TestCase.assertRaises}
-
-Fail unless an exception of class excClass is raised
-by callableObj when invoked with arguments args and keyword
-arguments kwargs. If a different type of exception is
-raised, it will not be caught, and the test case will be
-deemed to have suffered an error, exactly as for an
-unexpected exception.
-
-If called with callableObj omitted or None, will return a
-context object used like this::
-
- with self.assertRaises(SomeException):
- do_something()
-
-The context manager keeps a reference to the exception as
-the 'exception' attribute. This allows you to inspect the
-exception after the assertion::
-
- with self.assertRaises(SomeException) as cm:
- do_something()
- the_exception = cm.exception
- self.assertEqual(the_exception.error_code, 3)
-
-
-- - -
-
-#### `tf.test.TestCase.assertRaisesOpError(expected_err_re_or_predicate)` {#TestCase.assertRaisesOpError}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.assertRaisesRegexp(expected_exception, expected_regexp, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesRegexp}
-
-Asserts that the message in a raised exception matches a regexp.
-
-##### Args:
-
-
-* <b>`expected_exception`</b>: Exception class expected to be raised.
-* <b>`expected_regexp`</b>: Regexp (re pattern object or string) expected
- to be found in error message.
-* <b>`callable_obj`</b>: Function to be called.
-* <b>`args`</b>: Extra args.
-* <b>`kwargs`</b>: Extra kwargs.
-
-
-- - -
-
-#### `tf.test.TestCase.assertRaisesWithPredicateMatch(exception_type, expected_err_re_or_predicate)` {#TestCase.assertRaisesWithPredicateMatch}
-
-Returns a context manager to enclose code expected to raise an exception.
-
-If the exception is an OpError, the op stack is also included in the message
-predicate search.
-
-##### Args:
-
-
-* <b>`exception_type`</b>: The expected type of exception that should be raised.
-* <b>`expected_err_re_or_predicate`</b>: If this is callable, it should be a function
- of one argument that inspects the passed-in exception and
- returns True (success) or False (please fail the test). Otherwise, the
- error message is expected to match this regular expression partially.
-
-##### Returns:
-
- A context manager to surround code that is expected to raise an
- exception.
-
-
-- - -
-
-#### `tf.test.TestCase.assertRegexpMatches(text, expected_regexp, msg=None)` {#TestCase.assertRegexpMatches}
-
-Fail the test unless the text matches the regular expression.
-
-
-- - -
-
-#### `tf.test.TestCase.assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)` {#TestCase.assertSequenceEqual}
-
-An equality assertion for ordered sequences (like lists and tuples).
-
-For the purposes of this function, a valid ordered sequence type is one
-which can be indexed, has a length, and has an equality operator.
-
-##### Args:
-
-
-* <b>`seq1`</b>: The first sequence to compare.
-* <b>`seq2`</b>: The second sequence to compare.
-* <b>`seq_type`</b>: The expected datatype of the sequences, or None if no
- datatype should be enforced.
-* <b>`msg`</b>: Optional message to use on failure instead of a list of
- differences.
-
-
-- - -
-
-#### `tf.test.TestCase.assertSetEqual(set1, set2, msg=None)` {#TestCase.assertSetEqual}
-
-A set-specific equality assertion.
-
-##### Args:
-
-
-* <b>`set1`</b>: The first set to compare.
-* <b>`set2`</b>: The second set to compare.
-* <b>`msg`</b>: Optional message to use on failure instead of a list of
- differences.
-
-assertSetEqual uses ducktyping to support different types of sets, and
-is optimized for sets specifically (parameters must support a
-difference method).
-
-
-- - -
-
-#### `tf.test.TestCase.assertShapeEqual(np_array, tf_tensor)` {#TestCase.assertShapeEqual}
-
-Asserts that a Numpy ndarray and a TensorFlow tensor have the same shape.
-
-##### Args:
-
-
-* <b>`np_array`</b>: A Numpy ndarray or Numpy scalar.
-* <b>`tf_tensor`</b>: A Tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the arguments have the wrong type.
-
-
-- - -
-
-#### `tf.test.TestCase.assertStartsWith(actual, expected_start, msg=None)` {#TestCase.assertStartsWith}
-
-Assert that actual.startswith(expected_start) is True.
-
-##### Args:
-
-
-* <b>`actual`</b>: str
-* <b>`expected_start`</b>: str
-* <b>`msg`</b>: Optional message to report on failure.
-
-
-- - -
-
-#### `tf.test.TestCase.assertTrue(expr, msg=None)` {#TestCase.assertTrue}
-
-Check that the expression is true.
-
-
-- - -
-
-#### `tf.test.TestCase.assertTupleEqual(tuple1, tuple2, msg=None)` {#TestCase.assertTupleEqual}
-
-A tuple-specific equality assertion.
-
-##### Args:
-
-
-* <b>`tuple1`</b>: The first tuple to compare.
-* <b>`tuple2`</b>: The second tuple to compare.
-* <b>`msg`</b>: Optional message to use on failure instead of a list of
- differences.
-
-
-- - -
-
-#### `tf.test.TestCase.assert_(expr, msg=None)` {#TestCase.assert_}
-
-Check that the expression is true.
-
-
-- - -
-
-#### `tf.test.TestCase.checkedThread(target, args=None, kwargs=None)` {#TestCase.checkedThread}
-
-Returns a Thread wrapper that asserts 'target' completes successfully.
-
-This method should be used to create all threads in test cases, as
-otherwise there is a risk that a thread will silently fail, and/or
-assertions made in the thread will not be respected.
-
-##### Args:
-
-
-* <b>`target`</b>: A callable object to be executed in the thread.
-* <b>`args`</b>: The argument tuple for the target invocation. Defaults to ().
-* <b>`kwargs`</b>: A dictionary of keyword arguments for the target invocation.
- Defaults to {}.
-
-##### Returns:
-
- A wrapper for threading.Thread that supports start() and join() methods.
-
-
-- - -
-
-#### `tf.test.TestCase.countTestCases()` {#TestCase.countTestCases}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.debug()` {#TestCase.debug}
-
-Run the test without collecting errors in a TestResult
-
-
-- - -
-
-#### `tf.test.TestCase.defaultTestResult()` {#TestCase.defaultTestResult}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.doCleanups()` {#TestCase.doCleanups}
-
-Execute all cleanup functions. Normally called for you after
-tearDown.
-
-
-- - -
-
-#### `tf.test.TestCase.fail(msg=None)` {#TestCase.fail}
-
-Fail immediately, with the given message.
-
-
-- - -
-
-#### `tf.test.TestCase.failIf(*args, **kwargs)` {#TestCase.failIf}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failIfAlmostEqual(*args, **kwargs)` {#TestCase.failIfAlmostEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failIfEqual(*args, **kwargs)` {#TestCase.failIfEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failUnless(*args, **kwargs)` {#TestCase.failUnless}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failUnlessAlmostEqual(*args, **kwargs)` {#TestCase.failUnlessAlmostEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failUnlessEqual(*args, **kwargs)` {#TestCase.failUnlessEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failUnlessRaises(*args, **kwargs)` {#TestCase.failUnlessRaises}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.get_temp_dir()` {#TestCase.get_temp_dir}
-
-Returns a unique temporary directory for the test to use.
-
-Across different test runs, this method will return a different folder.
-This ensures that across different runs tests will not be able to
-pollute each other's environment.
-
-##### Returns:
-
- string, the path to the unique temporary directory created for this test.
-
-
-- - -
-
-#### `tf.test.TestCase.id()` {#TestCase.id}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.run(result=None)` {#TestCase.run}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.setUp()` {#TestCase.setUp}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.setUpClass(cls)` {#TestCase.setUpClass}
-
-Hook method for setting up class fixture before running tests in the class.
-
-
-- - -
-
-#### `tf.test.TestCase.shortDescription()` {#TestCase.shortDescription}
-
-Returns a one-line description of the test, or None if no
-description has been provided.
-
-The default implementation of this method returns the first line of
-the specified test method's docstring.
-
-
-- - -
-
-#### `tf.test.TestCase.skipTest(reason)` {#TestCase.skipTest}
-
-Skip this test.
-
-
-- - -
-
-#### `tf.test.TestCase.tearDown()` {#TestCase.tearDown}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.tearDownClass(cls)` {#TestCase.tearDownClass}
-
-Hook method for deconstructing the class fixture after running all tests in the class.
-
-
-- - -
-
-#### `tf.test.TestCase.test_session(graph=None, config=None, use_gpu=False, force_gpu=False)` {#TestCase.test_session}
-
-Returns a TensorFlow Session for use in executing tests.
-
-This method should be used for all functional tests.
-
-This method behaves differently from `session.Session`: for performance
-reasons `test_session` will by default (if `graph` is None) reuse the same
-session across tests. This means you may want to either call
-`reset_default_graph()` before tests, or, if creating an explicit new graph,
-pass it here (simply setting it with `as_default()` won't do it), which will
-trigger the creation of a new session.
-
-Use the `use_gpu` and `force_gpu` options to control where ops are run. If
-`force_gpu` is True, all ops are pinned to `/gpu:0`. Otherwise, if `use_gpu`
-is True, TensorFlow tries to run as many ops on the GPU as possible. If both
-`force_gpu` and `use_gpu` are False, all ops are pinned to the CPU.
-
-Example:
-
-  class MyOperatorTest(test_util.TensorFlowTestCase):
-    def testMyOperator(self):
-      with self.test_session(use_gpu=True):
-        valid_input = [1.0, 2.0, 3.0, 4.0, 5.0]
-        result = MyOperator(valid_input).eval()
-        self.assertEqual(result, [1.0, 2.0, 3.0, 5.0, 8.0])
-        invalid_input = [-1.0, 2.0, 7.0]
-        with self.assertRaisesOpError("negative input not supported"):
-          MyOperator(invalid_input).eval()
-
-##### Args:
-
-
-* <b>`graph`</b>: Optional graph to use during the returned session.
-* <b>`config`</b>: An optional config_pb2.ConfigProto to use to configure the
- session.
-* <b>`use_gpu`</b>: If True, attempt to run as many ops as possible on GPU.
-* <b>`force_gpu`</b>: If True, pin all ops to `/gpu:0`.
-
-##### Returns:
-
- A Session object that should be used as a context manager to surround
- the graph building and execution code in a test case.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.assert_equal_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.assert_equal_graph_def.md
deleted file mode 100644
index 026f5df890..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.assert_equal_graph_def.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.test.assert_equal_graph_def(actual, expected, checkpoint_v2=False)` {#assert_equal_graph_def}
-
-Asserts that two `GraphDef`s are (mostly) the same.
-
-Compares two `GraphDef` protos for equality, ignoring versions and ordering of
-nodes, attrs, and control inputs. Node names are used to match up nodes
-between the graphs, so the naming of nodes must be consistent.
-
-##### Args:
-
-
-* <b>`actual`</b>: The `GraphDef` we have.
-* <b>`expected`</b>: The `GraphDef` we expected.
-* <b>`checkpoint_v2`</b>: boolean determining whether to ignore randomized attribute
- values that appear in V2 checkpoints.
-
-##### Raises:
-
-
-* <b>`AssertionError`</b>: If the `GraphDef`s do not match.
-* <b>`TypeError`</b>: If either argument is not a `GraphDef`.
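-
-For example, a minimal sketch comparing two independently built graphs:
-
-```python
-g1 = tf.Graph()
-with g1.as_default():
-  tf.constant(1.0, name="c")
-g2 = tf.Graph()
-with g2.as_default():
-  tf.constant(1.0, name="c")
-# Passes: both graphs contain the same node with the same name and attrs.
-tf.test.assert_equal_graph_def(g1.as_graph_def(), g2.as_graph_def())
-```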
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.compute_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.compute_gradient.md
deleted file mode 100644
index a69224a0c5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.compute_gradient.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.test.compute_gradient(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None, extra_feed_dict=None)` {#compute_gradient}
-
-Computes and returns the theoretical and numerical Jacobian.
-
-If `x` or `y` is complex, the Jacobian will still be real but the
-corresponding Jacobian dimension(s) will be twice as large. This is required
-even if both input and output are complex, since TensorFlow graphs are not
-necessarily holomorphic, and may have gradients not expressible as complex
-numbers. For example, if `x` is complex with shape `[m]` and `y` is complex
-with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with
-
- J[:m, :n] = d(Re y)/d(Re x)
- J[:m, n:] = d(Im y)/d(Re x)
- J[m:, :n] = d(Re y)/d(Im x)
- J[m:, n:] = d(Im y)/d(Im x)
-
-##### Args:
-
-
-* <b>`x`</b>: a tensor or list of tensors
-* <b>`x_shape`</b>: the dimensions of x as a tuple or an array of ints. If x is a list,
- then this is the list of shapes.
-
-* <b>`y`</b>: a tensor
-* <b>`y_shape`</b>: the dimensions of y as a tuple or an array of ints.
-* <b>`x_init_value`</b>: (optional) a numpy array of the same shape as "x"
-  representing the initial value of x. If x is a list, this should be a list
-  of numpy arrays. If this is None, the function will pick a random tensor
-  as the initial value.
-* <b>`delta`</b>: (optional) the amount of perturbation.
-* <b>`init_targets`</b>: list of targets to run to initialize model params.
- TODO(mrry): remove this argument.
-* <b>`extra_feed_dict`</b>: dict that allows fixing specified tensor values
- during the Jacobian calculation.
-
-##### Returns:
-
- Two 2-d numpy arrays representing the theoretical and numerical
- Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns
- where "x_size" is the number of elements in x and "y_size" is the
- number of elements in y. If x is a list, returns a list of two numpy arrays.
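-
-A minimal numeric-vs-theoretical check, as a sketch (the call must run with
-a default session active):
-
-```python
-x = tf.placeholder(tf.float32, shape=[2, 2])
-y = tf.square(x)
-with tf.Session():
-  theoretical, numerical = tf.test.compute_gradient(x, [2, 2], y, [2, 2])
-# Both Jacobians have shape [4, 4] and should agree up to O(delta) error.
-```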
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.is_built_with_cuda.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.is_built_with_cuda.md
deleted file mode 100644
index 51e3d97d8c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.test.is_built_with_cuda.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.test.is_built_with_cuda()` {#is_built_with_cuda}
-
-Returns whether TensorFlow was built with CUDA (GPU) support.
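-
-A typical guard in a GPU-only test, as a sketch:
-
-```python
-class GpuOnlyTest(tf.test.TestCase):
-
-  def testOnGpu(self):
-    if not tf.test.is_built_with_cuda():
-      self.skipTest("TensorFlow was not built with CUDA")
-    with self.test_session(force_gpu=True):
-      self.assertAllEqual(tf.add(1, 2).eval(), 3)
-```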
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_int32.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_int32.md
deleted file mode 100644
index fcc9db61cc..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.to_int32.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.to_int32(x, name='ToInt32')` {#to_int32}
-
-Casts a tensor to type `int32`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` or `SparseTensor` with the same shape as `x`, with type `int32`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `int32`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.CheckpointSaverHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.CheckpointSaverHook.md
deleted file mode 100644
index 8654557bd5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.CheckpointSaverHook.md
+++ /dev/null
@@ -1,77 +0,0 @@
-Saves checkpoints every N steps or seconds.
-- - -
-
-#### `tf.train.CheckpointSaverHook.__init__(checkpoint_dir, save_secs=None, save_steps=None, saver=None, checkpoint_basename='model.ckpt', scaffold=None, listeners=None)` {#CheckpointSaverHook.__init__}
-
-Initialize CheckpointSaverHook monitor.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: `str`, base directory for the checkpoint files.
-* <b>`save_secs`</b>: `int`, save every N secs.
-* <b>`save_steps`</b>: `int`, save every N steps.
-* <b>`saver`</b>: `Saver` object, used for saving.
-* <b>`checkpoint_basename`</b>: `str`, base name for the checkpoint files.
-* <b>`scaffold`</b>: `Scaffold`, use to get saver object.
-* <b>`listeners`</b>: List of `CheckpointSaverListener` subclass instances.
- Used for callbacks that run immediately after the corresponding
- CheckpointSaverHook callbacks, only in steps where the
- CheckpointSaverHook was triggered.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If neither `save_steps` nor `save_secs` is set.
-* <b>`ValueError`</b>: If both `saver` and `scaffold` are set; exactly one
-  should be provided.
-
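-A minimal sketch of wiring the hook into a training loop (`train_op` is
-assumed to be defined elsewhere):
-
-```python
-hook = tf.train.CheckpointSaverHook(
-    checkpoint_dir="/tmp/train_dir", save_steps=100)
-with tf.train.MonitoredTrainingSession(hooks=[hook]) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```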
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.after_create_session(session, coord)` {#CheckpointSaverHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.after_run(run_context, run_values)` {#CheckpointSaverHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.before_run(run_context)` {#CheckpointSaverHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.begin()` {#CheckpointSaverHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.end(session)` {#CheckpointSaverHook.end}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.from_proto.md
deleted file mode 100644
index 1c3b17e2e9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.Saver.from_proto.md
+++ /dev/null
@@ -1,14 +0,0 @@
-#### `tf.train.Saver.from_proto(saver_def, import_scope=None)` {#Saver.from_proto}
-
-Returns a `Saver` object created from `saver_def`.
-
-##### Args:
-
-
-* <b>`saver_def`</b>: a `SaverDef` protocol buffer.
-* <b>`import_scope`</b>: Optional `string`. Name scope to use.
-
-##### Returns:
-
- A `Saver` built from saver_def.
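-
-A round-trip sketch using `Saver.to_proto()` as the counterpart:
-
-```python
-v = tf.Variable(1.0, name="v")
-saver = tf.train.Saver()
-saver_def = saver.to_proto()
-rebuilt = tf.train.Saver.from_proto(saver_def)
-```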
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.SessionCreator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.SessionCreator.md
deleted file mode 100644
index c1df9b3406..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.SessionCreator.md
+++ /dev/null
@@ -1,8 +0,0 @@
-A factory for tf.Session.
-- - -
-
-#### `tf.train.SessionCreator.create_session()` {#SessionCreator.create_session}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.basic_train_loop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.basic_train_loop.md
deleted file mode 100644
index 774cbe5ad5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.basic_train_loop.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.train.basic_train_loop(supervisor, train_step_fn, args=None, kwargs=None, master='')` {#basic_train_loop}
-
-Basic loop to train a model.
-
-Calls `train_step_fn` in a loop to train a model. The function is called as:
-
-```python
-train_step_fn(session, *args, **kwargs)
-```
-
-It is passed a `tf.Session` in addition to `args` and `kwargs`. The function
-typically runs one training step in the session.
-
-##### Args:
-
-
-* <b>`supervisor`</b>: `tf.Supervisor` to run the training services.
-* <b>`train_step_fn`</b>: Callable to execute one training step. Called
-  repeatedly as `train_step_fn(session, *args, **kwargs)`.
-* <b>`args`</b>: Optional positional arguments passed to `train_step_fn`.
-* <b>`kwargs`</b>: Optional keyword arguments passed to `train_step_fn`.
-* <b>`master`</b>: Master to use to create the training session. Defaults to
- `""` which causes the session to be created in the local process.
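-
-A minimal sketch (`train_op` is assumed to be defined elsewhere):
-
-```python
-sv = tf.train.Supervisor(logdir="/tmp/mydir")
-
-def train_step(session):
-  session.run(train_op)
-
-tf.train.basic_train_loop(sv, train_step)
-```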
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.global_step.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.global_step.md
deleted file mode 100644
index 2ec7f9654a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.global_step.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.train.global_step(sess, global_step_tensor)` {#global_step}
-
-Small helper to get the global step.
-
-```python
-# Creates a variable to hold the global_step.
-global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
-# Creates a session.
-sess = tf.Session()
-# Initializes the variable.
-sess.run(global_step_tensor.initializer)
-print('global_step: %s' % tf.train.global_step(sess, global_step_tensor))
-
-global_step: 10
-```
-
-##### Args:
-
-
-* <b>`sess`</b>: A TensorFlow `Session` object.
-* <b>`global_step_tensor`</b>: `Tensor` or the `name` of the operation that contains
- the global step.
-
-##### Returns:
-
- The global step value.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.latest_checkpoint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.latest_checkpoint.md
deleted file mode 100644
index b1fc87cdd7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.latest_checkpoint.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None)` {#latest_checkpoint}
-
-Finds the filename of latest saved checkpoint file.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory where the variables were saved.
-* <b>`latest_filename`</b>: Optional name for the protocol buffer file that
- contains the list of most recent checkpoint filenames.
- See the corresponding argument to `Saver.save()`.
-
-##### Returns:
-
- The full path to the latest checkpoint or `None` if no checkpoint was found.
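-
-A typical restore pattern, as a sketch (`saver` and `sess` are assumed to
-exist):
-
-```python
-ckpt_path = tf.train.latest_checkpoint("/tmp/train_dir")
-if ckpt_path is not None:
-  saver.restore(sess, ckpt_path)
-```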
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md
deleted file mode 100644
index ec101daba3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.maybe_shuffle_batch_join.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.train.maybe_shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, keep_input, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch_join}
-
-Create batches by randomly shuffling conditionally-enqueued tensors.
-
-See docstring in `shuffle_batch_join` for more details.
-
-##### Args:
-
-
-* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
-* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
-  dequeue, used to ensure a level of mixing of elements.
-* <b>`keep_input`</b>: A `bool` Tensor. This tensor controls whether the input is
-  added to the queue or not. If it is a scalar and evaluates `True`, then
-  `tensors` are all added to the queue. If it is a vector and `enqueue_many`
-  is `True`, then each example is added to the queue only if the
-  corresponding value in `keep_input` is `True`. This tensor essentially acts
-  as a filtering mechanism.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list_list` is a single
- example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors_list[i]`.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same number and types as
- `tensors_list[i]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors_list`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.natural_exp_decay.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.natural_exp_decay.md
deleted file mode 100644
index 5fbff8f9d4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.natural_exp_decay.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.train.natural_exp_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#natural_exp_decay}
-
-Applies natural exponential decay to the initial learning rate.
-
-When training a model, it is often recommended to lower the learning rate as
-the training progresses. This function applies an exponential decay function
-to a provided initial learning rate. It requires a `global_step` value to
-compute the decayed learning rate. You can just pass a TensorFlow variable
-that you increment at each training step.
-
-The function returns the decayed learning rate. It is computed as:
-
-```python
-decayed_learning_rate = learning_rate * exp(-decay_rate * global_step /
-                                            decay_steps)
-```
-
-Example: decay exponentially with a decay rate of 0.5:
-
-```python
-...
-global_step = tf.Variable(0, trainable=False)
-learning_rate = 0.1
-decay_steps = 1
-k = 0.5
-learning_rate = tf.train.natural_exp_decay(learning_rate, global_step,
-                                           decay_steps, k)
-
-# Passing global_step to minimize() will increment it at each step.
-learning_step = (
-    tf.train.GradientDescentOptimizer(learning_rate)
-    .minimize(...my loss..., global_step=global_step)
-)
-```
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The initial learning rate.
-* <b>`global_step`</b>: A Python number.
- Global step to use for the decay computation. Must not be negative.
-* <b>`decay_steps`</b>: How often to apply decay.
-* <b>`decay_rate`</b>: A Python number. The decay rate.
-* <b>`staircase`</b>: Whether to apply decay in a discrete staircase, as opposed to
- continuous, fashion.
-* <b>`name`</b>: String. Optional name of the operation. Defaults to
- 'ExponentialTimeDecay'.
-
-##### Returns:
-
- A scalar `Tensor` of the same type as `learning_rate`. The decayed
- learning rate.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `global_step` is not supplied.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md
deleted file mode 100644
index 1da7793d58..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.shuffle_batch.md
+++ /dev/null
@@ -1,86 +0,0 @@
-### `tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#shuffle_batch}
-
-Creates batches by randomly shuffling tensors.
-
-This function adds the following to the current `Graph`:
-
-* A shuffling queue into which tensors from `tensors` are enqueued.
-* A `dequeue_many` operation to create batches from the queue.
-* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
- from `tensors`.
-
-If `enqueue_many` is `False`, `tensors` is assumed to represent a
-single example. An input tensor with shape `[x, y, z]` will be output
-as a tensor with shape `[batch_size, x, y, z]`.
-
-If `enqueue_many` is `True`, `tensors` is assumed to represent a
-batch of examples, where the first dimension is indexed by example,
-and all members of `tensors` should have the same size in the
-first dimension. If an input tensor has shape `[*, x, y, z]`, the
-output will have shape `[batch_size, x, y, z]`.
-
-The `capacity` argument controls how long the prefetching is allowed to
-grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception, however, if this operation is used in your main thread
-you are responsible for catching this yourself.
-
-For example:
-
-```python
-# Creates batches of 32 images and 32 labels.
-image_batch, label_batch = tf.train.shuffle_batch(
- [single_image, single_label],
- batch_size=32,
- num_threads=4,
- capacity=50000,
- min_after_dequeue=10000)
-```
-
-*N.B.:* You must ensure that either (i) the `shapes` argument is
-passed, or (ii) all of the tensors in `tensors` must have
-fully-defined shapes. `ValueError` will be raised if neither of
-these conditions holds.
-
-If `allow_smaller_final_batch` is `True`, a batch smaller than `batch_size`
-is returned when the queue is closed and there are not enough elements to
-fill the batch; otherwise the pending elements are discarded. In addition,
-all output tensors' static shapes, as accessed via the `get_shape` method,
-will have a first `Dimension` value of `None`, and operations that depend on
-a fixed batch_size will fail.
-
-Note: if `num_epochs` is not `None`, this function creates a local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
-  dequeue, used to ensure a level of mixing of elements.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensor_list`.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensor_list`.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
-  A list or dictionary of tensors with the same types as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.start_queue_runners.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.start_queue_runners.md
deleted file mode 100644
index 21ac6efee8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.train.start_queue_runners.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners')` {#start_queue_runners}
-
-Starts all queue runners collected in the graph.
-
-This is a companion method to `add_queue_runner()`. It just starts
-threads for all queue runners collected in the graph. It returns
-the list of all threads.
-
-##### Args:
-
-
-* <b>`sess`</b>: `Session` used to run the queue ops. Defaults to the
- default session.
-* <b>`coord`</b>: Optional `Coordinator` for coordinating the started threads.
-* <b>`daemon`</b>: Whether the threads should be marked as `daemons`, meaning
- they don't block program exit.
-* <b>`start`</b>: Set to `False` to only create the threads, not start them.
-* <b>`collection`</b>: A `GraphKey` specifying the graph collection to
- get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
-
-##### Returns:
-
- A list of threads.
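-
-The canonical pattern pairs this with a `Coordinator` (`train_op` is assumed
-to be defined elsewhere):
-
-```python
-sess = tf.Session()
-coord = tf.train.Coordinator()
-threads = tf.train.start_queue_runners(sess=sess, coord=coord)
-try:
-  while not coord.should_stop():
-    sess.run(train_op)
-except tf.errors.OutOfRangeError:
-  pass  # input queues are exhausted
-finally:
-  coord.request_stop()
-  coord.join(threads)
-```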
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.uniform_unit_scaling_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.uniform_unit_scaling_initializer.md
deleted file mode 100644
index 7d76e45912..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.uniform_unit_scaling_initializer.md
+++ /dev/null
@@ -1,38 +0,0 @@
-Initializer that generates tensors without scaling variance.
-
-When initializing a deep network, it is in principle advantageous to keep
-the scale of the input variance constant, so it does not explode or diminish
-by the time it reaches the final layer. If the input is `x` and the operation
-is `x * W`, and we want to initialize `W` uniformly at random, we need to
-pick `W` from
-
- [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]
-
-to keep the scale intact, where `dim = W.shape[0]` (the size of the input).
-A similar calculation for convolutional networks gives an analogous result
-with `dim` equal to the product of the first 3 dimensions. When
-nonlinearities are present, we need to multiply this by a constant `factor`.
-See [Sussillo et al., 2014](https://arxiv.org/abs/1412.6558)
-([pdf](http://arxiv.org/pdf/1412.6558.pdf)) for deeper motivation, experiments
-and the calculation of constants. In section 2.3 there, the constants were
-numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.
-
-Args:
- factor: Float. A multiplicative factor by which the values will be scaled.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
- dtype: The data type. Only floating point types are supported.
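-
-A minimal sketch, using the relu constant (~1.43) computed in the paper
-cited above:
-
-```python
-init = tf.uniform_unit_scaling_initializer(factor=1.43)
-w = tf.get_variable("w", shape=[784, 256], initializer=init)
-```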
-- - -
-
-#### `tf.uniform_unit_scaling_initializer.__call__(shape, dtype=None, partition_info=None)` {#uniform_unit_scaling_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.uniform_unit_scaling_initializer.__init__(factor=1.0, seed=None, dtype=tf.float32)` {#uniform_unit_scaling_initializer.__init__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.variables_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.variables_initializer.md
deleted file mode 100644
index ec779e79f6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.variables_initializer.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.variables_initializer(var_list, name='init')` {#variables_initializer}
-
-Returns an Op that initializes a list of variables.
-
-After you launch the graph in a session, you can run the returned Op to
-initialize all the variables in `var_list`. This Op runs all the
-initializers of the variables in `var_list` in parallel.
-
-Calling `variables_initializer()` is equivalent to passing the list of
-initializers to `Group()`.
-
-If `var_list` is empty, however, the function still returns an Op that can
-be run. That Op just has no effect.
-
-##### Args:
-
-
-* <b>`var_list`</b>: List of `Variable` objects to initialize.
-* <b>`name`</b>: Optional name for the returned operation.
-
-##### Returns:
-
-  An Op that runs the initializers of all the specified variables.
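-
-For example:
-
-```python
-v1 = tf.Variable(tf.zeros([3]), name="v1")
-v2 = tf.Variable(tf.ones([3]), name="v2")
-init_op = tf.variables_initializer([v1, v2])
-with tf.Session() as sess:
-  sess.run(init_op)
-  print(sess.run([v1, v2]))
-```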
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.write_file.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.write_file.md
deleted file mode 100644
index ccccf9b43b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard2/tf.write_file.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.write_file(filename, contents, name=None)` {#write_file}
-
-Writes `contents` to the file at input `filename`. Creates the file if it
-does not already exist.
-
-##### Args:
-
-
-* <b>`filename`</b>: A `Tensor` of type `string`.
- scalar. The name of the file to which we write the contents.
-* <b>`contents`</b>: A `Tensor` of type `string`.
- scalar. The content to be written to the output file.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
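-
-For example:
-
-```python
-write_op = tf.write_file(tf.constant("/tmp/out.txt"),
-                         tf.constant("hello"))
-with tf.Session() as sess:
-  sess.run(write_op)
-```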
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md
deleted file mode 100644
index 6154301c4e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.PriorityQueue.md
+++ /dev/null
@@ -1,305 +0,0 @@
-A queue implementation that dequeues elements in prioritized order.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-- - -
-
-#### `tf.PriorityQueue.__init__(capacity, types, shapes=None, names=None, shared_name=None, name='priority_queue')` {#PriorityQueue.__init__}
-
-Creates a queue that dequeues elements in prioritized order.
-
-A `PriorityQueue` has bounded capacity; supports multiple concurrent
-producers and consumers; and provides exactly-once delivery.
-
-A `PriorityQueue` holds a list of up to `capacity` elements. Each
-element is a fixed-length tuple of tensors whose dtypes are
-described by `types`, and whose shapes are optionally described
-by the `shapes` argument.
-
-If the `shapes` argument is specified, each component of a queue
-element must have the respective fixed shape. If it is
-unspecified, different queue elements may have different shapes,
-but the use of `dequeue_many` is disallowed.
-
-Enqueue and dequeue operations on a `PriorityQueue` must include an
-additional tuple entry at the beginning: the `priority`. The priority must
-be an int64 scalar (for `enqueue`) or an int64 vector (for `enqueue_many`).
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`types`</b>: A list of `DType` objects. The length of `types` must equal
-  the number of tensors in each queue element, excluding the leading
-  priority component. The first tensor in each element is the priority,
-  which must be of type int64.
-* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects,
- with the same length as `types`, or `None`.
-* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
- with the same length as `dtypes`, or `None`. If specified, the dequeue
- methods return a dictionary with the names as keys.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
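-A minimal sketch with string payloads (the smallest priority value dequeues
-first):
-
-```python
-q = tf.PriorityQueue(capacity=10, types=(tf.string,), shapes=((),))
-enqueue = q.enqueue_many((tf.constant([2, 1], dtype=tf.int64),
-                          tf.constant(["second", "first"])))
-priority, value = q.dequeue()
-with tf.Session() as sess:
-  sess.run(enqueue)
-  print(sess.run([priority, value]))  # -> [1, b'first']
-```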
-
-- - -
-
-#### `tf.PriorityQueue.close(cancel_pending_enqueues=False, name=None)` {#PriorityQueue.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.PriorityQueue.dequeue(name=None)` {#PriorityQueue.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PriorityQueue.dequeue_many(n, name=None)` {#PriorityQueue.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PriorityQueue.dequeue_up_to(n, name=None)` {#PriorityQueue.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PriorityQueue.dtypes` {#PriorityQueue.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.PriorityQueue.enqueue(vals, name=None)` {#PriorityQueue.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.PriorityQueue.enqueue_many(vals, name=None)` {#PriorityQueue.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.PriorityQueue.from_list(index, queues)` {#PriorityQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.PriorityQueue.name` {#PriorityQueue.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.PriorityQueue.names` {#PriorityQueue.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.PriorityQueue.queue_ref` {#PriorityQueue.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.PriorityQueue.shapes` {#PriorityQueue.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.PriorityQueue.size(name=None)` {#PriorityQueue.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RegisterGradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RegisterGradient.md
deleted file mode 100644
index 2a93bbba40..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.RegisterGradient.md
+++ /dev/null
@@ -1,45 +0,0 @@
-A decorator for registering the gradient function for an op type.
-
-This decorator is only used when defining a new op type. For an op
-with `m` inputs and `n` outputs, the gradient function is a function
-that takes the original `Operation` and `n` `Tensor` objects
-(representing the gradients with respect to each output of the op),
-and returns `m` `Tensor` objects (representing the partial gradients
-with respect to each input of the op).
-
-For example, assuming that operations of type `"Sub"` take two
-inputs `x` and `y`, and return a single output `x - y`, the
-following gradient function would be registered:
-
-```python
-@tf.RegisterGradient("Sub")
-def _sub_grad(unused_op, grad):
- return grad, tf.negative(grad)
-```
-
-The decorator argument `op_type` is the string type of an
-operation. This corresponds to the `OpDef.name` field for the proto
-that defines the operation.
-
-- - -
-
-#### `tf.RegisterGradient.__init__(op_type)` {#RegisterGradient.__init__}
-
-Creates a new decorator with `op_type` as the Operation type.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The string type of an operation. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.RegisterGradient.__call__(f)` {#RegisterGradient.__call__}
-
-Registers the function `f` as gradient function for `op_type`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md
deleted file mode 100644
index 31c5d725b2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.SparseTensor.md
+++ /dev/null
@@ -1,248 +0,0 @@
-Represents a sparse tensor.
-
-TensorFlow represents a sparse tensor as three separate dense tensors:
-`indices`, `values`, and `dense_shape`. In Python, the three tensors are
-collected into a `SparseTensor` class for ease of use. If you have separate
-`indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor`
-object before passing to the ops below.
-
-Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)`
-comprises the following components, where `N` and `ndims` are the number
-of values and number of dimensions in the `SparseTensor`, respectively:
-
-* `indices`: A 2-D int64 tensor of dense_shape `[N, ndims]`, which specifies
- the indices of the elements in the sparse tensor that contain nonzero
- values (elements are zero-indexed). For example, `indices=[[1,3], [2,4]]`
- specifies that the elements with indexes of [1,3] and [2,4] have
- nonzero values.
-
-* `values`: A 1-D tensor of any type and dense_shape `[N]`, which supplies the
- values for each element in `indices`. For example, given
- `indices=[[1,3], [2,4]]`, the parameter `values=[18, 3.6]` specifies
- that element [1,3] of the sparse tensor has a value of 18, and element
- [2,4] of the tensor has a value of 3.6.
-
-* `dense_shape`: A 1-D int64 tensor of dense_shape `[ndims]`, which specifies
- the dense_shape of the sparse tensor. Takes a list indicating the number of
- elements in each dimension. For example, `dense_shape=[3,6]` specifies a
- two-dimensional 3x6 tensor, `dense_shape=[2,3,4]` specifies a
- three-dimensional 2x3x4 tensor, and `dense_shape=[9]` specifies a
- one-dimensional tensor with 9 elements.
-
-The corresponding dense tensor satisfies:
-
-```python
-dense.shape = dense_shape
-dense[tuple(indices[i])] = values[i]
-```
-
-By convention, `indices` should be sorted in row-major order (or equivalently
-lexicographic order on the tuples `indices[i]`). This is not enforced when
-`SparseTensor` objects are constructed, but most ops assume correct ordering.
-If the ordering of sparse tensor `st` is wrong, a fixed version can be
-obtained by calling `tf.sparse_reorder(st)`.
-
-Example: The sparse tensor
-
-```python
-SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
-```
-
-represents the dense tensor
-
-```python
-[[1, 0, 0, 0]
- [0, 0, 2, 0]
- [0, 0, 0, 0]]
-```
-- - -
-
-#### `tf.SparseTensor.__div__(sp_x, y)` {#SparseTensor.__div__}
-
-Component-wise divides a SparseTensor by a dense Tensor.
-
-*Limitation*: this Op only broadcasts the dense side to the sparse side, but not
-the other direction.
-
-##### Args:
-
-
-* <b>`sp_indices`</b>: A `Tensor` of type `int64`.
- 2-D. `N x R` matrix with the indices of non-empty values in a
- SparseTensor, possibly not in canonical ordering.
-* <b>`sp_values`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- 1-D. `N` non-empty values corresponding to `sp_indices`.
-* <b>`sp_shape`</b>: A `Tensor` of type `int64`.
- 1-D. Shape of the input SparseTensor.
-* <b>`dense`</b>: A `Tensor`. Must have the same type as `sp_values`.
- `R`-D. The dense Tensor operand.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `sp_values`.
- 1-D. The `N` values that are operated on.
-
-
-- - -
-
-#### `tf.SparseTensor.__init__(indices, values, dense_shape)` {#SparseTensor.__init__}
-
-Creates a `SparseTensor`.
-
-##### Args:
-
-
-* <b>`indices`</b>: A 2-D int64 tensor of shape `[N, ndims]`.
-* <b>`values`</b>: A 1-D tensor of any type and shape `[N]`.
-* <b>`dense_shape`</b>: A 1-D int64 tensor of shape `[ndims]`.
-
-##### Returns:
-
- A `SparseTensor`.
-
-
-- - -
-
-#### `tf.SparseTensor.__mul__(sp_x, y)` {#SparseTensor.__mul__}
-
-Component-wise multiplies a SparseTensor by a dense Tensor.
-
-The output locations corresponding to the implicitly zero elements in the
-sparse tensor will be zero (i.e., will not take up storage space), regardless
-of the contents of the dense tensor (even if it is +/-Inf, and even though
-Inf * 0 == NaN).
-
-*Limitation*: this Op only broadcasts the dense side to the sparse side, but not
-the other direction.
-
-##### Args:
-
-
-* <b>`sp_indices`</b>: A `Tensor` of type `int64`.
- 2-D. `N x R` matrix with the indices of non-empty values in a
- SparseTensor, possibly not in canonical ordering.
-* <b>`sp_values`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- 1-D. `N` non-empty values corresponding to `sp_indices`.
-* <b>`sp_shape`</b>: A `Tensor` of type `int64`.
- 1-D. Shape of the input SparseTensor.
-* <b>`dense`</b>: A `Tensor`. Must have the same type as `sp_values`.
- `R`-D. The dense Tensor operand.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `sp_values`.
- 1-D. The `N` values that are operated on.
-
-
-- - -
-
-#### `tf.SparseTensor.__str__()` {#SparseTensor.__str__}
-
-
-
-
-- - -
-
-#### `tf.SparseTensor.__truediv__(sp_x, y)` {#SparseTensor.__truediv__}
-
-Internal helper function for 'sp_t / dense_t'.
-
-
-- - -
-
-#### `tf.SparseTensor.dense_shape` {#SparseTensor.dense_shape}
-
-A 1-D Tensor of int64 representing the shape of the dense tensor.
-
-
-- - -
-
-#### `tf.SparseTensor.dtype` {#SparseTensor.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.SparseTensor.eval(feed_dict=None, session=None)` {#SparseTensor.eval}
-
-Evaluates this sparse tensor in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for the operation that produces this
-tensor.
-
-*N.B.* Before invoking `SparseTensor.eval()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
- description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this sparse
- tensor. If none, the default session will be used.
-
-##### Returns:
-
- A `SparseTensorValue` object.
-
-
-- - -
-
-#### `tf.SparseTensor.from_value(cls, sparse_tensor_value)` {#SparseTensor.from_value}
-
-
-
-
-- - -
-
-#### `tf.SparseTensor.get_shape()` {#SparseTensor.get_shape}
-
-Get the `TensorShape` representing the shape of the dense tensor.
-
-##### Returns:
-
- A `TensorShape` object.
-
-
-- - -
-
-#### `tf.SparseTensor.graph` {#SparseTensor.graph}
-
-The `Graph` that contains the index, value, and dense_shape tensors.
-
-
-- - -
-
-#### `tf.SparseTensor.indices` {#SparseTensor.indices}
-
-The indices of non-zero values in the represented dense tensor.
-
-##### Returns:
-
- A 2-D Tensor of int64 with dense_shape `[N, ndims]`, where `N` is the
- number of non-zero values in the tensor, and `ndims` is the rank.
-
-
-- - -
-
-#### `tf.SparseTensor.op` {#SparseTensor.op}
-
-The `Operation` that produces `values` as an output.
-
-
-- - -
-
-#### `tf.SparseTensor.values` {#SparseTensor.values}
-
-The non-zero values in the represented dense tensor.
-
-##### Returns:
-
- A 1-D Tensor of any data type.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_rank.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_rank.md
deleted file mode 100644
index 488b7519d2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_rank.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.assert_rank(x, rank, data=None, summarize=None, message=None, name=None)` {#assert_rank}
-
-Assert `x` has rank equal to `rank`.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_rank(x, 2)]):
- output = tf.reduce_sum(x)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`rank`</b>: Scalar integer `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_rank".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` has specified rank.
- If static checks determine `x` has correct rank, a `no_op` is returned.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If static checks determine `x` has wrong rank.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_type.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_type.md
deleted file mode 100644
index 922d85b530..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.assert_type.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.assert_type(tensor, tf_type, message=None, name=None)` {#assert_type}
-
-Statically asserts that the given `Tensor` is of the specified type.
-
-##### Args:
-
-
-* <b>`tensor`</b>: A tensorflow `Tensor`.
-* <b>`tf_type`</b>: A tensorflow type (`dtypes.float32`, `tf.int64`, `dtypes.bool`,
- etc).
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name to give this `Op`. Defaults to "assert_type"
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the tensor's data type doesn't match `tf_type`.
-
-##### Returns:
-
-  A `no_op` that does nothing, since the type check is performed statically
-  at graph-construction time.
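-
-For example:
-
-```python
-x = tf.constant([1.0, 2.0])
-tf.assert_type(x, tf.float32)   # passes; returns a no_op
-# tf.assert_type(x, tf.int32)   # would raise TypeError at graph build time
-```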
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ceil.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ceil.md
deleted file mode 100644
index 34e4a7feed..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ceil.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.ceil(x, name=None)` {#ceil}
-
-Returns element-wise smallest integer not less than x.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.check_numerics.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.check_numerics.md
deleted file mode 100644
index 46a8f6f7db..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.check_numerics.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.check_numerics(tensor, message, name=None)` {#check_numerics}
-
-Checks a tensor for NaN and Inf values.
-
-When run, reports an `InvalidArgument` error if `tensor` has any values
-that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is.
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`message`</b>: A `string`. Prefix of the error message.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`.
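-
-A minimal sketch:
-
-```python
-x = tf.placeholder(tf.float32)
-checked = tf.check_numerics(x, "x has bad values")
-with tf.Session() as sess:
-  print(sess.run(checked, {x: [1.0, 2.0]}))  # values pass through unchanged
-  # Feeding [float("nan")] would raise tf.errors.InvalidArgumentError.
-```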
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.bayesflow.stochastic_tensor.SampleValue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.bayesflow.stochastic_tensor.SampleValue.md
deleted file mode 100644
index 5ace6653e3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.bayesflow.stochastic_tensor.SampleValue.md
+++ /dev/null
@@ -1,80 +0,0 @@
-Draw samples, possibly adding new outer dimensions along the way.
-
-This ValueType draws samples from StochasticTensors run within its
-context, increasing the rank according to the requested shape.
-
-Examples:
-
-```python
-mu = tf.zeros((2,3))
-sigma = tf.ones((2, 3))
-with sg.value_type(sg.SampleValue()):
- st = sg.StochasticTensor(
- tf.contrib.distributions.Normal, mu=mu, sigma=sigma)
-# draws 1 sample and does not reshape
-assertEqual(st.value().get_shape(), (2, 3))
-```
-
-```python
-mu = tf.zeros((2,3))
-sigma = tf.ones((2, 3))
-with sg.value_type(sg.SampleValue(4)):
- st = sg.StochasticTensor(
- tf.contrib.distributions.Normal, mu=mu, sigma=sigma)
-# draws 4 samples, each with shape (2, 3), stacked along a new outer dimension
-assertEqual(st.value().get_shape(), (4, 2, 3))
-```
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.__init__(shape=(), stop_gradient=False)` {#SampleValue.__init__}
-
-Sample according to shape.
-
-For the given StochasticTensor `st` using this value type,
-the shape of `st.value()` will match that of
-`st.distribution.sample(shape)`.
-
-##### Args:
-
-
-* <b>`shape`</b>: A shape tuple or int32 tensor. The sample shape.
- Default is a scalar: take one sample and do not change the size.
-* <b>`stop_gradient`</b>: If `True`, StochasticTensors' values are wrapped in
-  `stop_gradient`, to avoid backpropagating through them.
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.declare_inputs(unused_stochastic_tensor, unused_inputs_dict)` {#SampleValue.declare_inputs}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.popped_above(unused_value_type)` {#SampleValue.popped_above}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.pushed_above(unused_value_type)` {#SampleValue.pushed_above}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.shape` {#SampleValue.shape}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.SampleValue.stop_gradient` {#SampleValue.stop_gradient}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.bayesflow.variational_inference.ELBOForms.check_form.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.bayesflow.variational_inference.ELBOForms.check_form.md
deleted file mode 100644
index e3cc3ca4fe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.bayesflow.variational_inference.ELBOForms.check_form.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.contrib.bayesflow.variational_inference.ELBOForms.check_form(form)` {#ELBOForms.check_form}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.copy_graph.copy_op_to_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.copy_graph.copy_op_to_graph.md
deleted file mode 100644
index d549132fa2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.copy_graph.copy_op_to_graph.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.contrib.copy_graph.copy_op_to_graph(org_instance, to_graph, variables, scope='')` {#copy_op_to_graph}
-
-Given an `Operation` `org_instance` from one `Graph`,
-initializes and returns a copy of it from another `Graph`,
-under the specified scope (default `""`).
-
-The copying is done recursively, so any `Operation` whose output
-is required to evaluate `org_instance` is also copied (unless
-already done).
-
-Since `Variable` instances are copied separately, those required
-to evaluate `org_instance` must be provided as input.
-
-##### Args:
-
-
-* <b>`org_instance`</b>: An `Operation` from some `Graph`. Could be a
-  `Placeholder` as well.
-* <b>`to_graph`</b>: The `Graph` to copy `org_instance` to.
-* <b>`variables`</b>: An iterable of `Variable` instances, already copied to
-  `to_graph`, that are required to evaluate `org_instance`.
-* <b>`scope`</b>: A scope for the new `Variable` (default `""`).
-
-##### Returns:
-
- The copied `Operation` from `to_graph`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `org_instance` is not an `Operation` or `Tensor`.
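-
-A minimal sketch, using `copy_variable_to_graph` to copy the required
-variables first:
-
-```python
-g1 = tf.Graph()
-with g1.as_default():
-  v = tf.Variable(1.0, name="v")
-  out = tf.multiply(v, 2.0, name="out")
-
-g2 = tf.Graph()
-v_copy = tf.contrib.copy_graph.copy_variable_to_graph(v, g2)
-out_copy = tf.contrib.copy_graph.copy_op_to_graph(out, g2, [v_copy])
-```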
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Binomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Binomial.md
deleted file mode 100644
index 6baa4d2700..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Binomial.md
+++ /dev/null
@@ -1,687 +0,0 @@
-Binomial distribution.
-
-This distribution is parameterized by `probs`, a (batch of) probabilities for
-drawing a `1` and `total_count`, the number of trials per draw from the
-Binomial.
-
-#### Mathematical Details
-
-The Binomial is a distribution over the number of `1`'s in `total_count`
-independent trials, with each trial having the same probability of `1`, i.e.,
-`probs`.
-
-The probability mass function (pmf) is,
-
-```none
-pmf(k; n, p) = p**k (1 - p)**(n - k) / Z
-Z = k! (n - k)! / n!
-```
-
-where:
-* `total_count = n`,
-* `probs = p`,
-* `Z` is the normalizing constant, and
-* `n!` is the factorial of `n`.
-
-#### Examples
-
-Create a single distribution, corresponding to 5 coin flips.
-
-```python
-dist = Binomial(total_count=5., probs=.5)
-```
-
-Create a single distribution (using logits), corresponding to 5 coin flips.
-
-```python
-dist = Binomial(total_count=5., logits=0.)
-```
-
-Create 3 distributions, with the third the most likely to have successes.
-
-```python
-p = [.2, .3, .8]
-# n will be broadcast to [4., 4., 4.], to match p.
-dist = Binomial(total_count=4., probs=p)
-```
-
-The distribution functions can be evaluated on counts.
-
-```python
-# counts same shape as p.
-counts = [1., 2, 3]
-dist.prob(counts) # Shape [3]
-
-# p will be broadcast to [[.2, .3, .8], [.2, .3, .8]] to match counts.
-counts = [[1., 2, 1], [2, 2, 4]]
-dist.prob(counts) # Shape [2, 3]
-
-# p will be broadcast to shape [5, 7, 3] to match counts.
-counts = [[...]] # Shape [5, 7, 3]
-dist.prob(counts) # Shape [5, 7, 3]
-```
-- - -
-
-#### `tf.contrib.distributions.Binomial.__init__(total_count, logits=None, probs=None, validate_args=False, allow_nan_stats=True, name='Binomial')` {#Binomial.__init__}
-
-Initialize a batch of Binomial distributions.
-
-##### Args:
-
-
-* <b>`total_count`</b>: Non-negative floating point tensor with shape broadcastable
- to `[N1,..., Nm]` with `m >= 0` and the same dtype as `probs` or
- `logits`. Defines this as a batch of `N1 x ... x Nm` different Binomial
- distributions. Its components should be equal to integer values.
-* <b>`logits`</b>: Floating point tensor representing the log-odds of a
- positive event with shape broadcastable to `[N1,..., Nm]` `m >= 0`, and
- the same dtype as `total_count`. Each entry represents logits for the
- probability of success for independent Binomial distributions. Only one
- of `logits` or `probs` should be passed in.
-* <b>`probs`</b>: Positive floating point tensor with shape broadcastable to
- `[N1,..., Nm]` `m >= 0`, `probs in [0, 1]`. Each entry represents the
- probability of success for independent Binomial distributions. Only one
- of `logits` or `probs` should be passed in.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.allow_nan_stats` {#Binomial.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.batch_shape` {#Binomial.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.batch_shape_tensor(name='batch_shape_tensor')` {#Binomial.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.cdf(value, name='cdf')` {#Binomial.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.copy(**override_parameters_kwargs)` {#Binomial.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.covariance(name='covariance')` {#Binomial.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.dtype` {#Binomial.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.entropy(name='entropy')` {#Binomial.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.event_shape` {#Binomial.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.event_shape_tensor(name='event_shape_tensor')` {#Binomial.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.is_continuous` {#Binomial.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.is_scalar_batch(name='is_scalar_batch')` {#Binomial.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.is_scalar_event(name='is_scalar_event')` {#Binomial.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.log_cdf(value, name='log_cdf')` {#Binomial.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.log_prob(value, name='log_prob')` {#Binomial.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Binomial`:
-
-For each batch member of counts `value`, `P[value]` is the probability that
-after sampling `self.total_count` draws from this Binomial distribution, the
-number of successes is `value`. Since different sequences of draws can result in
-the same counts, the probability includes a combinatorial coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `dtype` and whose shape
-can be broadcast with `self.probs` and `self.total_count`. `value` is only legal
-if it is less than or equal to `self.total_count` and its components are equal
-to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.log_survival_function(value, name='log_survival_function')` {#Binomial.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.logits` {#Binomial.logits}
-
-Log-odds of drawing a `1`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.mean(name='mean')` {#Binomial.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.mode(name='mode')` {#Binomial.mode}
-
-Mode.
-
-Additional documentation from `Binomial`:
-
-Note that when `(1 + total_count) * probs` is an integer, there are
-actually two modes. Namely, `(1 + total_count) * probs` and
-`(1 + total_count) * probs - 1` are both modes. Here we return only the
-larger of the two modes.
-
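-A quick numeric check of this note, using hypothetical values for which
-`(1 + total_count) * probs` is an integer (pure Python, illustrative only):
-
-```python
-from math import comb  # Python 3.8+
-
-n, p = 4, 0.6
-pmf = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)
-# (1 + n) * p == 3.0 is an integer, so k = 3 and k = 2 are both modes:
-print(pmf(3), pmf(2))  # 0.3456 0.3456
-```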
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.name` {#Binomial.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Binomial.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.param_static_shapes(cls, sample_shape)` {#Binomial.param_static_shapes}
-
-`param_shapes` with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.parameters` {#Binomial.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.prob(value, name='prob')` {#Binomial.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Binomial`:
-
-For each batch member of counts `value`, `P[value]` is the probability that
-after sampling `self.total_count` draws from this Binomial distribution, the
-number of successes is `value`. Since different sequences of draws can result in
-the same counts, the probability includes a combinatorial coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `dtype` and whose shape
-can be broadcast with `self.probs` and `self.total_count`. `value` is only legal
-if it is less than or equal to `self.total_count` and its components are equal
-to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.probs` {#Binomial.probs}
-
-Probability of drawing a `1`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.reparameterization_type` {#Binomial.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.sample(sample_shape=(), seed=None, name='sample')` {#Binomial.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.stddev(name='stddev')` {#Binomial.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.survival_function(value, name='survival_function')` {#Binomial.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.total_count` {#Binomial.total_count}
-
-Number of trials.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.validate_args` {#Binomial.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Binomial.variance(name='variance')` {#Binomial.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md
deleted file mode 100644
index e9d08ed6b9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.DirichletMultinomial.md
+++ /dev/null
@@ -1,726 +0,0 @@
-Dirichlet-Multinomial compound distribution.
-
-The Dirichlet-Multinomial distribution is parameterized by a (batch of)
-length-`k` `concentration` vectors (`k > 1`) and a `total_count` number of
-trials, i.e., the number of trials per draw from the DirichletMultinomial. It
-is defined over a (batch of) length-`k` vector `counts` such that
-`tf.reduce_sum(counts, -1) = total_count`. The Dirichlet-Multinomial is
-identically the Beta-Binomial distribution when `k = 2`.
-
-#### Mathematical Details
-
-The Dirichlet-Multinomial is a distribution over `k`-class counts, i.e., a
-length-`k` vector of non-negative integer `counts = n = [n_0, ..., n_{k-1}]`.
-
-The probability mass function (pmf) is,
-
-```none
-pmf(n; alpha, N) = Beta(alpha + n) / (prod_j n_j!) / Z
-Z = Beta(alpha) / N!
-```
-
-where:
-
-* `concentration = alpha = [alpha_0, ..., alpha_{k-1}]`, `alpha_j > 0`,
-* `total_count = N`, `N` a positive integer,
-* `N!` is `N` factorial, and,
-* `Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j)` is the
- [multivariate beta function](
- https://en.wikipedia.org/wiki/Beta_function#Multivariate_beta_function),
- and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
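-The pmf above can be evaluated stably in log-space; a minimal pure-Python
-sketch (illustrative only, not the class's implementation):
-
-```python
-import math
-
-def log_beta(alpha):
-  # Multivariate beta: Beta(x) = prod_j Gamma(x_j) / Gamma(sum_j x_j).
-  return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))
-
-def dm_log_pmf(counts, alpha):
-  n = sum(counts)
-  # log pmf = log Beta(alpha + n) - log Beta(alpha) + log N! - sum_j log n_j!
-  return (log_beta([a + c for a, c in zip(alpha, counts)]) - log_beta(alpha)
-          + math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts))
-
-print(math.exp(dm_log_pmf([0., 0., 2.], [1., 2., 3.])))  # ~0.2857 (= 2/7)
-```
-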
-Dirichlet-Multinomial is a [compound distribution](
-https://en.wikipedia.org/wiki/Compound_probability_distribution), i.e., its
-samples are generated as follows.
-
- 1. Choose class probabilities:
- `probs = [p_0,...,p_{k-1}] ~ Dir(concentration)`
- 2. Draw integers:
- `counts = [n_0,...,n_{k-1}] ~ Multinomial(total_count, probs)`
-
-The last `concentration` dimension parametrizes a single Dirichlet-Multinomial
-distribution. When calling distribution functions (e.g., `dist.prob(counts)`),
-`concentration`, `total_count` and `counts` are broadcast to the same shape.
-The last dimension of `counts` corresponds to a single Dirichlet-Multinomial
-distribution.
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-#### Examples
-
-```python
-alpha = [1, 2, 3]
-n = 2
-dist = DirichletMultinomial(n, alpha)
-```
-
-Creates a 3-class distribution, with the 3rd class the most likely to be drawn.
-The distribution functions can be evaluated on counts.
-
-```python
-# counts same shape as alpha.
-counts = [0, 0, 2]
-dist.prob(counts) # Shape []
-
-# alpha will be broadcast to [[1, 2, 3], [1, 2, 3]] to match counts.
-counts = [[1, 1, 0], [1, 0, 1]]
-dist.prob(counts) # Shape [2]
-
-# alpha will be broadcast to shape [5, 7, 3] to match counts.
-counts = [[...]] # Shape [5, 7, 3]
-dist.prob(counts) # Shape [5, 7]
-```
-
-Creates a 2-batch of 3-class distributions.
-
-```python
-alpha = [[1, 2, 3], [4, 5, 6]] # Shape [2, 3]
-n = [3, 3]
-dist = DirichletMultinomial(n, alpha)
-
-# counts will be broadcast to [[2, 1, 0], [2, 1, 0]] to match alpha.
-counts = [2, 1, 0]
-dist.prob(counts) # Shape [2]
-```
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.__init__(total_count, concentration, validate_args=False, allow_nan_stats=True, name='DirichletMultinomial')` {#DirichletMultinomial.__init__}
-
-Initialize a batch of DirichletMultinomial distributions.
-
-##### Args:
-
-
-* <b>`total_count`</b>: Non-negative floating point tensor, whose dtype is the same
- as `concentration`. The shape is broadcastable to `[N1,..., Nm]` with
- `m >= 0`. Defines this as a batch of `N1 x ... x Nm` different
- Dirichlet multinomial distributions. Its components should be equal to
- integer values.
-* <b>`concentration`</b>: Positive floating point tensor, whose dtype is the
-  same as `total_count`, with shape broadcastable to `[N1,..., Nm, k]` `m >= 0`.
- Defines this as a batch of `N1 x ... x Nm` different `k` class Dirichlet
- multinomial distributions.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.allow_nan_stats` {#DirichletMultinomial.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.batch_shape` {#DirichletMultinomial.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.batch_shape_tensor(name='batch_shape_tensor')` {#DirichletMultinomial.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.cdf(value, name='cdf')` {#DirichletMultinomial.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.concentration` {#DirichletMultinomial.concentration}
-
-Concentration parameter; expected prior counts for that coordinate.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.copy(**override_parameters_kwargs)` {#DirichletMultinomial.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.covariance(name='covariance')` {#DirichletMultinomial.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-
-Additional documentation from `DirichletMultinomial`:
-
-The covariance for each batch member is defined as the following:
-
-```none
-Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) *
-           (n + alpha_0) / (1 + alpha_0)
-```
-
-where `concentration = alpha` and
-`total_concentration = alpha_0 = sum_j alpha_j`.
-
-The covariance between the event components `X_i` and `X_j`, within a single
-batch member, is defined as:
-
-```none
-Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0 ** 2 *
-                (n + alpha_0) / (1 + alpha_0)
-```
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.dtype` {#DirichletMultinomial.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.entropy(name='entropy')` {#DirichletMultinomial.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.event_shape` {#DirichletMultinomial.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.event_shape_tensor(name='event_shape_tensor')` {#DirichletMultinomial.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.is_continuous` {#DirichletMultinomial.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.is_scalar_batch(name='is_scalar_batch')` {#DirichletMultinomial.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.is_scalar_event(name='is_scalar_event')` {#DirichletMultinomial.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.log_cdf(value, name='log_cdf')` {#DirichletMultinomial.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.log_prob(value, name='log_prob')` {#DirichletMultinomial.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `DirichletMultinomial`:
-
-For each batch of counts,
-`value = [n_0, ..., n_{k-1}]`, `P[value]` is the probability that after
-sampling `self.total_count` draws from this Dirichlet-Multinomial distribution,
-the number of draws falling in class `j` is `n_j`. Since this definition is
-[exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables),
-different sequences of draws yield the same counts, so the probability
-includes a combinatorial coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no
-fractional components, and such that
-`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable
-with `self.concentration` and `self.total_count`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.log_survival_function(value, name='log_survival_function')` {#DirichletMultinomial.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.mean(name='mean')` {#DirichletMultinomial.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.mode(name='mode')` {#DirichletMultinomial.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.name` {#DirichletMultinomial.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#DirichletMultinomial.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend to ops.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.param_static_shapes(cls, sample_shape)` {#DirichletMultinomial.param_static_shapes}
-
-`param_shapes` with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.parameters` {#DirichletMultinomial.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.prob(value, name='prob')` {#DirichletMultinomial.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `DirichletMultinomial`:
-
-For each batch of counts,
-`value = [n_0, ..., n_{k-1}]`, `P[value]` is the probability that after
-sampling `self.total_count` draws from this Dirichlet-Multinomial distribution,
-the number of draws falling in class `j` is `n_j`. Since this definition is
-[exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables),
-different sequences of draws yield the same counts, so the probability
-includes a combinatorial coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype`, have no
-fractional components, and such that
-`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable
-with `self.concentration` and `self.total_count`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.reparameterization_type` {#DirichletMultinomial.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.sample(sample_shape=(), seed=None, name='sample')` {#DirichletMultinomial.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.stddev(name='stddev')` {#DirichletMultinomial.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.survival_function(value, name='survival_function')` {#DirichletMultinomial.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.total_concentration` {#DirichletMultinomial.total_concentration}
-
-Sum of last dim of concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.total_count` {#DirichletMultinomial.total_count}
-
-Number of trials used to construct a sample.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.validate_args` {#DirichletMultinomial.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.DirichletMultinomial.variance(name='variance')` {#DirichletMultinomial.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.ExpRelaxedOneHotCategorical.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.ExpRelaxedOneHotCategorical.md
deleted file mode 100644
index 4a1f17a6d0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.ExpRelaxedOneHotCategorical.md
+++ /dev/null
@@ -1,688 +0,0 @@
-ExpRelaxedOneHotCategorical distribution with temperature and logits.
-
-An ExpRelaxedOneHotCategorical distribution is a log-transformed
-RelaxedOneHotCategorical distribution. The RelaxedOneHotCategorical is a
-distribution over random probability vectors, vectors of positive real
-values that sum to one, which continuously approximates a OneHotCategorical.
-The degree of approximation is controlled by a temperature: as the temperature
-goes to 0, the RelaxedOneHotCategorical becomes discrete with a distribution
-described by the logits; as the temperature goes to infinity, it becomes the
-constant distribution that is identically the vector
-`(1/event_size, ..., 1/event_size)`.
-
-Because computing log-probabilities of the RelaxedOneHotCategorical can
-suffer from underflow issues, this class is one solution for loss
-functions that depend on log-probabilities, such as the KL Divergence found
-in the variational autoencoder loss. The KL divergence between two
-distributions is invariant under invertible transformations, so evaluating
-KL divergences of ExpRelaxedOneHotCategorical samples, which are always
-followed by a `tf.exp` op, is equivalent to evaluating KL divergences of
-RelaxedOneHotCategorical samples. See the appendix of Maddison et al., 2016
-for more mathematical details, where this distribution is called the
-ExpConcrete.
-
-#### Examples
-
-Creates a continuous distribution, whose exp approximates a 3-class one-hot
-categorical distribution. The 2nd class is the most likely to be the
-largest component in samples drawn from this distribution. If those samples
-are followed by a `tf.exp` op, then they are distributed as a relaxed one-hot
-categorical.
-
-```python
-temperature = 0.5
-p = [0.1, 0.5, 0.4]
-dist = ExpRelaxedOneHotCategorical(temperature, probs=p)
-samples = dist.sample()
-exp_samples = tf.exp(samples)
-# exp_samples has the same distribution as samples from
-# RelaxedOneHotCategorical(temperature, probs=p)
-```
-
-Creates a continuous distribution, whose exp approximates a 3-class one-hot
-categorical distribution. The 2nd class is the most likely to be the
-largest component in samples drawn from this distribution.
-
-```python
-temperature = 0.5
-logits = [-2, 2, 0]
-dist = ExpRelaxedOneHotCategorical(temperature, logits=logits)
-samples = dist.sample()
-exp_samples = tf.exp(samples)
-# exp_samples has the same distribution as samples from
-# RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Creates a continuous distribution, whose exp approximates a 3-class one-hot
-categorical distribution. Because the temperature is very low, samples from
-this distribution are almost discrete, with one component almost 0 and the
-others very negative. The 2nd class is the most likely to be the largest
-component in samples drawn from this distribution.
-
-```python
-temperature = 1e-5
-logits = [-2, 2, 0]
-dist = ExpRelaxedOneHotCategorical(temperature, logits=logits)
-samples = dist.sample()
-exp_samples = tf.exp(samples)
-# exp_samples has the same distribution as samples from
-# RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
-Creates a continuous distribution, whose exp approximates a 3-class one-hot
-categorical distribution. Because the temperature is very high, samples from
-this distribution are usually close to the (-log(3), -log(3), -log(3)) vector.
-The 2nd class is still the most likely to be the largest component
-in samples drawn from this distribution.
-
-```python
-temperature = 10
-logits = [-2, 2, 0]
-dist = ExpRelaxedOneHotCategorical(temperature, logits=logits)
-samples = dist.sample()
-exp_samples = tf.exp(samples)
-# exp_samples has the same distribution as samples from
-# RelaxedOneHotCategorical(temperature, logits=logits)
-```
-
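-The sampler behind all of these examples can be sketched in a few lines of
-NumPy: add Gumbel noise to the logits, divide by the temperature, and
-log-normalize (an illustration of the construction in the paper cited below,
-not the class's actual implementation):
-
-```python
-import numpy as np
-
-rng = np.random.default_rng(0)
-
-def sample_exp_relaxed(logits, temperature):
-  # Gumbel noise from inverse-CDF sampling of uniforms.
-  gumbel = -np.log(-np.log(rng.uniform(size=np.shape(logits))))
-  y = (np.asarray(logits) + gumbel) / temperature
-  return y - np.logaddexp.reduce(y)  # log-softmax: exp(result) sums to 1
-
-s = sample_exp_relaxed([-2., 2., 0.], temperature=0.5)
-print(np.exp(s).sum())  # ~1.0
-```
-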
-Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution:
-A Continuous Relaxation of Discrete Random Variables. 2016.
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.__init__(temperature, logits=None, probs=None, dtype=tf.float32, validate_args=False, allow_nan_stats=True, name='ExpRelaxedOneHotCategorical')` {#ExpRelaxedOneHotCategorical.__init__}
-
-Initialize ExpRelaxedOneHotCategorical using class log-probabilities.
-
-##### Args:
-
-
-* <b>`temperature`</b>: A 0-D `Tensor`, representing the temperature
- of a set of ExpRelaxedCategorical distributions. The temperature should
- be positive.
-* <b>`logits`</b>: An N-D `Tensor`, `N >= 1`, representing the log probabilities
- of a set of ExpRelaxedCategorical distributions. The first
- `N - 1` dimensions index into a batch of independent distributions and
- the last dimension represents a vector of logits for each class. Only
- one of `logits` or `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor`, `N >= 1`, representing the probabilities
- of a set of ExpRelaxedCategorical distributions. The first
- `N - 1` dimensions index into a batch of independent distributions and
- the last dimension represents a vector of probabilities for each
- class. Only one of `logits` or `probs` should be passed in.
-* <b>`dtype`</b>: The type of the event samples (default: `tf.float32`).
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.allow_nan_stats` {#ExpRelaxedOneHotCategorical.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.batch_shape` {#ExpRelaxedOneHotCategorical.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.batch_shape_tensor(name='batch_shape_tensor')` {#ExpRelaxedOneHotCategorical.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.cdf(value, name='cdf')` {#ExpRelaxedOneHotCategorical.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.copy(**override_parameters_kwargs)` {#ExpRelaxedOneHotCategorical.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.covariance(name='covariance')` {#ExpRelaxedOneHotCategorical.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.dtype` {#ExpRelaxedOneHotCategorical.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.entropy(name='entropy')` {#ExpRelaxedOneHotCategorical.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.event_shape` {#ExpRelaxedOneHotCategorical.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.event_shape_tensor(name='event_shape_tensor')` {#ExpRelaxedOneHotCategorical.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.event_size` {#ExpRelaxedOneHotCategorical.event_size}
-
-Scalar `int32` tensor: the number of classes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.is_continuous` {#ExpRelaxedOneHotCategorical.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.is_scalar_batch(name='is_scalar_batch')` {#ExpRelaxedOneHotCategorical.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.is_scalar_event(name='is_scalar_event')` {#ExpRelaxedOneHotCategorical.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.log_cdf(value, name='log_cdf')` {#ExpRelaxedOneHotCategorical.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.log_prob(value, name='log_prob')` {#ExpRelaxedOneHotCategorical.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.log_survival_function(value, name='log_survival_function')` {#ExpRelaxedOneHotCategorical.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.logits` {#ExpRelaxedOneHotCategorical.logits}
-
-Vector of coordinatewise logits.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.mean(name='mean')` {#ExpRelaxedOneHotCategorical.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.mode(name='mode')` {#ExpRelaxedOneHotCategorical.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.name` {#ExpRelaxedOneHotCategorical.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#ExpRelaxedOneHotCategorical.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend to ops.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.param_static_shapes(cls, sample_shape)` {#ExpRelaxedOneHotCategorical.param_static_shapes}
-
-`param_shapes` with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.parameters` {#ExpRelaxedOneHotCategorical.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.prob(value, name='prob')` {#ExpRelaxedOneHotCategorical.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.probs` {#ExpRelaxedOneHotCategorical.probs}
-
-Vector of probabilities summing to one.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.reparameterization_type` {#ExpRelaxedOneHotCategorical.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.sample(sample_shape=(), seed=None, name='sample')` {#ExpRelaxedOneHotCategorical.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.stddev(name='stddev')` {#ExpRelaxedOneHotCategorical.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.survival_function(value, name='survival_function')` {#ExpRelaxedOneHotCategorical.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.temperature` {#ExpRelaxedOneHotCategorical.temperature}
-
-Batchwise temperature tensor of a RelaxedCategorical.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.validate_args` {#ExpRelaxedOneHotCategorical.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExpRelaxedOneHotCategorical.variance(name='variance')` {#ExpRelaxedOneHotCategorical.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Exponential.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Exponential.md
deleted file mode 100644
index 6ce65a7b2f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Exponential.md
+++ /dev/null
@@ -1,608 +0,0 @@
-Exponential distribution.
-
-The Exponential distribution is parameterized by an event `rate` parameter.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; lambda, x > 0) = exp(-lambda x) / Z
-Z = 1 / lambda
-```
-
-where `rate = lambda` and `Z` is the normalizing constant.
-
-The Exponential distribution is a special case of the Gamma distribution,
-i.e.,
-
-```python
-Exponential(rate) = Gamma(concentration=1., rate)
-```
-
-The Exponential distribution uses a `rate` parameter, or "inverse scale",
-which can be intuited as,
-
-```none
-X ~ Exponential(rate=1)
-Y = X / rate
-Y ~ Exponential(rate)
-```
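-
-As a quick hedged check of this parameterization (TF 1.x contrib API assumed;
-the rate value is illustrative):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Exponential(rate=2.0)
-with tf.Session() as sess:
-    # For the Exponential, mean = stddev = 1 / rate.
-    print(sess.run([dist.mean(), dist.stddev()]))  # ~[0.5, 0.5]
-```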
-- - -
-
-#### `tf.contrib.distributions.Exponential.__init__(rate, validate_args=False, allow_nan_stats=True, name='Exponential')` {#Exponential.__init__}
-
-Construct Exponential distribution with parameter `rate`.
-
-##### Args:
-
-
-* <b>`rate`</b>: Floating point tensor, equivalent to `1 / mean`. Must contain only
- positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.allow_nan_stats` {#Exponential.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.batch_shape` {#Exponential.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.batch_shape_tensor(name='batch_shape_tensor')` {#Exponential.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.cdf(value, name='cdf')` {#Exponential.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.concentration` {#Exponential.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.copy(**override_parameters_kwargs)` {#Exponential.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.covariance(name='covariance')` {#Exponential.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.dtype` {#Exponential.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.entropy(name='entropy')` {#Exponential.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.event_shape` {#Exponential.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.event_shape_tensor(name='event_shape_tensor')` {#Exponential.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.is_continuous` {#Exponential.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.is_scalar_batch(name='is_scalar_batch')` {#Exponential.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.is_scalar_event(name='is_scalar_event')` {#Exponential.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.log_cdf(value, name='log_cdf')` {#Exponential.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.log_prob(value, name='log_prob')` {#Exponential.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.log_survival_function(value, name='log_survival_function')` {#Exponential.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.mean(name='mean')` {#Exponential.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.mode(name='mode')` {#Exponential.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.name` {#Exponential.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Exponential.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.param_static_shapes(cls, sample_shape)` {#Exponential.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.parameters` {#Exponential.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.prob(value, name='prob')` {#Exponential.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.rate` {#Exponential.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.reparameterization_type` {#Exponential.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.sample(sample_shape=(), seed=None, name='sample')` {#Exponential.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.stddev(name='stddev')` {#Exponential.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.survival_function(value, name='survival_function')` {#Exponential.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
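-For the Exponential the survival function has the closed form
-`exp(-rate * x)` for `x >= 0`; a quick hedged check (TF 1.x contrib API
-assumed):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Exponential(rate=2.0)
-with tf.Session() as sess:
-    print(sess.run(dist.survival_function(1.0)))  # exp(-2.0) ~= 0.1353
-```
-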
-- - -
-
-#### `tf.contrib.distributions.Exponential.validate_args` {#Exponential.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Exponential.variance(name='variance')` {#Exponential.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Gamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Gamma.md
deleted file mode 100644
index 78246d36f8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Gamma.md
+++ /dev/null
@@ -1,639 +0,0 @@
-Gamma distribution.
-
-The Gamma distribution is defined over positive real numbers using
-parameters `concentration` (aka "alpha") and `rate` (aka "beta").
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; alpha, beta, x > 0) = x**(alpha - 1) exp(-x beta) / Z
-Z = Gamma(alpha) beta**alpha
-```
-
-where:
-
-* `concentration = alpha`, `alpha > 0`,
-* `rate = beta`, `beta > 0`,
-* `Z` is the normalizing constant, and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The cumulative distribution function (cdf) is,
-
-```none
-cdf(x; alpha, beta, x > 0) = GammaInc(alpha, beta x) / Gamma(alpha)
-```
-
-where `GammaInc` is the [lower incomplete Gamma function](
-https://en.wikipedia.org/wiki/Incomplete_gamma_function).
-
-The parameters can be intuited via their relationship to mean and stddev,
-
-```none
-concentration = alpha = (mean / stddev)**2
-rate = beta = mean / stddev**2 = concentration / mean
-```
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-WARNING: This distribution may draw 0-valued samples for small `concentration`
-values. See note in `tf.random_gamma` docstring.
-
-#### Examples
-
-```python
-dist = Gamma(concentration=3.0, rate=2.0)
-dist2 = Gamma(concentration=[3.0, 4.0], rate=[2.0, 3.0])
-```
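-
-The mean/stddev intuition above can be verified directly (a sketch under the
-same TF 1.x contrib API; values are illustrative):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Gamma(concentration=[3.0, 4.0],
-                                      rate=[2.0, 3.0])
-with tf.Session() as sess:
-    print(sess.run(dist.mean()))    # concentration / rate -> [1.5, 1.333]
-    print(sess.run(dist.stddev()))  # sqrt(concentration) / rate -> [0.866, 0.667]
-```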
-- - -
-
-#### `tf.contrib.distributions.Gamma.__init__(concentration, rate, validate_args=False, allow_nan_stats=True, name='Gamma')` {#Gamma.__init__}
-
-Construct Gamma with `concentration` and `rate` parameters.
-
-The parameters `concentration` and `rate` must be shaped in a way that
-supports broadcasting (e.g. `concentration + rate` is a valid operation).
-
-##### Args:
-
-
-* <b>`concentration`</b>: Floating point tensor, the concentration params of the
- distribution(s). Must contain only positive values.
-* <b>`rate`</b>: Floating point tensor, the inverse scale params of the
- distribution(s). Must contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `concentration` and `rate` are different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.allow_nan_stats` {#Gamma.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.batch_shape` {#Gamma.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.batch_shape_tensor(name='batch_shape_tensor')` {#Gamma.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.cdf(value, name='cdf')` {#Gamma.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.concentration` {#Gamma.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.copy(**override_parameters_kwargs)` {#Gamma.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.covariance(name='covariance')` {#Gamma.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.dtype` {#Gamma.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.entropy(name='entropy')` {#Gamma.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.event_shape` {#Gamma.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.event_shape_tensor(name='event_shape_tensor')` {#Gamma.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.is_continuous` {#Gamma.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.is_scalar_batch(name='is_scalar_batch')` {#Gamma.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.is_scalar_event(name='is_scalar_event')` {#Gamma.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.log_cdf(value, name='log_cdf')` {#Gamma.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.log_prob(value, name='log_prob')` {#Gamma.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.log_survival_function(value, name='log_survival_function')` {#Gamma.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.mean(name='mean')` {#Gamma.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.mode(name='mode')` {#Gamma.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.name` {#Gamma.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Gamma.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.param_static_shapes(cls, sample_shape)` {#Gamma.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.parameters` {#Gamma.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.prob(value, name='prob')` {#Gamma.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.rate` {#Gamma.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.reparameterization_type` {#Gamma.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.sample(sample_shape=(), seed=None, name='sample')` {#Gamma.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.stddev(name='stddev')` {#Gamma.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.survival_function(value, name='survival_function')` {#Gamma.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.validate_args` {#Gamma.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Gamma.variance(name='variance')` {#Gamma.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.InverseGamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.InverseGamma.md
deleted file mode 100644
index e908ceac08..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.InverseGamma.md
+++ /dev/null
@@ -1,653 +0,0 @@
-InverseGamma distribution.
-
-The `InverseGamma` distribution is defined over positive real numbers using
-parameters `concentration` (aka "alpha") and `rate` (aka "beta").
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; alpha, beta, x > 0) = x**(-alpha - 1) exp(-beta / x) / Z
-Z = Gamma(alpha) beta**-alpha
-```
-
-where:
-
-* `concentration = alpha`,
-* `rate = beta`,
-* `Z` is the normalizing constant, and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The cumulative distribution function (cdf) is,
-
-```none
-cdf(x; alpha, beta, x > 0) = GammaInc(alpha, beta / x) / Gamma(alpha)
-```
-
-where `GammaInc` is the [upper incomplete Gamma function](
-https://en.wikipedia.org/wiki/Incomplete_gamma_function).
-
-The parameters can be intuited via their relationship to mean and stddev
-(both of which are defined only when `concentration > 2`),
-
-```none
-concentration = alpha = (mean / stddev)**2 + 2
-rate = beta = mean * ((mean / stddev)**2 + 1)
-```
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-WARNING: This distribution may draw 0-valued samples for small concentration
-values. See note in `tf.random_gamma` docstring.
-
-#### Examples
-
-```python
-dist = InverseGamma(concentration=3.0, rate=2.0)
-dist2 = InverseGamma(concentration=[3.0, 4.0], rate=[2.0, 3.0])
-```
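-
-A quick check of the closed-form statistics documented below (a hedged
-sketch assuming the TF 1.x contrib API):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.InverseGamma(concentration=3.0, rate=2.0)
-with tf.Session() as sess:
-    print(sess.run(dist.mean()))  # rate / (concentration - 1) = 1.0
-    print(sess.run(dist.mode()))  # rate / (concentration + 1) = 0.5
-```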
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.__init__(concentration, rate, validate_args=False, allow_nan_stats=True, name='InverseGamma')` {#InverseGamma.__init__}
-
-Construct InverseGamma with `concentration` and `rate` parameters.
-
-The parameters `concentration` and `rate` must be shaped in a way that
-supports broadcasting (e.g. `concentration + rate` is a valid operation).
-
-##### Args:
-
-
-* <b>`concentration`</b>: Floating point tensor, the concentration params of the
- distribution(s). Must contain only positive values.
-* <b>`rate`</b>: Floating point tensor, the inverse scale params of the
- distribution(s). Must contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `concentration` and `rate` are different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.allow_nan_stats` {#InverseGamma.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.batch_shape` {#InverseGamma.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.batch_shape_tensor(name='batch_shape_tensor')` {#InverseGamma.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.cdf(value, name='cdf')` {#InverseGamma.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.concentration` {#InverseGamma.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.copy(**override_parameters_kwargs)` {#InverseGamma.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.covariance(name='covariance')` {#InverseGamma.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.dtype` {#InverseGamma.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.entropy(name='entropy')` {#InverseGamma.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.event_shape` {#InverseGamma.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.event_shape_tensor(name='event_shape_tensor')` {#InverseGamma.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.is_continuous` {#InverseGamma.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.is_scalar_batch(name='is_scalar_batch')` {#InverseGamma.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.is_scalar_event(name='is_scalar_event')` {#InverseGamma.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.log_cdf(value, name='log_cdf')` {#InverseGamma.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.log_prob(value, name='log_prob')` {#InverseGamma.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.log_survival_function(value, name='log_survival_function')` {#InverseGamma.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.mean(name='mean')` {#InverseGamma.mean}
-
-Mean.
-
-Additional documentation from `InverseGamma`:
-
-The mean of an inverse gamma distribution is
-`rate / (concentration - 1)`, when `concentration > 1`, and `NaN`
-otherwise. If `self.allow_nan_stats` is `False`, an exception will be
-raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.mode(name='mode')` {#InverseGamma.mode}
-
-Mode.
-
-Additional documentation from `InverseGamma`:
-
-The mode of an inverse gamma distribution is `rate / (concentration +
-1)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.name` {#InverseGamma.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#InverseGamma.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.param_static_shapes(cls, sample_shape)` {#InverseGamma.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.parameters` {#InverseGamma.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.prob(value, name='prob')` {#InverseGamma.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.rate` {#InverseGamma.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.reparameterization_type` {#InverseGamma.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.sample(sample_shape=(), seed=None, name='sample')` {#InverseGamma.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.stddev(name='stddev')` {#InverseGamma.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.survival_function(value, name='survival_function')` {#InverseGamma.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.validate_args` {#InverseGamma.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.InverseGamma.variance(name='variance')` {#InverseGamma.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-
-Additional documentation from `InverseGamma`:
-
-Variance for inverse gamma is defined only for `concentration > 2`. If
-`self.allow_nan_stats` is `False`, an exception will be raised rather
-than returning `NaN`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Multinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Multinomial.md
deleted file mode 100644
index 796aece469..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.Multinomial.md
+++ /dev/null
@@ -1,697 +0,0 @@
-Multinomial distribution.
-
-This Multinomial distribution is parameterized by `probs`, a (batch of)
-length-`k` `prob` (probability) vectors (`k > 1`) such that
-`tf.reduce_sum(probs, -1) = 1`, and a `total_count` number of trials, i.e.,
-the number of trials per draw from the Multinomial. It is defined over a
-(batch of) length-`k` vector `counts` such that
-`tf.reduce_sum(counts, -1) = total_count`. The Multinomial is identically the
-Binomial distribution when `k = 2`.
-
-#### Mathematical Details
-
-The Multinomial is a distribution over `k`-class counts, i.e., a length-`k`
-vector of non-negative integer `counts = n = [n_0, ..., n_{k-1}]`.
-
-The probability mass function (pmf) is,
-
-```none
-pmf(n; pi, N) = prod_j (pi_j)**n_j / Z
-Z = (prod_j n_j!) / N!
-```
-
-where:
-
-* `probs = pi = [pi_0, ..., pi_{k-1}]`, `pi_j > 0`, `sum_j pi_j = 1`,
-* `total_count = N`, `N` a positive integer,
-* `Z` is the normalization constant, and,
-* `N!` denotes `N` factorial.
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-#### Examples
-
-Create a 3-class distribution, with the 3rd class most likely to be drawn,
-using logits.
-
-```python
-logits = [-50., -43, 0]
-dist = Multinomial(total_count=4., logits=logits)
-```
-
-Create a 3-class distribution, with the 3rd class most likely to be drawn.
-
-```python
-p = [.2, .3, .5]
-dist = Multinomial(total_count=4., probs=p)
-```
-
-The distribution functions can be evaluated on counts.
-
-```python
-# counts same shape as p.
-counts = [1., 0, 3]
-dist.prob(counts) # Shape []
-
-# p will be broadcast to [[.2, .3, .5], [.2, .3, .5]] to match counts.
-counts = [[1., 2, 1], [2, 2, 0]]
-dist.prob(counts) # Shape [2]
-
-# p will be broadcast to shape [5, 7, 3] to match counts.
-counts = [[...]] # Shape [5, 7, 3]
-dist.prob(counts) # Shape [5, 7]
-```
-
-Create a 2-batch of 3-class distributions.
-
-```python
-p = [[.1, .2, .7], [.3, .3, .4]] # Shape [2, 3]
-dist = Multinomial(total_count=[4., 5], probs=p)
-
-counts = [[2., 1, 1], [3, 1, 1]]
-dist.prob(counts) # Shape [2]
-```
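-
-The pmf can also be checked by hand (a hedged sketch, TF 1.x contrib API;
-values are illustrative):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Multinomial(total_count=4., probs=[.2, .3, .5])
-with tf.Session() as sess:
-    # 4!/(1! 0! 3!) * .2**1 * .3**0 * .5**3 = 4 * 0.025 = 0.1
-    print(sess.run(dist.prob([1., 0, 3])))
-    print(sess.run(dist.mean()))  # total_count * probs = [0.8, 1.2, 2.0]
-```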
-- - -
-
-#### `tf.contrib.distributions.Multinomial.__init__(total_count, logits=None, probs=None, validate_args=False, allow_nan_stats=True, name='Multinomial')` {#Multinomial.__init__}
-
-Initialize a batch of Multinomial distributions.
-
-##### Args:
-
-
-* <b>`total_count`</b>: Non-negative floating point tensor with shape broadcastable
- to `[N1,..., Nm]` with `m >= 0`. Defines this as a batch of
- `N1 x ... x Nm` different Multinomial distributions. Its components
- should be equal to integer values.
-* <b>`logits`</b>: Floating point tensor representing the log-odds of a
- positive event with shape broadcastable to `[N1,..., Nm, k], m >= 0`,
- and the same dtype as `total_count`. Defines this as a batch of
- `N1 x ... x Nm` different `k` class Multinomial distributions. Only one
- of `logits` or `probs` should be passed in.
-* <b>`probs`</b>: Positive floating point tensor with shape broadcastable to
- `[N1,..., Nm, k]` `m >= 0` and same dtype as `total_count`. Defines
- this as a batch of `N1 x ... x Nm` different `k` class Multinomial
- distributions. `probs`'s components in the last portion of its shape
- should sum to `1`. Only one of `logits` or `probs` should be passed in.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.allow_nan_stats` {#Multinomial.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.batch_shape` {#Multinomial.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.batch_shape_tensor(name='batch_shape_tensor')` {#Multinomial.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.cdf(value, name='cdf')` {#Multinomial.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.copy(**override_parameters_kwargs)` {#Multinomial.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.covariance(name='covariance')` {#Multinomial.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
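-For the Multinomial the covariance has the closed form
-`total_count * (diag(probs) - probs probs^T)`; a hedged sketch (TF 1.x
-contrib API assumed):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Multinomial(total_count=4., probs=[.2, .3, .5])
-with tf.Session() as sess:
-    # Diagonal entries: N p_i (1 - p_i); off-diagonal: -N p_i p_j.
-    print(sess.run(dist.covariance()))  # shape [3, 3]
-```
-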
-- - -
-
-#### `tf.contrib.distributions.Multinomial.dtype` {#Multinomial.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.entropy(name='entropy')` {#Multinomial.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.event_shape` {#Multinomial.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.event_shape_tensor(name='event_shape_tensor')` {#Multinomial.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.is_continuous` {#Multinomial.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.is_scalar_batch(name='is_scalar_batch')` {#Multinomial.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.is_scalar_event(name='is_scalar_event')` {#Multinomial.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.log_cdf(value, name='log_cdf')` {#Multinomial.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.log_prob(value, name='log_prob')` {#Multinomial.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Multinomial`:
-
-For each batch of counts `value = [n_0, ..., n_{k-1}]`, `P[value]` is the
-probability that after sampling `self.total_count` draws from this Multinomial
-distribution, the number of draws falling in class `j` is `n_j`. Since this
-definition is [exchangeable](
-https://en.wikipedia.org/wiki/Exchangeable_random_variables), different
-sequences yield the same counts, so the probability includes a combinatorial
-coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype`, with no
-fractional components, and with
-`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable
-with `self.probs` and `self.total_count`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
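-As a minimal sketch (the constructor arguments `total_count` and `probs` are
-assumed from the note above; this example is not part of the original doc):
-
-```python
-dist = tf.contrib.distributions.Multinomial(
-    total_count=5., probs=[0.2, 0.3, 0.5])
-# Counts must sum to `total_count` and have no fractional components.
-counts = [2., 1., 2.]
-dist.log_prob(counts)  # Scalar Tensor; includes the combinatorial coefficient.
-```
-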
-- - -
-
-#### `tf.contrib.distributions.Multinomial.log_survival_function(value, name='log_survival_function')` {#Multinomial.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.logits` {#Multinomial.logits}
-
-Vector of coordinatewise logits.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.mean(name='mean')` {#Multinomial.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.mode(name='mode')` {#Multinomial.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.name` {#Multinomial.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Multinomial.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.param_static_shapes(cls, sample_shape)` {#Multinomial.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.parameters` {#Multinomial.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.prob(value, name='prob')` {#Multinomial.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Multinomial`:
-
-For each batch of counts `value = [n_0, ..., n_{k-1}]`, `P[value]` is the
-probability that after sampling `self.total_count` draws from this Multinomial
-distribution, the number of draws falling in class `j` is `n_j`. Since this
-definition is [exchangeable](
-https://en.wikipedia.org/wiki/Exchangeable_random_variables), different
-sequences yield the same counts, so the probability includes a combinatorial
-coefficient.
-
-Note: `value` must be a non-negative tensor with dtype `self.dtype`, with no
-fractional components, and with
-`tf.reduce_sum(value, -1) = self.total_count`. Its shape must be broadcastable
-with `self.probs` and `self.total_count`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.probs` {#Multinomial.probs}
-
-Probability of drawing a `1` in that coordinate.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.reparameterization_type` {#Multinomial.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.sample(sample_shape=(), seed=None, name='sample')` {#Multinomial.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.stddev(name='stddev')` {#Multinomial.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.survival_function(value, name='survival_function')` {#Multinomial.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.total_count` {#Multinomial.total_count}
-
-Number of trials used to construct a sample.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.validate_args` {#Multinomial.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Multinomial.variance(name='variance')` {#Multinomial.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.NormalWithSoftplusScale.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.NormalWithSoftplusScale.md
deleted file mode 100644
index b9d6592fdb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.NormalWithSoftplusScale.md
+++ /dev/null
@@ -1,559 +0,0 @@
-Normal with softplus applied to `scale`.
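-
-As a brief sketch of what the softplus buys (not part of the original doc):
-`scale` may be any real value, since the distribution internally applies
-`softplus(scale) = log(1 + exp(scale)) > 0`.
-
-```python
-# Effective standard deviation is softplus(-2.) ~= 0.127, always positive.
-dist = tf.contrib.distributions.NormalWithSoftplusScale(loc=0., scale=-2.)
-```
-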
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='NormalWithSoftplusScale')` {#NormalWithSoftplusScale.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.allow_nan_stats` {#NormalWithSoftplusScale.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.batch_shape` {#NormalWithSoftplusScale.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.batch_shape_tensor(name='batch_shape_tensor')` {#NormalWithSoftplusScale.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.cdf(value, name='cdf')` {#NormalWithSoftplusScale.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.copy(**override_parameters_kwargs)` {#NormalWithSoftplusScale.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.covariance(name='covariance')` {#NormalWithSoftplusScale.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.dtype` {#NormalWithSoftplusScale.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.entropy(name='entropy')` {#NormalWithSoftplusScale.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.event_shape` {#NormalWithSoftplusScale.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.event_shape_tensor(name='event_shape_tensor')` {#NormalWithSoftplusScale.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.is_continuous` {#NormalWithSoftplusScale.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.is_scalar_batch(name='is_scalar_batch')` {#NormalWithSoftplusScale.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.is_scalar_event(name='is_scalar_event')` {#NormalWithSoftplusScale.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.loc` {#NormalWithSoftplusScale.loc}
-
-Distribution parameter for the mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.log_cdf(value, name='log_cdf')` {#NormalWithSoftplusScale.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.log_prob(value, name='log_prob')` {#NormalWithSoftplusScale.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.log_survival_function(value, name='log_survival_function')` {#NormalWithSoftplusScale.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.mean(name='mean')` {#NormalWithSoftplusScale.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.mode(name='mode')` {#NormalWithSoftplusScale.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.name` {#NormalWithSoftplusScale.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#NormalWithSoftplusScale.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
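-For example (a sketch; the parameter names `loc` and `scale` are assumed from
-this class's constructor):
-
-```python
-shapes = tf.contrib.distributions.NormalWithSoftplusScale.param_shapes([100])
-# => {'loc': <Tensor: [100]>, 'scale': <Tensor: [100]>}
-```
-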
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.param_static_shapes(cls, sample_shape)` {#NormalWithSoftplusScale.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.parameters` {#NormalWithSoftplusScale.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.prob(value, name='prob')` {#NormalWithSoftplusScale.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.reparameterization_type` {#NormalWithSoftplusScale.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.sample(sample_shape=(), seed=None, name='sample')` {#NormalWithSoftplusScale.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.scale` {#NormalWithSoftplusScale.scale}
-
-Distribution parameter for standard deviation.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.stddev(name='stddev')` {#NormalWithSoftplusScale.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.survival_function(value, name='survival_function')` {#NormalWithSoftplusScale.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.validate_args` {#NormalWithSoftplusScale.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.NormalWithSoftplusScale.variance(name='variance')` {#NormalWithSoftplusScale.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.OneHotCategorical.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.OneHotCategorical.md
deleted file mode 100644
index 6cf39c39de..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.distributions.OneHotCategorical.md
+++ /dev/null
@@ -1,637 +0,0 @@
-OneHotCategorical distribution.
-
-The categorical distribution is parameterized by the log-probabilities
-of a set of classes. The difference between OneHotCategorical and Categorical
-distributions is that OneHotCategorical is a discrete distribution over
-one-hot bit vectors whereas Categorical is a discrete distribution over
-positive integers. OneHotCategorical is equivalent to Categorical except
-Categorical has event_dim=() while OneHotCategorical has event_dim=K, where
-K is the number of classes.
-
-This class provides methods to create indexed batches of OneHotCategorical
-distributions. If the provided `logits` or `probs` is rank 2 or higher, for
-every fixed set of leading dimensions, the last dimension represents one
-single OneHotCategorical distribution. When calling distribution
-functions (e.g. `dist.prob(x)`), `logits` and `x` are broadcast to the
-same shape (if possible). In all cases, the last dimension of `logits` and `x`
-represents a single OneHotCategorical distribution.
-
-#### Examples
-
-Creates a 3-class distribution, with the 2nd class the most likely to be
-drawn from.
-
-```python
-p = [0.1, 0.5, 0.4]
-dist = OneHotCategorical(probs=p)
-```
-
-Creates a 3-class distribution, with the 2nd class the most likely to be
-drawn from, using logits.
-
-```python
-logits = [-2, 2, 0]
-dist = OneHotCategorical(logits=logits)
-```
-
-Creates a 3-class distribution, with the 3rd class the most likely to be drawn.
-
-```python
-# The probability of a single one-hot sample is a scalar.
-p = [0.1, 0.4, 0.5]
-dist = OneHotCategorical(probs=p)
-dist.prob([0,1,0]) # Shape []
-
-# p will be broadcast to [[0.1, 0.4, 0.5], [0.1, 0.4, 0.5]] to match.
-samples = [[0,1,0], [1,0,0]]
-dist.prob(samples) # Shape [2]
-```
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.__init__(logits=None, probs=None, dtype=tf.int32, validate_args=False, allow_nan_stats=True, name='OneHotCategorical')` {#OneHotCategorical.__init__}
-
-Initialize OneHotCategorical distributions using class log-probabilities.
-
-##### Args:
-
-
-* <b>`logits`</b>: An N-D `Tensor`, `N >= 1`, representing the log probabilities of a
- set of Categorical distributions. The first `N - 1` dimensions index
- into a batch of independent distributions and the last dimension
- represents a vector of logits for each class. Only one of `logits` or
- `probs` should be passed in.
-* <b>`probs`</b>: An N-D `Tensor`, `N >= 1`, representing the probabilities of a set
- of Categorical distributions. The first `N - 1` dimensions index into a
- batch of independent distributions and the last dimension represents a
- vector of probabilities for each class. Only one of `logits` or `probs`
- should be passed in.
-* <b>`dtype`</b>: The type of the event samples (default: int32).
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.allow_nan_stats` {#OneHotCategorical.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.batch_shape` {#OneHotCategorical.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.batch_shape_tensor(name='batch_shape_tensor')` {#OneHotCategorical.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.cdf(value, name='cdf')` {#OneHotCategorical.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.copy(**override_parameters_kwargs)` {#OneHotCategorical.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.covariance(name='covariance')` {#OneHotCategorical.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.dtype` {#OneHotCategorical.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.entropy(name='entropy')` {#OneHotCategorical.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.event_shape` {#OneHotCategorical.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.event_shape_tensor(name='event_shape_tensor')` {#OneHotCategorical.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.event_size` {#OneHotCategorical.event_size}
-
-Scalar `int32` tensor: the number of classes.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.is_continuous` {#OneHotCategorical.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.is_scalar_batch(name='is_scalar_batch')` {#OneHotCategorical.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.is_scalar_event(name='is_scalar_event')` {#OneHotCategorical.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.log_cdf(value, name='log_cdf')` {#OneHotCategorical.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.log_prob(value, name='log_prob')` {#OneHotCategorical.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.log_survival_function(value, name='log_survival_function')` {#OneHotCategorical.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.logits` {#OneHotCategorical.logits}
-
-Vector of coordinatewise logits.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.mean(name='mean')` {#OneHotCategorical.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.mode(name='mode')` {#OneHotCategorical.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.name` {#OneHotCategorical.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#OneHotCategorical.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.param_static_shapes(cls, sample_shape)` {#OneHotCategorical.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.parameters` {#OneHotCategorical.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.prob(value, name='prob')` {#OneHotCategorical.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.probs` {#OneHotCategorical.probs}
-
-Vector of coordinatewise probabilities.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.reparameterization_type` {#OneHotCategorical.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.sample(sample_shape=(), seed=None, name='sample')` {#OneHotCategorical.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
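-For example (a sketch reusing the 3-class distribution from the examples
-above):
-
-```python
-dist = OneHotCategorical(probs=[0.1, 0.5, 0.4])
-dist.sample(7)  # Shape [7, 3]: seven one-hot vectors of length 3.
-dist.sample()   # Shape [3]: a single one-hot draw.
-```
-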
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.stddev(name='stddev')` {#OneHotCategorical.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.survival_function(value, name='survival_function')` {#OneHotCategorical.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.validate_args` {#OneHotCategorical.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.OneHotCategorical.variance(name='variance')` {#OneHotCategorical.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.ffmpeg.encode_audio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.ffmpeg.encode_audio.md
deleted file mode 100644
index fb9d958f26..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.ffmpeg.encode_audio.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.ffmpeg.encode_audio(audio, file_format=None, samples_per_second=None)` {#encode_audio}
-
-Creates an op that encodes an audio file using sampled audio from a tensor.
-
-##### Args:
-
-
-* <b>`audio`</b>: A rank 2 tensor that has time along dimension 0 and channels along
-    dimension 1. Dimension 0 is `samples_per_second * length` entries long,
-    where `length` is the duration of the audio in seconds.
-* <b>`file_format`</b>: The type of file to encode. "wav" is the only supported format.
-* <b>`samples_per_second`</b>: The number of samples in the audio tensor per second of
- audio.
-
-##### Returns:
-
- A scalar tensor that contains the encoded audio in the specified file
- format.
-
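-For example, a minimal sketch (the sine-wave input is an illustration, not
-part of the original doc):
-
-```python
-import tensorflow as tf
-
-rate = 16000
-t = tf.linspace(0., 1., rate)
-# Rank-2 audio tensor of shape [samples, channels]: one second of mono 440 Hz.
-audio = tf.reshape(tf.sin(2. * 3.141592653589793 * 440. * t), [rate, 1])
-wav = tf.contrib.ffmpeg.encode_audio(
-    audio, file_format='wav', samples_per_second=rate)
-# `wav` is a scalar string tensor; it can be written out with tf.write_file.
-```
-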
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.add_model_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.add_model_variable.md
deleted file mode 100644
index 45944baf03..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.add_model_variable.md
+++ /dev/null
@@ -1,9 +0,0 @@
-### `tf.contrib.framework.add_model_variable(var)` {#add_model_variable}
-
-Adds a variable to the `GraphKeys.MODEL_VARIABLES` collection.
-
-##### Args:
-
-
-* <b>`var`</b>: a variable.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.get_model_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.get_model_variables.md
deleted file mode 100644
index 078140ccb6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.get_model_variables.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.framework.get_model_variables(scope=None, suffix=None)` {#get_model_variables}
-
-Gets the list of model variables, filtered by scope and/or suffix.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the variables to return.
-* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
-
-##### Returns:
-
- a list of variables in collection with scope and suffix.
-
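-For example (the scope and suffix names are placeholders):
-
-```python
-weights = tf.contrib.framework.get_model_variables('my_scope', suffix='weights')
-```
-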
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.local_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.local_variable.md
deleted file mode 100644
index ac0abb46ad..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.framework.local_variable.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.contrib.framework.local_variable(initial_value, validate_shape=True, name=None)` {#local_variable}
-
-Create a variable and add it to the `GraphKeys.LOCAL_VARIABLES` collection.
-
-##### Args:
-
-
-* <b>`initial_value`</b>: See variables.Variable.__init__.
-* <b>`validate_shape`</b>: See variables.Variable.__init__.
-* <b>`name`</b>: See variables.Variable.__init__.
-
-##### Returns:
-
- New variable.
-
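-For example (a sketch):
-
-```python
-counter = tf.contrib.framework.local_variable(0, name='counter')
-# The variable lives in GraphKeys.LOCAL_VARIABLES, so it is initialized by
-# tf.local_variables_initializer() rather than the global initializer.
-```
-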
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.copy_op_handler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.copy_op_handler.md
deleted file mode 100644
index 1ea461dd9e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.copy_op_handler.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.contrib.graph_editor.copy_op_handler(info, op, copy_shape=True)` {#copy_op_handler}
-
-Copy a `tf.Operation`.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`op`</b>: the `tf.Operation` to be copied.
-* <b>`copy_shape`</b>: also copy the shape of the tensor
-
-##### Returns:
-
-  A `(op, op_outputs)` tuple containing the transformed op and its outputs.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.detach_outputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.detach_outputs.md
deleted file mode 100644
index 922d905c82..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.detach_outputs.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.contrib.graph_editor.detach_outputs(sgv, control_outputs=None)` {#detach_outputs}
-
-Detach the output of a subgraph view.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
- Note that sgv is modified in place.
-* <b>`control_outputs`</b>: a util.ControlOutputs instance or None. If not None the
- control outputs are also detached.
-
-##### Returns:
-
- A tuple `(sgv, output_placeholders)` where
- `sgv` is a new subgraph view of the detached subgraph;
- `output_placeholders` is a list of the created output placeholders.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-    the same rules as the function subgraph.make_view.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.filter_ops_from_regex.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.filter_ops_from_regex.md
deleted file mode 100644
index 41332dfdba..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.filter_ops_from_regex.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.contrib.graph_editor.filter_ops_from_regex(ops, regex)` {#filter_ops_from_regex}
-
-Get all the operations that match the given regex.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of `tf.Operation`.
-* <b>`regex`</b>: a regular expression matching the operation's name.
- For example, `"^foo(/.*)?$"` will match all the operations in the "foo"
- scope.
-
-##### Returns:
-
- A list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of `tf.Operation`.
-
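-For example (a sketch; it assumes a "foo" name scope exists in the default
-graph, which is convertible to a list of operations):
-
-```python
-foo_ops = tf.contrib.graph_editor.filter_ops_from_regex(
-    tf.get_default_graph(), r"^foo(/.*)?$")
-```
-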
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.filter_ts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.filter_ts.md
deleted file mode 100644
index ef4764bc2c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.filter_ts.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.graph_editor.filter_ts(ops, positive_filter)` {#filter_ts}
-
-Get all the tensors which are input or output of an op in ops.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of `tf.Operation`.
-* <b>`positive_filter`</b>: a function deciding whether to keep a tensor or not.
-    If the Python value `True` is passed instead of a function, all the tensors
-    are returned.
-
-##### Returns:
-
- A list of `tf.Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of `tf.Operation`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.op_type.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.op_type.md
deleted file mode 100644
index bbf3dfc4c7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.op_type.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.contrib.graph_editor.op_type(op_types, op=None)` {#op_type}
-
-Check if an op is of the given type.
-
-##### Args:
-
-
-* <b>`op_types`</b>: tuple of strings containing the types to check against.
- For instance: ("Add", "Const")
-* <b>`op`</b>: the operation to check (or None).
-
-##### Returns:
-
-  If op is not None, returns True if the op is of the correct type;
-  if op is None, returns a lambda function which does the type checking.
-
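-For example, calling it with `op=None` yields a reusable predicate (a sketch,
-not from the original doc):
-
-```python
-is_arith = tf.contrib.graph_editor.op_type(("Add", "Mul"))
-arith_ops = [op for op in tf.get_default_graph().get_operations()
-             if is_arith(op)]
-```
-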
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.reroute_inputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.reroute_inputs.md
deleted file mode 100644
index 91c1d91008..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.reroute_inputs.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.graph_editor.reroute_inputs(sgv0, sgv1)` {#reroute_inputs}
-
-Re-route all the inputs of sgv0 to sgv1 (see `_reroute_inputs`).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.select_ts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.select_ts.md
deleted file mode 100644
index a37e0948c8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.select_ts.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.contrib.graph_editor.select_ts(*args, **kwargs)` {#select_ts}
-
-Helper to select tensors.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not) or 2) (array of)
- `tf.Tensor`. `tf.Operation` instances are silently ignored.
-* <b>`**kwargs`</b>: 'graph': `tf.Graph` in which to perform the regex query. This is
-    required when using regex.
-    'positive_filter': an element is selected only if `positive_filter(elem)` is
-    `True`. This is optional.
- 'restrict_ts_regex': a regular expression is ignored if it doesn't start
- with the substring "(?#ts)".
-
-##### Returns:
-
- A list of `tf.Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Tensor`
- or an (array of) `tf.Operation` (silently ignored) or a string
- or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected or if a regular
- expression is used without passing a graph as a keyword argument.
-
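-For example (a sketch; the "foo" scope is a placeholder):
-
-```python
-g = tf.get_default_graph()
-foo_tensors = tf.contrib.graph_editor.select_ts("^foo/.*", graph=g)
-```
-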
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.swap_outputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.swap_outputs.md
deleted file mode 100644
index 31ed5df8d4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.graph_editor.swap_outputs.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.graph_editor.swap_outputs(sgv0, sgv1)` {#swap_outputs}
-
-Swap all the outputs of sgv0 and sgv1 (see _reroute_outputs).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.crossed_column.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.crossed_column.md
deleted file mode 100644
index a7ca34c986..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.crossed_column.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.contrib.layers.crossed_column(columns, hash_bucket_size, combiner='sum', ckpt_to_load_from=None, tensor_name_in_ckpt=None, hash_key=None)` {#crossed_column}
-
-Creates a _CrossedColumn for performing feature crosses.
-
-##### Args:
-
-
-* <b>`columns`</b>: An iterable of _FeatureColumn. Items can be an instance of
- _SparseColumn, _CrossedColumn, or _BucketizedColumn.
-* <b>`hash_bucket_size`</b>: An int that is > 1. The number of buckets.
-* <b>`combiner`</b>: A string specifying how to reduce if there are multiple entries
- in a single row. Currently "mean", "sqrtn" and "sum" are supported, with
- "sum" the default. "sqrtn" often achieves good accuracy, in particular
-    with bag-of-words columns. Each of these can be thought of as an
-    example-level normalization on the column:
- * "sum": do not normalize
- * "mean": do l1 normalization
- * "sqrtn": do l2 normalization
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`ckpt_to_load_from`</b>: (Optional). String representing checkpoint name/pattern
- to restore the column weights. Required if `tensor_name_in_ckpt` is not
- None.
-* <b>`tensor_name_in_ckpt`</b>: (Optional). Name of the `Tensor` in the provided
- checkpoint from which to restore the column weights. Required if
- `ckpt_to_load_from` is not None.
-* <b>`hash_key`</b>: Specify the hash_key that will be used by the `FingerprintCat64`
- function to combine the crosses fingerprints on SparseFeatureCrossOp
- (optional).
-
-##### Returns:
-
- A _CrossedColumn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any item in columns is not an instance of _SparseColumn,
- _CrossedColumn, or _BucketizedColumn, or
- hash_bucket_size is not an int.
-* <b>`ValueError`</b>: if hash_bucket_size is not > 1 or
- len(columns) is not > 1.
-
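-For example (a sketch; the column names are placeholders, and the base columns
-are built with `sparse_column_with_hash_bucket`):
-
-```python
-country = tf.contrib.layers.sparse_column_with_hash_bucket(
-    'country', hash_bucket_size=100)
-language = tf.contrib.layers.sparse_column_with_hash_bucket(
-    'language', hash_bucket_size=100)
-country_x_language = tf.contrib.layers.crossed_column(
-    [country, language], hash_bucket_size=10000)
-```
-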
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.separable_convolution2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.separable_convolution2d.md
deleted file mode 100644
index a946caf980..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.separable_convolution2d.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.contrib.layers.separable_convolution2d(*args, **kwargs)` {#separable_convolution2d}
-
-Adds a depth-separable 2D convolution with optional batch_norm layer.
-
-This op first performs a depthwise convolution that acts separately on
-channels, creating a variable called `depthwise_weights`. If `num_outputs`
-is not None, it adds a pointwise convolution that mixes channels, creating a
-variable called `pointwise_weights`. Then, if `batch_norm_params` is None,
-it adds bias to the result, creating a variable called `biases`, otherwise
-it adds a batch normalization layer. It finally applies an activation function
-to produce the end result.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor of size [batch_size, height, width, channels].
-* <b>`num_outputs`</b>: The number of pointwise convolution output filters. If it
-    is None, the pointwise convolution stage is skipped.
-* <b>`kernel_size`</b>: A list of length 2: [kernel_height, kernel_width] of
-    the filters. Can be an int if both values are the same.
-* <b>`depth_multiplier`</b>: The number of depthwise convolution output channels for
- each input channel. The total number of depthwise convolution output
- channels will be equal to `num_filters_in * depth_multiplier`.
-* <b>`stride`</b>: A list of length 2: [stride_height, stride_width], specifying the
- depthwise convolution stride. Can be an int if both strides are the same.
-* <b>`padding`</b>: One of 'VALID' or 'SAME'.
-* <b>`rate`</b>: A list of length 2: [rate_height, rate_width], specifying the dilation
-    rates for atrous convolution. Can be an int if both rates are the same.
- If any value is larger than one, then both stride values need to be one.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
-  `normalizer_fn` is provided then `biases_initializer` and
-  `biases_regularizer` are ignored and `biases` is neither created nor added.
-  Defaults to None, meaning no normalizer function is used.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
-  able to reuse the layer, a variable scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
-  a dictionary containing a different list of collections per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs to.
-* <b>`trainable`</b>: Whether or not the variables should be trainable.
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sparse_column_with_integerized_feature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sparse_column_with_integerized_feature.md
deleted file mode 100644
index 99e91fd792..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.sparse_column_with_integerized_feature.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.contrib.layers.sparse_column_with_integerized_feature(column_name, bucket_size, combiner='sum', dtype=tf.int64)` {#sparse_column_with_integerized_feature}
-
-Creates an integerized _SparseColumn.
-
-Use this when your features are already pre-integerized into int64 IDs, that
-is, when the feature values themselves are already the desired output values.
-Integerized means the feature value itself can be used as the id.
-
-Typically this is used for reading contiguous ranges of integer indexes, but
-it doesn't have to be. The output value is simply copied from the input
-feature, whatever it is. Just be aware that large gaps of unused integers can
-be wasteful downstream (for instance, if you build a one-hot tensor from
-these IDs, the unused integers will appear as entries that are always zero).
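-
-A hedged usage sketch (column name and bucket size hypothetical):
-
-```python
-import tensorflow as tf
-
-# User IDs already encoded as int64 values in [0, 100000).
-user_id = tf.contrib.layers.sparse_column_with_integerized_feature(
-    column_name="user_id", bucket_size=100000)
-```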
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining sparse column name.
-* <b>`bucket_size`</b>: An int that is > 1. The number of buckets. It should be bigger
-  than the maximum feature value. In other words, features in this column
-  should be int64s in the range [0, bucket_size).
-* <b>`combiner`</b>: A string specifying how to reduce if the sparse column is
- multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum"
- the default. "sqrtn" often achieves good accuracy, in particular with
- bag-of-words columns.
- * "sum": do not normalize features in the column
- * "mean": do l1 normalization on features in the column
- * "sqrtn": do l2 normalization on features in the column
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`dtype`</b>: Type of features. It should be an integer type. Default value is
- dtypes.int64.
-
-##### Returns:
-
- An integerized _SparseColumn definition.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: bucket_size is not greater than 1.
-* <b>`ValueError`</b>: dtype is not integer.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_activation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_activation.md
deleted file mode 100644
index 3aed0ff43c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.summarize_activation.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.contrib.layers.summarize_activation(op)` {#summarize_activation}
-
-Summarize an activation.
-
-This applies the given activation and adds useful summaries specific to the
-activation.
-
-##### Args:
-
-
-* <b>`op`</b>: The tensor to summarize (assumed to be a layer activation).
-
-##### Returns:
-
- The summary op created to summarize `op`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.variance_scaling_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.variance_scaling_initializer.md
deleted file mode 100644
index 27b1b58d7e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.layers.variance_scaling_initializer.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.contrib.layers.variance_scaling_initializer(factor=2.0, mode='FAN_IN', uniform=False, seed=None, dtype=tf.float32)` {#variance_scaling_initializer}
-
-Returns an initializer that generates tensors without scaling variance.
-
-When initializing a deep network, it is in principle advantageous to keep
-the scale of the input variance constant, so that it does not explode or
-vanish by the time it reaches the final layer. This initializer uses the
-following formula:
-
-```python
-  if mode == 'FAN_IN':    # Count only number of input connections.
-    n = fan_in
-  elif mode == 'FAN_OUT':  # Count only number of output connections.
-    n = fan_out
-  elif mode == 'FAN_AVG':  # Average number of inputs and output connections.
-    n = (fan_in + fan_out)/2.0
-
- truncated_normal(shape, 0.0, stddev=sqrt(factor / n))
-```
-
-* To get [Delving Deep into Rectifiers](
- http://arxiv.org/pdf/1502.01852v1.pdf), use (Default):<br/>
- `factor=2.0 mode='FAN_IN' uniform=False`
-* To get [Convolutional Architecture for Fast Feature Embedding](
- http://arxiv.org/abs/1408.5093), use:<br/>
- `factor=1.0 mode='FAN_IN' uniform=True`
-* To get [Understanding the difficulty of training deep feedforward neural
- networks](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf),
- use:<br/>
-  `factor=1.0 mode='FAN_AVG' uniform=True`.
-* To get `xavier_initializer` use either:<br/>
- `factor=1.0 mode='FAN_AVG' uniform=True`, or<br/>
- `factor=1.0 mode='FAN_AVG' uniform=False`.
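-
-For example, a minimal sketch (layer and input shape hypothetical) wiring the
-He-initialization defaults into a fully connected layer:
-
-```python
-import tensorflow as tf
-
-inputs = tf.placeholder(tf.float32, [None, 784])  # hypothetical input
-
-# The default arguments reproduce the He et al. (2015) initialization.
-he_init = tf.contrib.layers.variance_scaling_initializer(
-    factor=2.0, mode='FAN_IN', uniform=False)
-
-hidden = tf.contrib.layers.fully_connected(
-    inputs, 256, weights_initializer=he_init)
-```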
-
-##### Args:
-
-
-* <b>`factor`</b>: Float. A multiplicative factor.
-* <b>`mode`</b>: String. 'FAN_IN', 'FAN_OUT', 'FAN_AVG'.
-* <b>`uniform`</b>: Whether to use uniform or normal distributed random initialization.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`dtype`</b>: The data type. Only floating point types are supported.
-
-##### Returns:
-
- An initializer that generates tensors with unit variance.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `dtype` is not a floating point type.
-* <b>`TypeError`</b>: if `mode` is not in ['FAN_IN', 'FAN_OUT', 'FAN_AVG'].
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.Estimator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.Estimator.md
deleted file mode 100644
index c564b6fcf8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.Estimator.md
+++ /dev/null
@@ -1,397 +0,0 @@
-The Estimator class is the basic TensorFlow model trainer/evaluator.
-- - -
-
-#### `tf.contrib.learn.Estimator.__init__(model_fn=None, model_dir=None, config=None, params=None, feature_engineering_fn=None)` {#Estimator.__init__}
-
-Constructs an `Estimator` instance.
-
-##### Args:
-
-
-* <b>`model_fn`</b>: Model function. Follows the signature:
- * Args:
- * `features`: single `Tensor` or `dict` of `Tensor`s
- (depending on data passed to `fit`),
- * `labels`: `Tensor` or `dict` of `Tensor`s (for multi-head
- models). If mode is `ModeKeys.INFER`, `labels=None` will be
- passed. If the `model_fn`'s signature does not accept
- `mode`, the `model_fn` must still be able to handle
- `labels=None`.
-      * `mode`: Optional. Specifies if this is training, evaluation or
-        prediction. See `ModeKeys`.
-      * `params`: Optional `dict` of hyperparameters. Will receive what
-        is passed to Estimator in the `params` parameter. This allows
-        Estimators to be configured from hyperparameter tuning.
- * `config`: Optional configuration object. Will receive what is passed
- to Estimator in `config` parameter, or the default `config`.
- Allows updating things in your model_fn based on configuration
- such as `num_ps_replicas`.
- * `model_dir`: Optional directory where model parameters, graph etc
- are saved. Will receive what is passed to Estimator in
- `model_dir` parameter, or the default `model_dir`. Allows
- updating things in your model_fn that expect model_dir, such as
- training hooks.
-
- * Returns:
- `ModelFnOps`
-
-    Also supports a legacy signature which returns a tuple of:
-
- * predictions: `Tensor`, `SparseTensor` or dictionary of same.
- Can also be any type that is convertible to a `Tensor` or
- `SparseTensor`, or dictionary of same.
- * loss: Scalar loss `Tensor`.
- * train_op: Training update `Tensor` or `Operation`.
-
-    The following signatures are supported for the function:
-
- * `(features, labels) -> (predictions, loss, train_op)`
- * `(features, labels, mode) -> (predictions, loss, train_op)`
- * `(features, labels, mode, params) -> (predictions, loss, train_op)`
- * `(features, labels, mode, params, config) ->
- (predictions, loss, train_op)`
- * `(features, labels, mode, params, config, model_dir) ->
- (predictions, loss, train_op)`
-
-
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator to
-  continue training a previously saved model.
-* <b>`config`</b>: Configuration object.
-* <b>`params`</b>: `dict` of hyper parameters that will be passed into `model_fn`.
- Keys are names of parameters, values are basic python types.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into `model_fn`. Please check `model_fn` for
- a definition of features and labels.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: parameters of `model_fn` don't match `params`.
-
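-A minimal sketch of a `model_fn` using the legacy tuple signature (the model
-itself is hypothetical):
-
-```python
-import tensorflow as tf
-
-def my_model_fn(features, labels, mode):
-  # Hypothetical linear regression model; features is assumed to be a
-  # single 2-D Tensor.
-  predictions = tf.contrib.layers.fully_connected(
-      features, 1, activation_fn=None)
-  loss = None
-  train_op = None
-  if mode != tf.contrib.learn.ModeKeys.INFER:
-    loss = tf.losses.mean_squared_error(labels, predictions)
-  if mode == tf.contrib.learn.ModeKeys.TRAIN:
-    train_op = tf.contrib.layers.optimize_loss(
-        loss, tf.contrib.framework.get_global_step(),
-        learning_rate=0.1, optimizer='SGD')
-  return predictions, loss, train_op
-
-estimator = tf.contrib.learn.Estimator(model_fn=my_model_fn)
-```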
-
-- - -
-
-#### `tf.contrib.learn.Estimator.__repr__()` {#Estimator.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.config` {#Estimator.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.evaluate(*args, **kwargs)` {#Estimator.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
-  `input_fn` or `feed_fn` is provided; or if `metrics` is not `None` or a
-  `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.export(*args, **kwargs)` {#Estimator.export}
-
-Exports inference graph into given dir. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
-Instructions for updating:
-The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will become required args, and use_deprecated_input_fn will default to False and be removed altogether.
-
-##### Args:
-
-
-* <b>`export_dir`</b>: A string containing a directory to write the exported graph
- and checkpoints.
-* <b>`input_fn`</b>: If `use_deprecated_input_fn` is true, then a function that,
-  given a `Tensor` of `Example` strings, parses it into features that are
-  then passed to the model. Otherwise, a function that takes no argument and
- returns a tuple of (features, labels), where features is a dict of
- string key to `Tensor` and labels is a `Tensor` that's currently not
- used (and so can be `None`).
-* <b>`input_feature_key`</b>: Only used if `use_deprecated_input_fn` is false. String
-  key into the features dict returned by `input_fn` that corresponds to
-  the raw `Example` strings `Tensor` that the exported model will take as
-  input. Can only be `None` if you're using a custom `signature_fn` that
-  does not use the first arg (examples).
-* <b>`use_deprecated_input_fn`</b>: Determines the signature format of `input_fn`.
-* <b>`signature_fn`</b>: Function that returns a default signature and a named
- signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
- for features and `Tensor` or `dict` of `Tensor`s for predictions.
-* <b>`prediction_key`</b>: The key for a tensor in the `predictions` dict (output
- from the `model_fn`) to use as the `predictions` input to the
- `signature_fn`. Optional. If `None`, predictions will pass to
- `signature_fn` without filtering.
-* <b>`default_batch_size`</b>: Default batch size of the `Example` placeholder.
-* <b>`exports_to_keep`</b>: Number of exports to keep.
-* <b>`checkpoint_path`</b>: the checkpoint path of the model to be exported. If it is
-  `None` (which is the default), the latest checkpoint in export_dir is
-  used.
-
-##### Returns:
-
- The string path to the exported directory. NB: this functionality was
- added ca. 2016/09/25; clients that depend on the return value may need
- to handle the case where this function returns None because subclasses
- are not returning a value.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#Estimator.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
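-A hedged usage sketch, assuming the contrib helper
-`tf.contrib.learn.utils.input_fn_utils.build_parsing_serving_input_fn` is
-available (feature spec and paths hypothetical):
-
-```python
-import tensorflow as tf
-
-feature_spec = {'x': tf.FixedLenFeature([1], tf.float32)}
-serving_input_fn = (
-    tf.contrib.learn.utils.input_fn_utils.build_parsing_serving_input_fn(
-        feature_spec))
-
-export_dir = estimator.export_savedmodel(
-    export_dir_base='/tmp/exports', serving_input_fn=serving_input_fn)
-```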
-
-- - -
-
-#### `tf.contrib.learn.Estimator.fit(*args, **kwargs)` {#Estimator.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
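-A short sketch of the `input_fn` calling convention (the data pipeline is
-hypothetical):
-
-```python
-import tensorflow as tf
-
-def train_input_fn():
-  # Hypothetical synthetic regression data.
-  x = tf.random_normal([128, 10])
-  y = tf.reduce_sum(x, axis=1, keep_dims=True)
-  return x, y
-
-estimator.fit(input_fn=train_input_fn, steps=1000)
-```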
-
-- - -
-
-#### `tf.contrib.learn.Estimator.get_params(deep=True)` {#Estimator.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.get_variable_names()` {#Estimator.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.get_variable_value(name)` {#Estimator.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.model_dir` {#Estimator.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.partial_fit(*args, **kwargs)` {#Estimator.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model is taking a long
-time to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
- iterator that returns array of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.Estimator.predict(*args, **kwargs)` {#Estimator.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x` and `batch_size` must be `None`.
-* <b>`batch_size`</b>: Override default batch size. If set, `input_fn` must be
-  `None`.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns all.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- A numpy array of predicted classes or regression values if the
- constructor's `model_fn` returns a `Tensor` for `predictions` or a `dict`
- of numpy arrays if `model_fn` returns a `dict`. Returns an iterable of
- predictions if as_iterable is True.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If x and input_fn are both provided or both `None`.
-
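-A short sketch of the streaming usage (`predict_input_fn` is hypothetical and
-must terminate, e.g. by reading its data with num_epochs=1):
-
-```python
-for prediction in estimator.predict(input_fn=predict_input_fn,
-                                    as_iterable=True):
-  print(prediction)
-```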
-
-- - -
-
-#### `tf.contrib.learn.Estimator.set_params(**params)` {#Estimator.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.extract_dask_data.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.extract_dask_data.md
deleted file mode 100644
index 16342ea708..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.extract_dask_data.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.learn.extract_dask_data(data)` {#extract_dask_data}
-
-Extract data from dask.Series or dask.DataFrame for predictors.
-
-Given a distributed dask.DataFrame or dask.Series containing columns or names
-for one or more predictors, this operation returns a single dask.DataFrame or
-dask.Series that can be iterated over.
-
-##### Args:
-
-
-* <b>`data`</b>: A distributed dask.DataFrame or dask.Series.
-
-##### Returns:
-
- A dask.DataFrame or dask.Series that can be iterated over.
- If the supplied argument is neither a dask.DataFrame nor a dask.Series this
- operation returns it without modification.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.StopAtStep.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.StopAtStep.md
deleted file mode 100644
index 55d4104813..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.StopAtStep.md
+++ /dev/null
@@ -1,154 +0,0 @@
-Monitor to request stop at a specified step.
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.__init__(num_steps=None, last_step=None)` {#StopAtStep.__init__}
-
-Create a StopAtStep monitor.
-
-This monitor requests stop after either a number of steps have been
-executed or a last step has been reached. Only one of the two options can be
-specified.
-
-If `num_steps` is specified, it indicates the number of steps to execute
-after `begin()` is called. If instead `last_step` is specified, it
-indicates the last step we want to execute, as passed to the `step_begin()`
-call.
-
-##### Args:
-
-
-* <b>`num_steps`</b>: Number of steps to execute.
-* <b>`last_step`</b>: Step after which to stop.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
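-A minimal sketch (the estimator and input_fn are hypothetical):
-
-```python
-import tensorflow as tf
-
-# Request a stop after 1000 additional steps.
-stop_monitor = tf.contrib.learn.monitors.StopAtStep(num_steps=1000)
-estimator.fit(input_fn=train_input_fn, monitors=[stop_monitor])
-```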
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.begin(max_steps=None)` {#StopAtStep.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.end(session=None)` {#StopAtStep.end}
-
-Callback at the end of training/evaluation.
-
-##### Args:
-
-
-* <b>`session`</b>: A `tf.Session` object that can be used to run ops.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.epoch_begin(epoch)` {#StopAtStep.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.epoch_end(epoch)` {#StopAtStep.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.post_step(step, session)` {#StopAtStep.post_step}
-
-Callback after the step is finished.
-
-Called after step_end and receives a session to perform extra session.run
-calls. If a failure occurred during the step, this will be called as well.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, global step of the model.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.run_on_all_workers` {#StopAtStep.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.set_estimator(estimator)` {#StopAtStep.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.step_begin(step)` {#StopAtStep.step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.StopAtStep.step_end(step, output)` {#StopAtStep.step_end}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.SummarySaver.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.SummarySaver.md
deleted file mode 100644
index 056c1c1839..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.SummarySaver.md
+++ /dev/null
@@ -1,175 +0,0 @@
-Saves summaries every N steps.
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.__init__(summary_op, save_steps=100, output_dir=None, summary_writer=None, scaffold=None)` {#SummarySaver.__init__}
-
-Initializes a `SummarySaver` monitor.
-
-##### Args:
-
-
-* <b>`summary_op`</b>: `Tensor` of type `string`. A serialized `Summary` protocol
- buffer, as output by TF summary methods like `summary.scalar` or
- `summary.merge_all`.
-* <b>`save_steps`</b>: `int`, save summaries every N steps. See `EveryN`.
-* <b>`output_dir`</b>: `string`, the directory to save the summaries to. Only used
- if no `summary_writer` is supplied.
-* <b>`summary_writer`</b>: `SummaryWriter`. If `None` and an `output_dir` was passed,
- one will be created accordingly.
-* <b>`scaffold`</b>: `Scaffold` to get summary_op if it's not provided.
-
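-A minimal sketch (output directory hypothetical; assumes summaries have
-already been defined so that `tf.summary.merge_all()` is not None):
-
-```python
-import tensorflow as tf
-
-saver = tf.contrib.learn.monitors.SummarySaver(
-    summary_op=tf.summary.merge_all(),
-    save_steps=100,
-    output_dir='/tmp/summaries')
-estimator.fit(input_fn=train_input_fn, steps=1000, monitors=[saver])
-```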
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.begin(max_steps=None)` {#SummarySaver.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.end(session=None)` {#SummarySaver.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.epoch_begin(epoch)` {#SummarySaver.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.epoch_end(epoch)` {#SummarySaver.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.every_n_post_step(step, session)` {#SummarySaver.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.every_n_step_begin(step)` {#SummarySaver.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.every_n_step_end(step, outputs)` {#SummarySaver.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.post_step(step, session)` {#SummarySaver.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.run_on_all_workers` {#SummarySaver.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.set_estimator(estimator)` {#SummarySaver.set_estimator}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.step_begin(step)` {#SummarySaver.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummarySaver.step_end(step, output)` {#SummarySaver.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-  the values resulting from running these tensors. Values may be either
-  scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.SummaryWriterCache.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.SummaryWriterCache.md
deleted file mode 100644
index 8c700a1899..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.learn.monitors.SummaryWriterCache.md
+++ /dev/null
@@ -1,26 +0,0 @@
-Cache for file writers.
-
-This class caches file writers, one per directory.
-- - -
-
-#### `tf.contrib.learn.monitors.SummaryWriterCache.clear()` {#SummaryWriterCache.clear}
-
-Clear cached summary writers. Currently only used for unit tests.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.SummaryWriterCache.get(logdir)` {#SummaryWriterCache.get}
-
-Returns the FileWriter for the specified directory.
-
-##### Args:
-
-
-* <b>`logdir`</b>: str, name of the directory.
-
-##### Returns:
-
- A `FileWriter`.
-
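-For instance (directory hypothetical):
-
-```python
-writer = tf.contrib.learn.monitors.SummaryWriterCache.get('/tmp/logdir')
-# Repeated calls with the same directory return the same cached writer.
-assert writer is tf.contrib.learn.monitors.SummaryWriterCache.get('/tmp/logdir')
-```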
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.legacy_seq2seq.basic_rnn_seq2seq.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.legacy_seq2seq.basic_rnn_seq2seq.md
deleted file mode 100644
index a3f7faac5e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.legacy_seq2seq.basic_rnn_seq2seq.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.contrib.legacy_seq2seq.basic_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, dtype=tf.float32, scope=None)` {#basic_rnn_seq2seq}
-
-Basic RNN sequence-to-sequence model.
-
-This model first runs an RNN to encode encoder_inputs into a state vector,
-then runs a decoder, initialized with the last encoder state, on decoder_inputs.
-Encoder and decoder use the same RNN cell type, but don't share parameters.
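-
-A minimal sketch (sequence length, sizes, and cell choice hypothetical):
-
-```python
-import tensorflow as tf
-
-batch_size, input_size, steps = 32, 8, 5
-encoder_inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
-                  for _ in range(steps)]
-decoder_inputs = [tf.placeholder(tf.float32, [batch_size, input_size])
-                  for _ in range(steps)]
-cell = tf.contrib.rnn.GRUCell(16)
-outputs, state = tf.contrib.legacy_seq2seq.basic_rnn_seq2seq(
-    encoder_inputs, decoder_inputs, cell)
-```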
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`decoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`dtype`</b>: The dtype of the initial state of the RNN cell (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; default: "basic_rnn_seq2seq".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_size] containing the generated outputs.
-* <b>`state`</b>: The state of each decoder cell in the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.legacy_seq2seq.tied_rnn_seq2seq.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.legacy_seq2seq.tied_rnn_seq2seq.md
deleted file mode 100644
index 06369b1173..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.legacy_seq2seq.tied_rnn_seq2seq.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.contrib.legacy_seq2seq.tied_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, loop_function=None, dtype=tf.float32, scope=None)` {#tied_rnn_seq2seq}
-
-RNN sequence-to-sequence model with tied encoder and decoder parameters.
-
-This model first runs an RNN to encode encoder_inputs into a state vector, and
-then runs a decoder, initialized with the last encoder state, on decoder_inputs.
-Encoder and decoder use the same RNN cell and share parameters.
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`decoder_inputs`</b>: A list of 2D Tensors [batch_size x input_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`loop_function`</b>: If not None, this function will be applied to the i-th output
-  in order to generate the (i+1)-th input, and decoder_inputs will be ignored,
-  except for the first element (the "GO" symbol); see rnn_decoder for details.
-* <b>`dtype`</b>: The dtype of the initial state of the rnn cell (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; default: "tied_rnn_seq2seq".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_size] containing the generated outputs.
-* <b>`state`</b>: The state of each decoder cell in each time-step. This is a list
- with length len(decoder_inputs) -- one item for each time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.linalg.LinearOperatorScaledIdentity.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.linalg.LinearOperatorScaledIdentity.md
deleted file mode 100644
index 4dc1f88b1f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.linalg.LinearOperatorScaledIdentity.md
+++ /dev/null
@@ -1,543 +0,0 @@
-`LinearOperator` acting like a scaled [batch] identity matrix `A = c I`.
-
-This operator acts like a scaled [batch] identity matrix `A` with shape
-`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is
-a scaled version of the `N x N` identity matrix.
-
-`LinearOperatorScaledIdentity` is initialized with `num_rows`, and a `multiplier`
-(a `Tensor`) of shape `[B1,...,Bb]`. `N` is set to `num_rows`, and the
-`multiplier` determines the scale for each batch member.
-
-```python
-# Create a 2 x 2 scaled identity matrix.
-operator = LinearOperatorScaledIdentity(num_rows=2, multiplier=3.)
-
-operator.to_dense()
-==> [[3., 0.]
- [0., 3.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_determinant()
-==> 2 * Log[3]
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> 3 * x
-
-y = tf.random_normal(shape=[3, 2, 4])
-# Note that y.shape is compatible with operator.shape because operator.shape
-# is broadcast to [3, 2, 2].
-x = operator.solve(y)
-==> y / 3
-
-# Create a 2-batch of 2 x 2 scaled identity matrices.
-operator = LinearOperatorScaledIdentity(
-    num_rows=2, multiplier=[5., 5.])
-operator.to_dense()
-==> [[[5., 0.]
- [0., 5.]],
- [[5., 0.]
- [0., 5.]]]
-
-x = ... Shape [2, 2, 3]
-operator.apply(x)
-==> 5 * x
-
-# Here the operator and x have different batch_shape, and are broadcast.
-x = ... Shape [1, 2, 3]
-operator.apply(x)
-==> 5 * x
-```
-
-### Shape compatibility
-
-This operator acts on [batch] matrices with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [N, N], with b >= 0
-x.shape = [C1,...,Cc] + [N, R],
-and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
-```
-
-### Performance
-
-* `operator.apply(x)` is `O(D1*...*Dd*N*R)`
-* `operator.solve(x)` is `O(D1*...*Dd*N*R)`
-* `operator.determinant()` is `O(D1*...*Dd)`
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.__init__(num_rows, multiplier, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, assert_proper_shapes=False, name='LinearOperatorScaledIdentity')` {#LinearOperatorScaledIdentity.__init__}
-
-Initialize a `LinearOperatorScaledIdentity`.
-
-The `LinearOperatorScaledIdentity` is initialized with `num_rows`, which
-determines the size of each identity matrix, and a `multiplier`,
-which defines `dtype`, batch shape, and scale of each matrix.
-
-This operator is able to broadcast the leading (batch) dimensions.
-
-##### Args:
-
-
-* <b>`num_rows`</b>: Scalar non-negative integer `Tensor`. Number of rows in the
- corresponding identity matrix.
-* <b>`multiplier`</b>: `Tensor` of shape `[B1,...,Bb]`, or `[]` (a scalar).
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite.
-* <b>`assert_proper_shapes`</b>: Python `bool`. If `False`, only perform static
- checks that initialization and method arguments have proper shape.
- If `True`, and static checks are inconclusive, add asserts to the graph.
-* <b>`name`</b>: A name for this `LinearOperator`
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `num_rows` is determined statically to be non-scalar, or
- negative.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.add_to_tensor(mat, name='add_to_tensor')` {#LinearOperatorScaledIdentity.add_to_tensor}
-
-Add the matrix represented by this operator to `mat`. Equivalent to `c * I + mat`.
-
-##### Args:
-
-
-* <b>`mat`</b>: `Tensor` with same `dtype` and shape broadcastable to `self`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.apply(x, adjoint=False, name='apply')` {#LinearOperatorScaledIdentity.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.assert_non_singular(name='assert_non_singular')` {#LinearOperatorScaledIdentity.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorScaledIdentity.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorScaledIdentity.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.batch_shape` {#LinearOperatorScaledIdentity.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorScaledIdentity.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.determinant(name='det')` {#LinearOperatorScaledIdentity.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.diag_part(name='diag_part')` {#LinearOperatorScaledIdentity.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.domain_dimension` {#LinearOperatorScaledIdentity.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorScaledIdentity.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.dtype` {#LinearOperatorScaledIdentity.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.graph_parents` {#LinearOperatorScaledIdentity.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.is_non_singular` {#LinearOperatorScaledIdentity.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.is_positive_definite` {#LinearOperatorScaledIdentity.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.is_self_adjoint` {#LinearOperatorScaledIdentity.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.is_square` {#LinearOperatorScaledIdentity.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.log_abs_determinant(name='log_abs_det')` {#LinearOperatorScaledIdentity.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.multiplier` {#LinearOperatorScaledIdentity.multiplier}
-
-The [batch] scalar `Tensor`, `c` in `cI`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.name` {#LinearOperatorScaledIdentity.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.range_dimension` {#LinearOperatorScaledIdentity.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorScaledIdentity.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.shape` {#LinearOperatorScaledIdentity.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.shape_tensor(name='shape_tensor')` {#LinearOperatorScaledIdentity.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorScaledIdentity.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.tensor_rank` {#LinearOperatorScaledIdentity.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorScaledIdentity.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorScaledIdentity.to_dense(name='to_dense')` {#LinearOperatorScaledIdentity.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.linalg.LinearOperatorUDVHUpdate.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.linalg.LinearOperatorUDVHUpdate.md
deleted file mode 100644
index 6705f62ac6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.linalg.LinearOperatorUDVHUpdate.md
+++ /dev/null
@@ -1,600 +0,0 @@
-Perturb a `LinearOperator` with a rank `K` update.
-
-This operator acts like a [batch] matrix `A` with shape
-`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is
-an `M x N` matrix.
-
-`LinearOperatorUDVHUpdate` represents `A = L + U D V^H`, where
-
-```
-L, is a LinearOperator representing [batch] M x N matrices
-U, is a [batch] M x K matrix. Typically K << M.
-D, is a [batch] K x K matrix.
-V, is a [batch] N x K matrix. Typically K << N.
-V^H is the Hermitian transpose (adjoint) of V.
-```
-
-If `M = N`, determinants and solves are done using the matrix determinant
-lemma and Woodbury identities, and thus require L and D to be non-singular.
-
-Solves and determinants will be attempted unless the "is_non_singular"
-property of L and D is False.
-
-In the event that L and D are positive-definite, and U = V, solves and
-determinants can be done using a Cholesky factorization.
-
-```python
-# Create a 3 x 3 diagonal linear operator.
-diag_operator = LinearOperatorDiag(
- diag=[1., 2., 3.], is_non_singular=True, is_self_adjoint=True,
- is_positive_definite=True)
-
-# Perturb with a rank 2 perturbation
-operator = LinearOperatorUDVHUpdate(
- operator=diag_operator,
- u=[[1., 2.], [-1., 3.], [0., 0.]],
- diag=[11., 12.],
- v=[[1., 2.], [-1., 3.], [10., 10.]])
-
-operator.shape
-==> [3, 3]
-
-operator.log_determinant()
-==> scalar Tensor
-
-x = ... Shape [3, 4] Tensor
-operator.apply(x)
-==> Shape [3, 4] Tensor
-```
-
-### Shape compatibility
-
-This operator acts on [batch] matrices with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [M, N], with b >= 0
-x.shape = [B1,...,Bb] + [N, R], with R >= 0.
-```
-
-### Performance
-
-Suppose `operator` is a `LinearOperatorUDVHUpdate` of shape `[M, N]`,
-made from a rank `K` update of `base_operator` which performs `.apply(x)` on
-`x` having `x.shape = [N, R]` with `O(L_apply*N*R)` complexity (and similarly
-for `solve` and `determinant`). Then, if `x.shape = [N, R]`,
-
-* `operator.apply(x)` is `O(L_apply*N*R + K*N*R)`
-
-and if `M = N`,
-
-* `operator.solve(x)` is `O(L_apply*N*R + N*K*R + K^2*R + K^3)`
-* `operator.determinant()` is `O(L_determinant + L_solve*N*K + K^2*N + K^3)`
-
-If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and
-`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite, diag_positive, square`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.__init__(base_operator, u, diag=None, v=None, is_diag_positive=None, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, is_square=None, name='LinearOperatorUDVHUpdate')` {#LinearOperatorUDVHUpdate.__init__}
-
-Initialize a `LinearOperatorUDVHUpdate`.
-
-This creates a `LinearOperator` of the form `A = L + U D V^H`, with
-`L` a `LinearOperator`, `U, V` both [batch] matrices, and `D` a [batch]
-diagonal matrix.
-
-If `L` is non-singular, solves and determinants are available.
-Solves/determinants both involve a solve/determinant of a `K x K` system.
-In the event that L and D are self-adjoint positive-definite, and U = V,
-this can be done using a Cholesky factorization. The user should set the
-`is_X` matrix property hints, which will trigger the appropriate code path.
-
-##### Args:
-
-
-* <b>`base_operator`</b>: Shape `[B1,...,Bb, M, N]` real `float32` or `float64`
- `LinearOperator`. This is `L` above.
-* <b>`u`</b>: Shape `[B1,...,Bb, M, K]` `Tensor` of same `dtype` as `base_operator`.
- This is `U` above.
-* <b>`diag`</b>: Optional shape `[B1,...,Bb, K]` `Tensor` with same `dtype` as
- `base_operator`. This is the diagonal of `D` above.
- Defaults to `D` being the identity operator.
-* <b>`v`</b>: Optional `Tensor` of same `dtype` as `u` and shape `[B1,...,Bb, N, K]`.
-  Defaults to `v = u`, in which case the perturbation is symmetric.
-  If `M != N`, then `v` must be set since the perturbation is not square.
-* <b>`is_diag_positive`</b>: Python `bool`. If `True`, expect `diag > 0`.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
- Default is `None`, unless `is_positive_definite` is auto-set to be
- `True` (see below).
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose. Default is `None`, unless `base_operator` is self-adjoint
- and `v = None` (meaning `u=v`), in which case this defaults to `True`.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite.
-  Default is `None`, unless `base_operator` is positive-definite,
-  `v = None` (meaning `u=v`), and `is_diag_positive` is `True`, in which
-  case this defaults to `True`.
-* <b>`is_square`</b>: Expect that this operator acts like square [batch] matrices.
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `is_X` flags are set in an inconsistent way.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorUDVHUpdate.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.apply(x, adjoint=False, name='apply')` {#LinearOperatorUDVHUpdate.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.assert_non_singular(name='assert_non_singular')` {#LinearOperatorUDVHUpdate.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorUDVHUpdate.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorUDVHUpdate.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.base_operator` {#LinearOperatorUDVHUpdate.base_operator}
-
-If this operator is `A = L + U D V^H`, this is the `L`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.batch_shape` {#LinearOperatorUDVHUpdate.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorUDVHUpdate.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.determinant(name='det')` {#LinearOperatorUDVHUpdate.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.diag_arg` {#LinearOperatorUDVHUpdate.diag_arg}
-
-If this operator is `A = L + U D V^H`, this is the diagonal of `D`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.diag_operator` {#LinearOperatorUDVHUpdate.diag_operator}
-
-If this operator is `A = L + U D V^H`, this is `D`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.diag_part(name='diag_part')` {#LinearOperatorUDVHUpdate.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.domain_dimension` {#LinearOperatorUDVHUpdate.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorUDVHUpdate.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.dtype` {#LinearOperatorUDVHUpdate.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.graph_parents` {#LinearOperatorUDVHUpdate.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_diag_positive` {#LinearOperatorUDVHUpdate.is_diag_positive}
-
-If this operator is `A = L + U D V^H`, this hints `D > 0` elementwise.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_non_singular` {#LinearOperatorUDVHUpdate.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_positive_definite` {#LinearOperatorUDVHUpdate.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_self_adjoint` {#LinearOperatorUDVHUpdate.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.is_square` {#LinearOperatorUDVHUpdate.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.log_abs_determinant(name='log_abs_det')` {#LinearOperatorUDVHUpdate.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.name` {#LinearOperatorUDVHUpdate.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.range_dimension` {#LinearOperatorUDVHUpdate.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorUDVHUpdate.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.shape` {#LinearOperatorUDVHUpdate.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.shape_tensor(name='shape_tensor')` {#LinearOperatorUDVHUpdate.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorUDVHUpdate.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.tensor_rank` {#LinearOperatorUDVHUpdate.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorUDVHUpdate.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.to_dense(name='to_dense')` {#LinearOperatorUDVHUpdate.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.u` {#LinearOperatorUDVHUpdate.u}
-
-If this operator is `A = L + U D V^H`, this is the `U`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorUDVHUpdate.v` {#LinearOperatorUDVHUpdate.v}
-
-If this operator is `A = L + U D V^H`, this is the `V`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.add_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.add_loss.md
deleted file mode 100644
index ba2cba6f1b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.add_loss.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.losses.add_loss(*args, **kwargs)` {#add_loss}
-
-Adds an externally defined loss to the collection of losses. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.add_loss instead.
-
-##### Args:
-
-
-* <b>`loss`</b>: A loss `Tensor`.
-* <b>`loss_collection`</b>: Optional collection to add the loss to.
-
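-A minimal usage sketch; `predictions` and `labels` are hypothetical tensors:
-
-```python
-my_loss = tf.reduce_mean(tf.square(predictions - labels))
-tf.contrib.losses.add_loss(my_loss)
-
-# The loss is now included in the default losses collection.
-total_loss = tf.contrib.losses.get_total_loss()
-```
-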
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.cosine_distance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.cosine_distance.md
deleted file mode 100644
index 8d888a8996..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.cosine_distance.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.losses.cosine_distance(*args, **kwargs)` {#cosine_distance}
-
-Adds a cosine-distance loss to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.cosine_distance instead.
-
-Note that the function assumes that `predictions` and `labels` are already
-unit-normalized.
-
-##### Args:
-
-
-* <b>`predictions`</b>: An arbitrary matrix.
-* <b>`labels`</b>: A `Tensor` whose shape matches `predictions`.
-* <b>`dim`</b>: The dimension along which the cosine distance is computed.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
-  `[batch_size]`, or a tensor whose shape matches `predictions`.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` shape doesn't match `labels` shape, or
- `weights` is `None`.
-
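-Because the inputs must already be unit-normalized, a typical call looks like
-the sketch below; `embeddings_a` and `embeddings_b` are hypothetical tensors:
-
-```python
-predictions = tf.nn.l2_normalize(embeddings_a, dim=1)
-labels = tf.nn.l2_normalize(embeddings_b, dim=1)
-loss = tf.contrib.losses.cosine_distance(predictions, labels, dim=1)
-```
-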
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.get_regularization_losses.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.get_regularization_losses.md
deleted file mode 100644
index e48896b8fa..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.losses.get_regularization_losses.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.contrib.losses.get_regularization_losses(*args, **kwargs)` {#get_regularization_losses}
-
-Gets the regularization losses. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.get_regularization_losses instead.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the losses to return.
-
-##### Returns:
-
- A list of loss variables.
-
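-A sketch of folding the collection into a training objective; `data_loss` is
-assumed to be defined elsewhere:
-
-```python
-reg_losses = tf.contrib.losses.get_regularization_losses()
-reg_term = tf.add_n(reg_losses) if reg_losses else tf.constant(0.)
-total_loss = data_loss + reg_term
-```
-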
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.auc_using_histogram.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.auc_using_histogram.md
deleted file mode 100644
index 01f67e402c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.auc_using_histogram.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.contrib.metrics.auc_using_histogram(boolean_labels, scores, score_range, nbins=100, collections=None, check_shape=True, name=None)` {#auc_using_histogram}
-
-AUC computed by maintaining histograms.
-
-Rather than computing AUC directly, this Op maintains Variables containing
-histograms of the scores associated with `True` and `False` labels. By
-comparing these histograms, the AUC is computed, with some discretization
-error. See: "Efficient AUC Learning Curve Calculation" by Bouckaert.
-
-This AUC Op updates in `O(batch_size + nbins)` time and works well even with
-large class imbalance. The accuracy is limited by discretization error due
-to the finite number of bins. If scores are concentrated in fewer bins,
-accuracy is lower. If this is a concern, we recommend trying different
-numbers of bins and comparing results.
-
-##### Args:
-
-
-* <b>`boolean_labels`</b>: 1-D boolean `Tensor`. Entry is `True` if the corresponding
- record is in class.
-* <b>`scores`</b>: 1-D numeric `Tensor`, same shape as boolean_labels.
-* <b>`score_range`</b>: `Tensor` of shape `[2]`, same dtype as `scores`. The min/max
- values of score that we expect. Scores outside range will be clipped.
-* <b>`nbins`</b>: Integer number of bins to use. Accuracy strictly increases as the
- number of bins increases.
-* <b>`collections`</b>: List of graph collections keys. Internal histogram Variables
- are added to these collections. Defaults to `[GraphKeys.LOCAL_VARIABLES]`.
-* <b>`check_shape`</b>: Boolean. If `True`, do a runtime shape check on the scores
- and labels.
-* <b>`name`</b>: A name for this Op. Defaults to "auc_using_histogram".
-
-##### Returns:
-
-
-* <b>`auc`</b>: `float32` scalar `Tensor`. Fetching this converts internal histograms
- to auc value.
-* <b>`update_op`</b>: `Op`, when run, updates internal histograms.
-
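-A usage sketch; the session loop, `num_batches`, and the input tensors are
-assumptions:
-
-```python
-auc, update_op = tf.contrib.metrics.auc_using_histogram(
-    boolean_labels, scores, score_range=[0., 1.], nbins=200)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())  # histograms are local Variables
-  for _ in range(num_batches):
-    sess.run(update_op)
-  print(sess.run(auc))  # converts the histograms to an AUC estimate
-```
-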
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_concat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_concat.md
deleted file mode 100644
index 814ef347f6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_concat.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.contrib.metrics.streaming_concat(values, axis=0, max_size=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_concat}
-
-Concatenate values along an axis across batches.
-
-The function `streaming_concat` creates two local variables, `array` and
-`size`, that are used to store concatenated values. Internally, `array` is
-used as storage for a dynamic array (if `max_size` is `None`), which ensures
-that updates can be run in amortized constant time.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that appends the values of a tensor and returns the
-length of the concatenated axis.
-
-This op allows for evaluating metrics that cannot be updated incrementally
-using the same framework as other streaming metrics.
-
-##### Args:
-
-
-* <b>`values`</b>: `Tensor` to concatenate. Rank and the shape along all axes other
- than the axis to concatenate along must be statically known.
-* <b>`axis`</b>: optional integer axis to concatenate along.
-* <b>`max_size`</b>: optional integer maximum size of `value` along the given axis.
- Once the maximum size is reached, further updates are no-ops. By default,
- there is no maximum size: the array is resized as necessary.
-* <b>`metrics_collections`</b>: An optional list of collections that `value`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections `update_op` should be
- added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value`</b>: A `Tensor` representing the concatenated values.
-* <b>`update_op`</b>: An operation that concatenates the next values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `values` does not have a statically known rank, `axis` is
- not in the valid range or the size of `values` is not statically known
- along any axis other than `axis`.
-
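-A sketch of collecting every prediction seen during evaluation;
-`batch_predictions` is a hypothetical per-batch tensor:
-
-```python
-all_predictions, update_op = tf.contrib.metrics.streaming_concat(
-    batch_predictions)
-# Run update_op once per batch; afterwards all_predictions holds every
-# value seen so far, concatenated along axis 0.
-```
-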
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_true_negatives_at_thresholds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_true_negatives_at_thresholds.md
deleted file mode 100644
index d8fede8878..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.metrics.streaming_true_negatives_at_thresholds.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.metrics.streaming_true_negatives_at_thresholds(predictions, labels, thresholds, weights=None)` {#streaming_true_negatives_at_thresholds}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.CoupledInputForgetGateLSTMCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.CoupledInputForgetGateLSTMCell.md
deleted file mode 100644
index 31e4e3808b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.CoupledInputForgetGateLSTMCell.md
+++ /dev/null
@@ -1,127 +0,0 @@
-Long short-term memory unit (LSTM) recurrent network cell.
-
-The default non-peephole implementation is based on:
-
- http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
-
-S. Hochreiter and J. Schmidhuber.
-"Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.
-
-The peephole implementation is based on:
-
- https://research.google.com/pubs/archive/43905.pdf
-
-Hasim Sak, Andrew Senior, and Francoise Beaufays.
-"Long short-term memory recurrent neural network architectures for
- large scale acoustic modeling." INTERSPEECH, 2014.
-
-The coupling of input and forget gate is based on:
-
- http://arxiv.org/pdf/1503.04069.pdf
-
-Greff et al. "LSTM: A Search Space Odyssey"
-
-The class uses optional peephole connections and an optional projection
-layer.
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__(inputs, state, scope=None)` {#CoupledInputForgetGateLSTMCell.__call__}
-
-Run one step of LSTM.
-
-##### Args:
-
-
-* <b>`inputs`</b>: input Tensor, 2D, batch x num_units.
-* <b>`state`</b>: if `state_is_tuple` is False, this must be a state Tensor,
- `2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
- tuple of state Tensors, both `2-D`, with column sizes `c_state` and
- `m_state`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "LSTMCell".
-
-##### Returns:
-
- A tuple containing:
-  - A `2-D, [batch x output_dim]` Tensor representing the output of the
- LSTM after reading `inputs` when previous state was `state`.
- Here output_dim is:
- num_proj if num_proj was set,
- num_units otherwise.
- - Tensor(s) representing the new state of LSTM after reading `inputs` when
- the previous state was `state`. Same type and shape(s) as `state`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input size cannot be inferred from inputs via
- static shape inference.
-
-
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__(num_units, use_peepholes=False, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=False, activation=tanh)` {#CoupledInputForgetGateLSTMCell.__init__}
-
-Initialize the parameters for an LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell
-* <b>`use_peepholes`</b>: bool, set True to enable diagonal/peephole connections.
-* <b>`initializer`</b>: (optional) The initializer to use for the weight and
- projection matrices.
-* <b>`num_proj`</b>: (optional) int, The output dimensionality for the projection
- matrices. If None, no projection is performed.
-* <b>`proj_clip`</b>: (optional) A float value. If `num_proj > 0` and `proj_clip` is
- provided, then the projected values are clipped elementwise to within
- `[-proj_clip, proj_clip]`.
-* <b>`num_unit_shards`</b>: How to split the weight matrix. If >1, the weight
- matrix is stored across num_unit_shards.
-* <b>`num_proj_shards`</b>: How to split the projection matrix. If >1, the
- projection matrix is stored across num_proj_shards.
-* <b>`forget_bias`</b>: Biases of the forget gate are initialized by default to 1
- in order to reduce the scale of forgetting at the beginning of
- the training.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
- the `c_state` and `m_state`. By default (False), they are concatenated
- along the column axis. This default behavior will soon be deprecated.
-* <b>`activation`</b>: Activation function of the inner states.
-
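-A minimal usage sketch; `inputs` is a hypothetical `[batch, time, depth]`
-float32 tensor:
-
-```python
-cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(
-    num_units=128, use_peepholes=True, state_is_tuple=True)
-outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
-```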
-
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.output_size` {#CoupledInputForgetGateLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size` {#CoupledInputForgetGateLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CoupledInputForgetGateLSTMCell.zero_state(batch_size, dtype)` {#CoupledInputForgetGateLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.DropoutWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.DropoutWrapper.md
deleted file mode 100644
index af7dce705d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.DropoutWrapper.md
+++ /dev/null
@@ -1,69 +0,0 @@
-Operator adding dropout to inputs and outputs of the given cell.
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.__call__(inputs, state, scope=None)` {#DropoutWrapper.__call__}
-
-Run the cell with the declared dropouts.
-
-
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.__init__(cell, input_keep_prob=1.0, output_keep_prob=1.0, seed=None)` {#DropoutWrapper.__init__}
-
-Create a cell with added input and/or output dropout.
-
-Dropout is never used on the state.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, a projection to output_size is added to it.
-* <b>`input_keep_prob`</b>: unit Tensor or float between 0 and 1, input keep
- probability; if it is float and 1, no input dropout will be added.
-* <b>`output_keep_prob`</b>: unit Tensor or float between 0 and 1, output keep
- probability; if it is float and 1, no output dropout will be added.
-* <b>`seed`</b>: (optional) integer, the randomness seed.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-* <b>`ValueError`</b>: if keep_prob is not between 0 and 1.
-
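-A short sketch of typical use, assuming a `BasicLSTMCell` base cell:
-
-```python
-cell = tf.contrib.rnn.BasicLSTMCell(128)
-# Keep 90% of input units and 50% of output units; state is never dropped.
-cell = tf.contrib.rnn.DropoutWrapper(
-    cell, input_keep_prob=0.9, output_keep_prob=0.5)
-```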
-
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.output_size` {#DropoutWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.state_size` {#DropoutWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.DropoutWrapper.zero_state(batch_size, dtype)` {#DropoutWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.InputProjectionWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.InputProjectionWrapper.md
deleted file mode 100644
index 1898136704..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.InputProjectionWrapper.md
+++ /dev/null
@@ -1,67 +0,0 @@
-Operator adding an input projection to the given cell.
-
-Note: in many cases it may be more efficient to not use this wrapper,
-but instead concatenate the whole sequence of your inputs in time,
-do the projection on this batch-concatenated sequence, then split it.
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.__call__(inputs, state, scope=None)` {#InputProjectionWrapper.__call__}
-
-Run the input projection and then the cell.
-
-
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.__init__(cell, num_proj, input_size=None)` {#InputProjectionWrapper.__init__}
-
-Create a cell with input projection.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, a projection of inputs is added before it.
-* <b>`num_proj`</b>: Python integer. The dimension to project to.
-* <b>`input_size`</b>: Deprecated and unused.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-
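-A sketch of projecting raw inputs down to the size a cell expects; the cell
-choice and dimensions are illustrative:
-
-```python
-cell = tf.contrib.rnn.GRUCell(64)
-# Inputs of any (static) depth are first projected to 64 dimensions.
-cell = tf.contrib.rnn.InputProjectionWrapper(cell, num_proj=64)
-```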
-
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.output_size` {#InputProjectionWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.state_size` {#InputProjectionWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.InputProjectionWrapper.zero_state(batch_size, dtype)` {#InputProjectionWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.LSTMBlockWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.LSTMBlockWrapper.md
deleted file mode 100644
index 5cb59b7a0f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.rnn.LSTMBlockWrapper.md
+++ /dev/null
@@ -1,49 +0,0 @@
-This is a helper class that provides housekeeping for LSTM cells.
-
-This may be useful for alternative LSTM and similar type of cells.
-The subclasses must implement `_call_cell` method and `num_units` property.
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockWrapper.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#LSTMBlockWrapper.__call__}
-
-Run this LSTM on inputs, starting from the given state.
-
-##### Args:
-
-
-* <b>`inputs`</b>: `3-D` tensor with shape `[time_len, batch_size, input_size]`
- or a list of `time_len` tensors of shape `[batch_size, input_size]`.
-* <b>`initial_state`</b>: a tuple `(initial_cell_state, initial_output)` with tensors
- of shape `[batch_size, self._num_units]`. If this is not provided, the
- cell is expected to create a zero initial state of type `dtype`.
-* <b>`dtype`</b>: The data type for the initial state and expected output. Required
- if `initial_state` is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs. An
-  `int32` or `int64` vector (tensor) of size `[batch_size]`, with values in
-  `[0, time_len)`. Defaults to `time_len` for each element.
-* <b>`scope`</b>: `VariableScope` for the created subgraph; defaults to class name.
-
-##### Returns:
-
- A pair containing:
-
- - Output: A `3-D` tensor of shape `[time_len, batch_size, output_size]`
- or a list of time_len tensors of shape `[batch_size, output_size]`,
- to match the type of the `inputs`.
- - Final state: a tuple `(cell_state, output)` matching `initial_state`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: in case of shape mismatches
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockWrapper.num_units` {#LSTMBlockWrapper.num_units}
-
-Number of units in this cell (output dimension).
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.bucket.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.bucket.md
deleted file mode 100644
index 19cd27dec8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.bucket.md
+++ /dev/null
@@ -1,86 +0,0 @@
-### `tf.contrib.training.bucket(tensors, which_bucket, batch_size, num_buckets, num_threads=1, capacity=32, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=True, shared_name=None, name=None)` {#bucket}
-
-Lazy bucketing of input tensors according to `which_bucket`.
-
-The argument `tensors` can be a list or a dictionary of tensors.
-The value returned by the function will be of the same type
-as `tensors`.
-
-The tensors entering this function are put into the bucket given by
-`which_bucket`. Each bucket has its own queue. When a bucket contains
-`batch_size` elements, this minibatch is pushed onto a top queue. The
-tensors returned from this function are the result of dequeueing the
-next minibatch from this top queue.
-
-This function is implemented using several queues. A `QueueRunner` for the
-queues is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-As the returned tensors are the result of a dequeue operation, evaluating
-them will throw a `tf.errors.OutOfRangeError` when the input queue is
-exhausted. If these tensors are feeding another input queue, its queue runner
-will catch this exception, however, if they are used in your main thread
-you are responsible for catching this yourself.
-
-*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
-(i) the `shapes` argument is passed, or (ii) all of the tensors in
-`tensors` must have fully-defined shapes. `ValueError` will be
-raised if neither of these conditions holds.
-
-If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
-tensors is known, but individual dimensions may have shape `None`.
-In this case, for each enqueue the dimensions with value `None`
-may have a variable length; upon dequeue, the output tensors will be padded
-on the right to the maximum shape of the tensors in the current minibatch.
-For numbers, this padding takes value 0. For strings, this padding is
-the empty string. See `PaddingFIFOQueue` for more info.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queues are closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape()` method, will have a 0th `Dimension` value of `None`, and
-operations that depend on a fixed batch_size would fail.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors, representing a single element,
- to bucket. Nested lists are not supported.
-* <b>`which_bucket`</b>: An `int32` scalar Tensor taking a value in `[0, num_buckets)`.
-* <b>`batch_size`</b>: The new batch size pulled from the queue (all queues will have
- the same size). If a list is passed in then each bucket will have a
- different batch_size.
- (python int, int32 scalar or iterable of integers of length num_buckets).
-* <b>`num_buckets`</b>: A python integer, the number of buckets.
-* <b>`num_threads`</b>: An integer. The number of threads enqueuing `tensors`.
-* <b>`capacity`</b>: An integer. The maximum number of minibatches in the top queue,
- and also the maximum number of elements within each bucket.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batches to be smaller if there are insufficient items left in the queues.
-* <b>`keep_input`</b>: A `bool` scalar Tensor. If provided, this tensor controls
-  whether the input is added to the queue or not. If it evaluates to `True`,
- then `tensors` are added to the bucket; otherwise they are dropped. This
- tensor essentially acts as a filtering mechanism.
-* <b>`shared_name`</b>: (Optional). If set, the queues will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
-  A tuple `(bucket, outputs)` where `bucket` is
-  an `int32` scalar tensor and `outputs` is a list or
-  dictionary of batched outputs corresponding to elements of `tensors`.
- Every step will receive a new bucket of outputs.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified and cannot be
-  inferred from the elements of `tensors`, or if `batch_size` is a sequence
-  but its length != `num_buckets`.
-
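-A sketch of bucketing variable-length sequences by length; the tensor names
-and the 10-step bucket width are hypothetical:
-
-```python
-seq_len = tf.shape(sequence)[0]
-which_bucket = tf.minimum(seq_len // 10, num_buckets - 1)
-bucket_id, batch = tf.contrib.training.bucket(
-    {'seq': sequence, 'label': label}, which_bucket,
-    batch_size=32, num_buckets=num_buckets, dynamic_pad=True)
-```
-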
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.rejection_sample.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.rejection_sample.md
deleted file mode 100644
index fe3c9866e8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.rejection_sample.md
+++ /dev/null
@@ -1,57 +0,0 @@
-### `tf.contrib.training.rejection_sample(tensors, accept_prob_fn, batch_size, queue_threads=1, enqueue_many=False, prebatch_capacity=16, prebatch_threads=1, runtime_checks=False, name=None)` {#rejection_sample}
-
-Stochastically creates batches by rejection sampling.
-
-Each list of non-batched tensors is evaluated by `accept_prob_fn`, to produce
-a scalar tensor between 0 and 1. This tensor corresponds to the probability of
-being accepted. When `batch_size` tensor groups have been accepted, the batch
-queue will return a mini-batch.
-
-##### Args:
-
-
-* <b>`tensors`</b>: List of tensors for data. All tensors are either one item or a
- batch, according to enqueue_many.
-* <b>`accept_prob_fn`</b>: A python lambda that takes a non-batch tensor from each
- item in `tensors`, and produces a scalar tensor.
-* <b>`batch_size`</b>: Size of batch to be returned.
-* <b>`queue_threads`</b>: The number of threads for the queue that will hold the final
- batch.
-* <b>`enqueue_many`</b>: Bool. If true, interpret input tensors as having a batch
- dimension.
-* <b>`prebatch_capacity`</b>: Capacity for the large queue that is used to convert
- batched tensors to single examples.
-* <b>`prebatch_threads`</b>: Number of threads for the large queue that is used to
- convert batched tensors to single examples.
-* <b>`runtime_checks`</b>: Bool. If true, insert runtime checks on the output of
- `accept_prob_fn`. Using `True` might have a performance impact.
-* <b>`name`</b>: Optional prefix for ops created by this function.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: enqueue_many is True and labels doesn't have a batch
- dimension, or if enqueue_many is False and labels isn't a scalar.
-* <b>`ValueError`</b>: enqueue_many is True, and batch dimension on data and labels
- don't match.
-* <b>`ValueError`</b>: if a zero initial probability class has a nonzero target
- probability.
-
-##### Returns:
-
- A list of tensors of the same length as `tensors`, with batch dimension
- `batch_size`.
-
-##### Example:
-
-```python
-# Get tensor for a single data and label example.
-data, label = data_provider.Get(['data', 'label'])
-
-# Get stratified batch according to data tensor.
-accept_prob_fn = lambda x: (tf.tanh(x[0]) + 1) / 2
-data_batch = tf.contrib.training.rejection_sample(
-    [data, label], accept_prob_fn, 16)
-
-# Run batch through network.
-...
-```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.stratified_sample.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.stratified_sample.md
deleted file mode 100644
index 27251e3b1c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.contrib.training.stratified_sample.md
+++ /dev/null
@@ -1,58 +0,0 @@
-### `tf.contrib.training.stratified_sample(tensors, labels, target_probs, batch_size, init_probs=None, enqueue_many=False, queue_capacity=16, threads_per_queue=1, name=None)` {#stratified_sample}
-
-Stochastically creates batches based on per-class probabilities.
-
-This method discards examples. Internally, it creates one queue to amortize
-the cost of disk reads, and one queue to hold the properly-proportioned
-batch.
-
-##### Args:
-
-
-* <b>`tensors`</b>: List of tensors for data. All tensors are either one item or a
- batch, according to enqueue_many.
-* <b>`labels`</b>: Tensor for label of data. Label is a single integer or a batch,
- depending on enqueue_many. It is not a one-hot vector.
-* <b>`target_probs`</b>: Target class proportions in batch. An object whose type has a
- registered Tensor conversion function.
-* <b>`batch_size`</b>: Size of batch to be returned.
-* <b>`init_probs`</b>: Class proportions in the data. An object whose type has a
- registered Tensor conversion function, or `None` for estimating the
- initial distribution.
-* <b>`enqueue_many`</b>: Bool. If true, interpret input tensors as having a batch
- dimension.
-* <b>`queue_capacity`</b>: Capacity of the large queue that holds input examples.
-* <b>`threads_per_queue`</b>: Number of threads for the large queue that holds input
- examples and for the final queue with the proper class proportions.
-* <b>`name`</b>: Optional prefix for ops created by this function.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: enqueue_many is True and labels doesn't have a batch
- dimension, or if enqueue_many is False and labels isn't a scalar.
-* <b>`ValueError`</b>: enqueue_many is True, and batch dimension on data and labels
- don't match.
-* <b>`ValueError`</b>: if probs don't sum to one.
-* <b>`ValueError`</b>: if a zero initial probability class has a nonzero target
- probability.
-* <b>`TFAssertion`</b>: if labels aren't integers in [0, num classes).
-
-##### Returns:
-
-  (data_batch, label_batch), where data_batch is a list of tensors of the same
-  length as `tensors`.
-
-##### Example:
-
-```python
-# Get tensor for a single data and label example.
-data, label = data_provider.Get(['data', 'label'])
-
-# Get stratified batch according to per-class probabilities.
-target_probs = [...distribution you want...]
-[data_batch], labels = tf.contrib.training.stratified_sample(
-    [data], label, target_probs)
-
-# Run batch through network.
-...
-```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.control_dependencies.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.control_dependencies.md
deleted file mode 100644
index 070f8788e5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.control_dependencies.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.control_dependencies(control_inputs)` {#control_dependencies}
-
-Wrapper for `Graph.control_dependencies()` using the default graph.
-
-See [`Graph.control_dependencies()`](../../api_docs/python/framework.md#Graph.control_dependencies)
-for more details.
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: A list of `Operation` or `Tensor` objects which
- must be executed or computed before running the operations
- defined in the context. Can also be `None` to clear the control
- dependencies.
-
-##### Returns:
-
- A context manager that specifies control dependencies for all
- operations constructed within the context.
-
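-A minimal sketch; `update_op` and `total` are hypothetical:
-
-```python
-with tf.control_dependencies([update_op]):
-  # The identity below is only computed after update_op has run.
-  result = tf.identity(total)
-```
-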
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.convert_to_tensor_or_indexed_slices.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.convert_to_tensor_or_indexed_slices.md
deleted file mode 100644
index 18cf6c58be..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.convert_to_tensor_or_indexed_slices.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.convert_to_tensor_or_indexed_slices(value, dtype=None, name=None)` {#convert_to_tensor_or_indexed_slices}
-
-Converts the given object to a `Tensor` or an `IndexedSlices`.
-
-If `value` is an `IndexedSlices` or `SparseTensor` it is returned
-unmodified. Otherwise, it is converted to a `Tensor` using
-`convert_to_tensor()`.
-
-##### Args:
-
-
-* <b>`value`</b>: An `IndexedSlices`, `SparseTensor`, or an object that can be consumed
- by `convert_to_tensor()`.
-* <b>`dtype`</b>: (Optional.) The required `DType` of the returned `Tensor` or
- `IndexedSlices`.
-* <b>`name`</b>: (Optional.) A name to use if a new `Tensor` is created.
-
-##### Returns:
-
-  A `Tensor`, `IndexedSlices`, or `SparseTensor` based on `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `dtype` does not match the element type of `value`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_csv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_csv.md
deleted file mode 100644
index f2ebf6945b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_csv.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.decode_csv(records, record_defaults, field_delim=None, name=None)` {#decode_csv}
-
-Convert CSV records to tensors. Each column maps to one tensor.
-
-RFC 4180 format is expected for the CSV records.
-(https://tools.ietf.org/html/rfc4180)
-Note that we allow leading and trailing spaces for int and float fields.
-
-##### Args:
-
-
-* <b>`records`</b>: A `Tensor` of type `string`.
- Each string is a record/row in the csv and all records should have
- the same format.
-* <b>`record_defaults`</b>: A list of `Tensor` objects with types from: `float32`, `int32`, `int64`, `string`.
- One tensor per column of the input record, with either a
- scalar default value for that column or empty if the column is required.
-* <b>`field_delim`</b>: An optional `string`. Defaults to `","`.
- delimiter to separate fields in a record.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A list of `Tensor` objects. Has the same type as `record_defaults`.
- Each tensor will have the same shape as records.
-
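-A short sketch; the three-column layout of `csv_line` is an assumption:
-
-```python
-# The defaults also fix each column's dtype: int32, float32, string.
-record_defaults = [[0], [0.0], ['']]
-col_a, col_b, col_c = tf.decode_csv(csv_line, record_defaults=record_defaults)
-```
-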
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_raw.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_raw.md
deleted file mode 100644
index 8beeae4c00..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.decode_raw.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.decode_raw(bytes, out_type, little_endian=None, name=None)` {#decode_raw}
-
-Reinterpret the bytes of a string as a vector of numbers.
-
-##### Args:
-
-
-* <b>`bytes`</b>: A `Tensor` of type `string`.
- All the elements must have the same length.
-* <b>`out_type`</b>: A `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64`.
-* <b>`little_endian`</b>: An optional `bool`. Defaults to `True`.
- Whether the input `bytes` are in little-endian order.
- Ignored for `out_type` values that are stored in a single byte like
- `uint8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
- A Tensor with one more dimension than the input `bytes`. The
- added dimension will have size equal to the length of the elements
- of `bytes` divided by the number of bytes to represent `out_type`.
-
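-For example, to reinterpret serialized `float32` bytes; `raw_bytes` is a
-hypothetical string tensor:
-
-```python
-floats = tf.decode_raw(raw_bytes, out_type=tf.float32)
-# Each string of 4 * n bytes becomes a length-n float32 vector.
-```
-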
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.OutOfRangeError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.OutOfRangeError.md
deleted file mode 100644
index ef996b0a88..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.OutOfRangeError.md
+++ /dev/null
@@ -1,15 +0,0 @@
-Raised when an operation iterates past the valid input range.
-
-This exception is raised in "end-of-file" conditions, such as when a
-[`queue.dequeue()`](../../api_docs/python/io_ops.md#QueueBase.dequeue)
-operation is blocked on an empty queue, and a
-[`queue.close()`](../../api_docs/python/io_ops.md#QueueBase.close)
-operation executes.
-
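-The usual pattern is to treat this exception as the end-of-input signal, as
-in this sketch; `sess` and `train_op` are hypothetical:
-
-```python
-try:
-  while True:
-    sess.run(train_op)  # reads from an input queue
-except tf.errors.OutOfRangeError:
-  print('Input exhausted.')
-```
-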
-- - -
-
-#### `tf.errors.OutOfRangeError.__init__(node_def, op, message)` {#OutOfRangeError.__init__}
-
-Creates an `OutOfRangeError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.UnauthenticatedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.UnauthenticatedError.md
deleted file mode 100644
index d3344dc6b1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.errors.UnauthenticatedError.md
+++ /dev/null
@@ -1,11 +0,0 @@
-The request does not have valid authentication credentials.
-
-This exception is not currently used.
-
-- - -
-
-#### `tf.errors.UnauthenticatedError.__init__(node_def, op, message)` {#UnauthenticatedError.__init__}
-
-Creates an `UnauthenticatedError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.exp.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.exp.md
deleted file mode 100644
index f31531e762..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.exp.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.exp(x, name=None)` {#exp}
-
-Computes exponential of x element-wise. \\(y = e^x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.fake_quant_with_min_max_vars.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.fake_quant_with_min_max_vars.md
deleted file mode 100644
index 74ed0e0242..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.fake_quant_with_min_max_vars.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.fake_quant_with_min_max_vars(inputs, min, max, name=None)` {#fake_quant_with_min_max_vars}
-
-Fake-quantize the 'inputs' tensor of type float via global float scalars
-`min` and `max` to 'outputs' tensor of same shape as `inputs`.
-
-[min; max] is the clamping range for the 'inputs' data. Op divides this range
-into 255 steps (total of 256 values), then replaces each 'inputs' value with the
-closest of the quantized step values.
-
-This operation has a gradient and thus allows for training `min` and `max` values.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
-* <b>`min`</b>: A `Tensor` of type `float32`.
-* <b>`max`</b>: A `Tensor` of type `float32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.fake_quant_with_min_max_vars_per_channel.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.fake_quant_with_min_max_vars_per_channel.md
deleted file mode 100644
index bc39cf9570..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.fake_quant_with_min_max_vars_per_channel.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.fake_quant_with_min_max_vars_per_channel(inputs, min, max, name=None)` {#fake_quant_with_min_max_vars_per_channel}
-
-Fake-quantize the 'inputs' tensor of type float and one of the shapes `[d]`,
-`[b, d]`, or `[b, h, w, d]` via per-channel floats `min` and `max` of shape
-`[d]` to 'outputs' tensor of same shape as `inputs`.
-
-[min; max] is the clamping range for the 'inputs' data in the corresponding
-depth channel. Op divides this range into 255 steps (total of 256 values), then
-replaces each 'inputs' value with the closest of the quantized step values.
-
-This operation has a gradient and thus allows for training `min` and `max` values.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
-* <b>`min`</b>: A `Tensor` of type `float32`.
-* <b>`max`</b>: A `Tensor` of type `float32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.foldl.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.foldl.md
deleted file mode 100644
index 1206976da8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.foldl.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.foldl(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#foldl}
-
-foldl on the list of tensors unpacked from `elems` on dimension 0.
-
-This foldl operator repeatedly applies the callable `fn` to a sequence
-of elements from first to last. The elements are made of the tensors
-unpacked from `elems` on dimension 0. The callable `fn` takes two tensors as
-arguments. The first argument is the accumulated value computed from the
-preceding invocation of `fn`. If `initializer` is `None`, `elems` must contain
-at least one element, and its first element is used as the initializer.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `fn(initializer, values[0]).shape`.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed.
-* <b>`elems`</b>: A tensor to be unpacked on dimension 0.
-* <b>`initializer`</b>: (optional) The initial value for the accumulator.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables support for back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor resulting from applying `fn` consecutively to the list of tensors
- unpacked from `elems`, from first to last.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable.
-
-##### Example:
-
- ```python
- elems = [1, 2, 3, 4, 5, 6]
- sum = foldl(lambda a, x: a + x, elems)
- # sum == 21
- ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.global_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.global_variables.md
deleted file mode 100644
index 1939f42224..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.global_variables.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.global_variables()` {#global_variables}
-
-Returns global variables.
-
-Global variables are variables that are shared across machines in a
-distributed environment. The `Variable()` constructor or `get_variable()`
-automatically adds new variables to the graph collection
-`GraphKeys.GLOBAL_VARIABLES`.
-This convenience function returns the contents of that collection.
-
-An alternative to global variables is local variables. See
-[`tf.local_variables()`](../../api_docs/python/state_ops.md#local_variables).
-
-##### Returns:
-
- A list of `Variable` objects.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.group.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.group.md
deleted file mode 100644
index 7958cf9e58..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.group.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.group(*inputs, **kwargs)` {#group}
-
-Create an op that groups multiple operations.
-
-When this op finishes, all ops in `inputs` have finished. This op has no
-output.
-
-See also `tuple` and `with_dependencies`.
-
-##### Args:
-
-
-* <b>`*inputs`</b>: Zero or more tensors to group.
-* <b>`**kwargs`</b>: Optional parameters to pass when constructing the NodeDef.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- An Operation that executes all its inputs.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If an unknown keyword argument is provided.
-
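-A minimal sketch; the grouped ops are hypothetical:
-
-```python
-train_step = tf.group(apply_gradients_op, update_metrics_op)
-# Running train_step runs both ops; the group itself produces no value.
-```
-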
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ifft2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ifft2d.md
deleted file mode 100644
index d19b164d8c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.ifft2d.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.ifft2d(input, name=None)` {#ifft2d}
-
-Compute the inverse 2-dimensional discrete Fourier Transform over the
-inner-most 2 dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 2
- dimensions of `input` are replaced with their inverse 2D Fourier Transform.
-
- @compatibility(numpy)
- Equivalent to np.ifft2
- @end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_contrast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_contrast.md
deleted file mode 100644
index 2fbf1b3e2a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_contrast.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.image.adjust_contrast(images, contrast_factor)` {#adjust_contrast}
-
-Adjust contrast of RGB or grayscale images.
-
-This is a convenience method that converts an RGB image to float
-representation, adjusts its contrast, and then converts it back to the
-original data type. If several adjustments are chained it is advisable to
-minimize the number of redundant conversions.
-
-`images` is a tensor of at least 3 dimensions. The last 3 dimensions are
-interpreted as `[height, width, channels]`. The other dimensions only
-represent a collection of images, such as `[batch, height, width, channels]`.
-
-Contrast is adjusted independently for each channel of each image.
-
-For each channel, this Op computes the mean of the image pixels in the
-channel and then adjusts each component `x` of each pixel to
-`(x - mean) * contrast_factor + mean`.
-
-##### Args:
-
-
-* <b>`images`</b>: Images to adjust. At least 3-D.
-* <b>`contrast_factor`</b>: A float multiplier for adjusting contrast.
-
-##### Returns:
-
- The contrast-adjusted image or images.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_saturation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_saturation.md
deleted file mode 100644
index 1829271ff6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.adjust_saturation.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.image.adjust_saturation(image, saturation_factor, name=None)` {#adjust_saturation}
-
-Adjust saturation of an RGB image.
-
-This is a convenience method that converts an RGB image to float
-representation, converts it to HSV, scales the saturation (S) channel,
-converts back to RGB, and then back to the original data type. If several
-adjustments are chained it is advisable to minimize the number of redundant
-conversions.
-
-`image` is an RGB image. The image saturation is adjusted by converting the
-image to HSV and multiplying the saturation (S) channel by
-`saturation_factor` and clipping. The image is then converted back to RGB.
-
-##### Args:
-
-
-* <b>`image`</b>: RGB image or images. Size of the last dimension must be 3.
-* <b>`saturation_factor`</b>: float. Factor to multiply the saturation by.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- Adjusted image(s), same shape and DType as `image`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.convert_image_dtype.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.convert_image_dtype.md
deleted file mode 100644
index 63db6f36a9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.convert_image_dtype.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.image.convert_image_dtype(image, dtype, saturate=False, name=None)` {#convert_image_dtype}
-
-Convert `image` to `dtype`, scaling its values if needed.
-
-Images that are represented using floating point values are expected to have
-values in the range `[0, 1)`. Image data stored in integer data types is
-expected to have values in the range `[0, MAX]`, where `MAX` is the largest
-positive representable number for the data type.
-
-This op converts between data types, scaling the values appropriately before
-casting.
-
-Note that converting from floating point inputs to integer types may lead to
-over/underflow problems. Set `saturate` to `True` to avoid such problems in
-problematic conversions. If enabled, saturation will clip the output into the
-allowed range before performing a potentially dangerous cast (and only before
-performing such a cast, i.e., when casting from a floating point to an integer
-type, and when casting from a signed to an unsigned type; `saturate` has no
-effect on casts between floats, or on casts that increase the type's range).
-
-##### Args:
-
-
-* <b>`image`</b>: An image.
-* <b>`dtype`</b>: A `DType` to convert `image` to.
-* <b>`saturate`</b>: If `True`, clip the input before casting (if necessary).
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- `image`, converted to `dtype`.
-
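-A minimal sketch of a round trip (shapes are illustrative):
-
-```python
-import tensorflow as tf
-
-img_uint8 = tf.placeholder(tf.uint8, shape=[64, 64, 3])
-
-# uint8 values in [0, 255] are scaled into floats in [0, 1).
-img_float = tf.image.convert_image_dtype(img_uint8, tf.float32)
-
-# Converting back to an integer type can overflow, so clip first.
-img_back = tf.image.convert_image_dtype(img_float, tf.uint8, saturate=True)
-```
-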
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.decode_png.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.decode_png.md
deleted file mode 100644
index 4332af7704..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.decode_png.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.image.decode_png(contents, channels=None, dtype=None, name=None)` {#decode_png}
-
-Decode a PNG-encoded image to a uint8 or uint16 tensor.
-
-The attr `channels` indicates the desired number of color channels for the
-decoded image.
-
-Accepted values are:
-
-* 0: Use the number of channels in the PNG-encoded image.
-* 1: Output a grayscale image.
-* 3: Output an RGB image.
-* 4: Output an RGBA image.
-
-If needed, the PNG-encoded image is transformed to match the requested number
-of color channels.
-
-##### Args:
-
-
-* <b>`contents`</b>: A `Tensor` of type `string`. 0-D. The PNG-encoded image.
-* <b>`channels`</b>: An optional `int`. Defaults to `0`.
- Number of color channels for the decoded image.
-* <b>`dtype`</b>: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `dtype`. 3-D with shape `[height, width, channels]`.
-
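-A minimal sketch (the filename is a hypothetical placeholder):
-
-```python
-import tensorflow as tf
-
-contents = tf.read_file('example.png')
-image = tf.image.decode_png(contents, channels=3)  # force RGB output
-# `image` is a 3-D uint8 tensor with shape [height, width, 3].
-```
-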
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_contrast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_contrast.md
deleted file mode 100644
index 76cd2292cf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_contrast.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.image.random_contrast(image, lower, upper, seed=None)` {#random_contrast}
-
-Adjust the contrast of an image by a random factor.
-
-Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly
-picked in the interval `[lower, upper]`.
-
-##### Args:
-
-
-* <b>`image`</b>: An image tensor with 3 or more dimensions.
-* <b>`lower`</b>: float. Lower bound for the random contrast factor.
-* <b>`upper`</b>: float. Upper bound for the random contrast factor.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-
-##### Returns:
-
- The contrast-adjusted tensor.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `upper <= lower` or if `lower < 0`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_saturation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_saturation.md
deleted file mode 100644
index 397bfc4d0b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.image.random_saturation.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.image.random_saturation(image, lower, upper, seed=None)` {#random_saturation}
-
-Adjust the saturation of an RGB image by a random factor.
-
-Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly
-picked in the interval `[lower, upper]`.
-
-##### Args:
-
-
-* <b>`image`</b>: RGB image or images. Size of the last dimension must be 3.
-* <b>`lower`</b>: float. Lower bound for the random saturation factor.
-* <b>`upper`</b>: float. Upper bound for the random saturation factor.
-* <b>`seed`</b>: An operation-specific seed. It will be used in conjunction
- with the graph-level seed to determine the real seeds that will be
- used in this operation. Please see the documentation of
- set_random_seed for its interaction with the graph-level random seed.
-
-##### Returns:
-
- Adjusted image(s), same shape and DType as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `upper <= lower` or if `lower < 0`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.local_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.local_variables.md
deleted file mode 100644
index 2bf8d2f912..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.local_variables.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.local_variables()` {#local_variables}
-
-Returns local variables.
-
-Local variables are per-process variables, usually not saved/restored to
-checkpoint, and used for temporary or intermediate values.
-For example, they can be used as counters for metrics computation or the
-number of epochs this machine has read data.
-The `tf.contrib.framework.local_variable()` function automatically adds the
-new variable to `GraphKeys.LOCAL_VARIABLES`.
-This convenience function returns the contents of that collection.
-
-An alternative to local variables are global variables. See
-[`tf.global_variables()`](../../api_docs/python/state_ops.md#global_variables)
-
-##### Returns:
-
- A list of local `Variable` objects.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.logical_xor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.logical_xor.md
deleted file mode 100644
index 20db3e60a6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.logical_xor.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.logical_xor(x, y, name='LogicalXor')` {#logical_xor}
-
-x ^ y = (x | y) & ~(x & y).
-
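-A short sketch of the element-wise behavior:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([True, True, False, False])
-y = tf.constant([True, False, True, False])
-tf.logical_xor(x, y)  # ==> [False, True, True, False]
-```
-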
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.multinomial.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.multinomial.md
deleted file mode 100644
index 8e8e1e102d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.multinomial.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.multinomial(logits, num_samples, seed=None, name=None)` {#multinomial}
-
-Draws samples from a multinomial distribution.
-
-Example:
-
-```python
-# samples has shape [1, 5], where each value is either 0 or 1 with equal
-# probability.
-samples = tf.multinomial(tf.log([[10., 10.]]), 5)
-```
-
-##### Args:
-
-
-* <b>`logits`</b>: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice
- `[i, :]` represents the unnormalized log probabilities for all classes.
-* <b>`num_samples`</b>: 0-D. Number of independent samples to draw for each row slice.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- The drawn samples of shape `[batch_size, num_samples]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.dropout.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.dropout.md
deleted file mode 100644
index 4f2b7c0214..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.dropout.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)` {#dropout}
-
-Computes dropout.
-
-With probability `keep_prob`, outputs the input element scaled up by
-`1 / keep_prob`, otherwise outputs `0`. The scaling is so that the expected
-sum is unchanged.
-
-By default, each element is kept or dropped independently. If `noise_shape`
-is specified, it must be
-[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]`
-will make independent decisions. For example, if `shape(x) = [k, l, m, n]`
-and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be
-kept independently and each row and column will be kept or not kept together.
-
-##### Args:
-
-
-* <b>`x`</b>: A tensor.
-* <b>`keep_prob`</b>: A scalar `Tensor` with the same type as x. The probability
- that each element is kept.
-* <b>`noise_shape`</b>: A 1-D `Tensor` of type `int32`, representing the
- shape for randomly generated keep/drop flags.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A Tensor of the same shape of `x`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `keep_prob` is not in `(0, 1]`.
-
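-A minimal sketch, including the `noise_shape` case described above (shapes
-are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.ones([4, 3])
-
-# Each element is kept with probability 0.5; kept values are scaled by 2.
-y = tf.nn.dropout(x, keep_prob=0.5)
-
-# noise_shape [4, 1] broadcasts across columns, so each row is kept or
-# dropped as a whole.
-y_rows = tf.nn.dropout(x, keep_prob=0.5, noise_shape=[4, 1])
-```
-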
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fractional_max_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fractional_max_pool.md
deleted file mode 100644
index 8f8fb0237c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.fractional_max_pool.md
+++ /dev/null
@@ -1,81 +0,0 @@
-### `tf.nn.fractional_max_pool(value, pooling_ratio, pseudo_random=None, overlapping=None, deterministic=None, seed=None, seed2=None, name=None)` {#fractional_max_pool}
-
-Performs fractional max pooling on the input.
-
-Fractional max pooling is slightly different from regular max pooling. In
-regular max pooling, you downsize an input set by taking the maximum value of
-smaller N x N subsections of the set (often 2x2), and try to reduce the set by
-a factor of N, where N is an integer. Fractional max pooling, as you might
-expect from the word "fractional", means that the overall reduction ratio N
-does not have to be an integer.
-
-The sizes of the pooling regions are generated randomly but are fairly uniform.
-For example, let's look at the height dimension, and the constraints on the
-list of rows that will be pool boundaries.
-
-First we define the following:
-
-1. input_row_length : the number of rows from the input set
-2. output_row_length : which will be smaller than the input
-3. alpha = input_row_length / output_row_length : our reduction ratio
-4. K = floor(alpha)
-5. row_pooling_sequence : this is the result list of pool boundary rows
-
-Then, row_pooling_sequence should satisfy:
-
-1. a[0] = 0 : the first value of the sequence is 0
-2. a[end] = input_row_length : the last value of the sequence is the size
-3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
-4. length(row_pooling_sequence) = output_row_length+1
-
-For more details on fractional max pooling, see this paper:
-[Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`pooling_ratio`</b>: A list of `floats` that has length `>= 4`.
- Pooling ratio for each dimension of `value`, currently only
- supports row and col dimension and should be >= 1.0. For example, a valid
- pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements
- must be 1.0 because we don't allow pooling on batch and channels
- dimensions. 1.44 and 1.73 are the pooling ratios on the height and width
- dimensions, respectively.
-* <b>`pseudo_random`</b>: An optional `bool`. Defaults to `False`.
- When set to True, generates the pooling sequence in a
- pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin
- Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for
- difference between pseudorandom and random.
-* <b>`overlapping`</b>: An optional `bool`. Defaults to `False`.
- When set to True, it means when pooling, the values at the boundary
- of adjacent pooling cells are used by both cells. For example:
-
- `index 0 1 2 3 4`
-
- `value 20 5 16 3 7`
-
- If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice.
- The result would be [20, 16] for fractional max pooling.
-
-* <b>`deterministic`</b>: An optional `bool`. Defaults to `False`.
- When set to True, a fixed pooling region will be used when
- iterating over a FractionalMaxPool node in the computation graph. Mainly used
- in unit test to make FractionalMaxPool deterministic.
-* <b>`seed`</b>: An optional `int`. Defaults to `0`.
- If either seed or seed2 are set to be non-zero, the random number
- generator is seeded by the given seed. Otherwise, it is seeded by a
- random seed.
-* <b>`seed2`</b>: An optional `int`. Defaults to `0`.
- A second seed to avoid seed collision.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `value`. output tensor after fractional max pooling.
-* <b>`row_pooling_sequence`</b>: A `Tensor` of type `int64`. row pooling sequence, needed to calculate gradient.
-* <b>`col_pooling_sequence`</b>: A `Tensor` of type `int64`. column pooling sequence, needed to calculate gradient.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_softmax.md
deleted file mode 100644
index ac55b177d0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.log_softmax.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.nn.log_softmax(logits, dim=-1, name=None)` {#log_softmax}
-
-Computes log softmax activations.
-
-For each batch `i` and class `j` we have
-
- logsoftmax = logits - log(reduce_sum(exp(logits), dim))
-
-##### Args:
-
-
-* <b>`logits`</b>: A non-empty `Tensor`. Must be one of the following types: `half`,
- `float32`, `float64`.
-* <b>`dim`</b>: The dimension softmax would be performed on. The default is -1 which
- indicates the last dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: if `logits` is empty or `dim` is beyond the last
- dimension of `logits`.
-
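-A short sketch; `tf.log(tf.nn.softmax(logits))` computes the same quantity
-but is less numerically stable for large-magnitude logits:
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([[2.0, 1.0, 0.1]])
-log_probs = tf.nn.log_softmax(logits)
-# Each row of tf.exp(log_probs) sums to 1.
-```
-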
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.max_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.max_pool.md
deleted file mode 100644
index 05934c00e1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.max_pool.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)` {#max_pool}
-
-Performs the max pooling on the input.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` with shape `[batch, height, width, channels]` and
- type `tf.float32`.
-* <b>`ksize`</b>: A list of ints that has length >= 4. The size of the window for
- each dimension of the input tensor.
-* <b>`strides`</b>: A list of ints that has length >= 4. The stride of the sliding
- window for each dimension of the input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- A `Tensor` with type `tf.float32`. The max pooled output tensor.
-
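-A minimal sketch of the common 2x2, stride-2 configuration (the input shape
-is illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
-
-# Halves height and width: the output shape is [None, 14, 14, 1].
-pooled = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
-                        padding='SAME')
-```
-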
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.quantized_max_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.quantized_max_pool.md
deleted file mode 100644
index 3ddffd5b83..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.quantized_max_pool.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.nn.quantized_max_pool(input, min_input, max_input, ksize, strides, padding, name=None)` {#quantized_max_pool}
-
-Produces the max pool of the input tensor for quantized types.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
- The 4D (batch x rows x cols x depth) Tensor to MaxReduce over.
-* <b>`min_input`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized input value represents.
-* <b>`max_input`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized input value represents.
-* <b>`ksize`</b>: A list of `ints`.
- The size of the window for each dimension of the input tensor.
- The length must be 4 to match the number of dimensions of the input.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- tensor. The length must be 4 to match the number of dimensions of the input.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, min_output, max_output).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `input`.
-* <b>`min_output`</b>: A `Tensor` of type `float32`. The float value that the lowest quantized output value represents.
-* <b>`max_output`</b>: A `Tensor` of type `float32`. The float value that the highest quantized output value represents.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.quantized_relu_x.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.quantized_relu_x.md
deleted file mode 100644
index 2738a4bdab..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.quantized_relu_x.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.nn.quantized_relu_x(features, max_value, min_features, max_features, out_type=None, name=None)` {#quantized_relu_x}
-
-Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)`
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
-* <b>`max_value`</b>: A `Tensor` of type `float32`.
-* <b>`min_features`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized value represents.
-* <b>`max_features`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized value represents.
-* <b>`out_type`</b>: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`. Defaults to `tf.quint8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (activations, min_activations, max_activations).
-
-* <b>`activations`</b>: A `Tensor` of type `out_type`. Has the same output shape as "features".
-* <b>`min_activations`</b>: A `Tensor` of type `float32`. The float value that the lowest quantized value represents.
-* <b>`max_activations`</b>: A `Tensor` of type `float32`. The float value that the highest quantized value represents.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.softsign.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.softsign.md
deleted file mode 100644
index 971b2a8134..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.nn.softsign.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.nn.softsign(features, name=None)` {#softsign}
-
-Computes softsign: `features / (abs(features) + 1)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.parse_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.parse_tensor.md
deleted file mode 100644
index 796eb39598..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.parse_tensor.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.parse_tensor(serialized, out_type, name=None)` {#parse_tensor}
-
-Transforms a serialized tensorflow.TensorProto proto into a Tensor.
-
-##### Args:
-
-
-* <b>`serialized`</b>: A `Tensor` of type `string`.
- A scalar string containing a serialized TensorProto proto.
-* <b>`out_type`</b>: A `tf.DType`.
- The type of the serialized tensor. The provided type must match the
- type of the serialized tensor and no implicit conversion will take place.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.placeholder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.placeholder.md
deleted file mode 100644
index 28cdc11cce..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.placeholder.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.placeholder(dtype, shape=None, name=None)` {#placeholder}
-
-Inserts a placeholder for a tensor that will be always fed.
-
-**Important**: This tensor will produce an error if evaluated. Its value must
-be fed using the `feed_dict` optional argument to `Session.run()`,
-`Tensor.eval()`, or `Operation.run()`.
-
-For example:
-
-```python
-import numpy as np
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, shape=(1024, 1024))
-y = tf.matmul(x, x)
-
-with tf.Session() as sess:
-  print(sess.run(y))  # ERROR: will fail because x was not fed.
-
-  rand_array = np.random.rand(1024, 1024)
-  print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.
-```
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of elements in the tensor to be fed.
-* <b>`shape`</b>: The shape of the tensor to be fed (optional). If the shape is not
- specified, you can feed a tensor of any shape.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` that may be used as a handle for feeding a value, but not
- evaluated directly.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.random_gamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.random_gamma.md
deleted file mode 100644
index 1d99f8c2f8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.random_gamma.md
+++ /dev/null
@@ -1,65 +0,0 @@
-### `tf.random_gamma(shape, alpha, beta=None, dtype=tf.float32, seed=None, name=None)` {#random_gamma}
-
-Draws `shape` samples from each of the given Gamma distribution(s).
-
-`alpha` is the shape parameter describing the distribution(s), and `beta` is
-the inverse scale parameter(s).
-
-Example:
-
- samples = tf.random_gamma([10], [0.5, 1.5])
- # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
- # the samples drawn from each distribution
-
- samples = tf.random_gamma([7, 5], [0.5, 1.5])
- # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
- # represents the 7x5 samples drawn from each of the two distributions
-
- samples = tf.random_gamma([30], [[1.],[3.],[5.]], beta=[[3., 4.]])
- # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.
-
-Note that for small `alpha` values, there is a chance you will draw a value
-of exactly 0, which gets worse for lower-precision dtypes, even though zero
-is not in the support of the gamma distribution.
-
-Relevant cdfs (~chance you will draw an exactly-0 value):
-
-```
-stats.gamma(.01).cdf(np.finfo(np.float16).tiny)
-0.91269738769897879
-stats.gamma(.01).cdf(np.finfo(np.float32).tiny)
-0.41992668622045726
-stats.gamma(.01).cdf(np.finfo(np.float64).tiny)
-0.00084322740680686662
-stats.gamma(.35).cdf(np.finfo(np.float16).tiny)
-0.037583276135263931
-stats.gamma(.35).cdf(np.finfo(np.float32).tiny)
-5.9514895726818067e-14
-stats.gamma(.35).cdf(np.finfo(np.float64).tiny)
-2.3529843400647272e-108
-```
-
-##### Args:
-
-
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output samples
- to be drawn per alpha/beta-parameterized distribution.
-* <b>`alpha`</b>: A Tensor or Python value or N-D array of type `dtype`. `alpha`
- provides the shape parameter(s) describing the gamma distribution(s) to
- sample. Must be broadcastable with `beta`.
-* <b>`beta`</b>: A Tensor or Python value or N-D array of type `dtype`. Defaults to 1.
- `beta` provides the inverse scale parameter(s) of the gamma
- distribution(s) to sample. Must be broadcastable with `alpha`.
-* <b>`dtype`</b>: The type of alpha, beta, and the output: `float16`, `float32`, or
- `float64`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distributions.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` of shape `tf.concat([shape, tf.shape(alpha + beta)], axis=0)`
- with values of type `dtype`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.random_shuffle.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.random_shuffle.md
deleted file mode 100644
index 14f40d64af..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.random_shuffle.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.random_shuffle(value, seed=None, name=None)` {#random_shuffle}
-
-Randomly shuffles a tensor along its first dimension.
-
-The tensor is shuffled along dimension 0, such that each `value[j]` is mapped
-to one and only one `output[i]`. For example, a mapping that might occur for a
-3x2 tensor is:
-
-```python
-[[1, 2], [[5, 6],
- [3, 4], ==> [1, 2],
- [5, 6]] [3, 4]]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: A Tensor to be shuffled.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tensor of same shape and type as `value`, shuffled along its first
- dimension.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_min.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_min.md
deleted file mode 100644
index f1b0ba6614..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.reduce_min.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.reduce_min(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_min}
-
-Computes the minimum of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.min
-@end_compatibility
-
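-A short sketch of the reduction behavior:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[1., 2.], [3., 4.]])
-tf.reduce_min(x)                          # ==> 1.0
-tf.reduce_min(x, axis=0)                  # ==> [1., 2.]
-tf.reduce_min(x, axis=1, keep_dims=True)  # ==> [[1.], [3.]]
-```
-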
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.register_tensor_conversion_function.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.register_tensor_conversion_function.md
deleted file mode 100644
index dc55e629b4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.register_tensor_conversion_function.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.register_tensor_conversion_function(base_type, conversion_func, priority=100)` {#register_tensor_conversion_function}
-
-Registers a function for converting objects of `base_type` to `Tensor`.
-
-The conversion function must have the following signature:
-
-```python
- def conversion_func(value, dtype=None, name=None, as_ref=False):
- # ...
-```
-
-It must return a `Tensor` with the given `dtype` if specified. If the
-conversion function creates a new `Tensor`, it should use the given
-`name` if specified. All exceptions will be propagated to the caller.
-
-The conversion function may return `NotImplemented` for some
-inputs. In this case, the conversion process will continue to try
-subsequent conversion functions.
-
-If `as_ref` is true, the function must return a `Tensor` reference,
-such as a `Variable`.
-
-NOTE: The conversion functions will execute in order of priority,
-followed by order of registration. To ensure that a conversion function
-`F` runs before another conversion function `G`, ensure that `F` is
-registered with a smaller priority than `G`.
-
-##### Args:
-
-
-* <b>`base_type`</b>: The base type or tuple of base types for all objects that
- `conversion_func` accepts.
-* <b>`conversion_func`</b>: A function that converts instances of `base_type` to
- `Tensor`.
-* <b>`priority`</b>: Optional integer that indicates the priority for applying this
- conversion function. Conversion functions with smaller priority values
- run earlier than conversion functions with larger priority values.
- Defaults to 100.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the arguments do not have the appropriate type.
-
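-A minimal sketch (the `Wrapper` class is a hypothetical user type):
-
-```python
-import tensorflow as tf
-
-class Wrapper(object):
-  def __init__(self, value):
-    self.value = value
-
-def wrapper_to_tensor(w, dtype=None, name=None, as_ref=False):
-  # Delegate to the standard conversion of the wrapped value.
-  return tf.convert_to_tensor(w.value, dtype=dtype, name=name)
-
-tf.register_tensor_conversion_function(Wrapper, wrapper_to_tensor)
-
-# `Wrapper` instances are now accepted wherever a `Tensor` is expected.
-t = tf.add(tf.constant([1, 2, 3]), Wrapper([4, 5, 6]))
-```
-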
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_mul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_mul.md
deleted file mode 100644
index 94da4712d4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_mul.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.scatter_mul(ref, indices, updates, use_locking=None, name=None)` {#scatter_mul}
-
-Multiplies sparse updates into a variable reference.
-
-This operation computes
-
- # Scalar indices
- ref[indices, ...] *= updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] *= updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-Duplicate entries are handled correctly: if multiple `indices` reference
-the same location, their contributions multiply.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of updated values to multiply to `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the operation will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
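-A short sketch (TF 1.x variable semantics are assumed):
-
-```python
-import tensorflow as tf
-
-ref = tf.Variable([1., 2., 4., 8.])
-update = tf.scatter_mul(ref, [0, 2], [10., 10.])
-
-with tf.Session() as sess:
-  sess.run(ref.initializer)
-  print(sess.run(update))  # ==> [10., 2., 40., 8.]
-```
-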
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_nd_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_nd_add.md
deleted file mode 100644
index 4d1472205d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.scatter_nd_add.md
+++ /dev/null
@@ -1,61 +0,0 @@
-### `tf.scatter_nd_add(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_add}
-
-Applies sparse addition between `updates` and individual values or slices
-within a given variable according to `indices`.
-
-`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
-
-`indices` must be integer tensor, containing indices into `ref`.
-It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
-dimension of `ref`.
-
-`updates` is `Tensor` of rank `Q-1+P-K` with shape:
-
-```
-[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
-```
-
-For example, say we want to add 4 scattered elements to a rank-1 tensor with
-8 elements. In Python, that addition would look like this:
-
-    ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
-    indices = tf.constant([[4], [3], [1], [7]])
-    updates = tf.constant([9, 10, 11, 12])
-    add = tf.scatter_nd_add(ref, indices, updates)
-    with tf.Session() as sess:
-      print(sess.run(add))
-
-The resulting update to ref would look like this:
-
-    [1, 13, 3, 14, 14, 6, 7, 20]
-
-See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to
-slices.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-  Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-  A tensor of indices into `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
-  A tensor of updated values to add to `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
-  If `True`, the assignment will be protected by a lock; otherwise the
-  behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A mutable `Tensor`. Has the same type as `ref`.
- Same as ref. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_mean.md
deleted file mode 100644
index 5d901859a9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.segment_mean.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.segment_mean(data, segment_ids, name=None)` {#segment_mean}
-
-Computes the mean along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Computes a tensor such that
-\\(output_i = \frac{\sum_j data_j}{N}\\) where the sum is
-over `j` such that `segment_ids[j] == i` and `N` is the total number of
-values summed.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentMean.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose rank is equal to the rank of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
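-A short sketch of the row-wise behavior:
-
-```python
-import tensorflow as tf
-
-data = tf.constant([[1., 2.], [3., 4.], [5., 6.]])
-segment_ids = tf.constant([0, 0, 1])
-
-# Rows 0 and 1 fall in segment 0 and are averaged; row 2 is segment 1.
-tf.segment_mean(data, segment_ids)  # ==> [[2., 3.], [5., 6.]]
-```
-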
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.shape.md
deleted file mode 100644
index 2032de86af..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.shape.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.shape(input, name=None, out_type=tf.int32)` {#shape}
-
-Returns the shape of a tensor.
-
-This operation returns a 1-D integer tensor representing the shape of `input`.
-
-For example:
-
-```python
-# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
-shape(t) ==> [2, 2, 3]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`out_type`</b>: (Optional) The specified output type of the operation
- (`int32` or `int64`). Defaults to `tf.int32`.
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_concat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_concat.md
deleted file mode 100644
index 70cab998b4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_concat.md
+++ /dev/null
@@ -1,102 +0,0 @@
-### `tf.sparse_concat(axis, sp_inputs, name=None, expand_nonconcat_dim=False, concat_dim=None)` {#sparse_concat}
-
-Concatenates a list of `SparseTensor` along the specified dimension.
-
-Concatenation is with respect to the dense versions of each sparse input.
-It is assumed that each input is a `SparseTensor` whose elements are ordered
-along increasing dimension number.
-
-If expand_nonconcat_dim is False, all inputs' shapes must match, except for
-the concat dimension. If expand_nonconcat_dim is True, then inputs' shapes are
-allowed to vary among all inputs.
-
-The `indices`, `values`, and `shapes` lists must have the same length.
-
-If expand_nonconcat_dim is False, then the output shape is identical to the
-inputs', except along the concat dimension, where it is the sum of the inputs'
-sizes along that dimension.
-
-If expand_nonconcat_dim is True, then the output shape along the non-concat
-dimensions will be expanded to the largest among all inputs, and along the
-concat dimension it is the sum of the inputs' sizes.
-
-The output elements will be resorted to preserve the sort order along
-increasing dimension number.
-
-This op runs in `O(M log M)` time, where `M` is the total number of non-empty
-values across all inputs. This is due to the need for an internal sort in
-order to concatenate efficiently across an arbitrary dimension.
-
-For example, if `axis = 1` and the inputs are
-
- sp_inputs[0]: shape = [2, 3]
- [0, 2]: "a"
- [1, 0]: "b"
- [1, 1]: "c"
-
- sp_inputs[1]: shape = [2, 4]
- [0, 1]: "d"
- [0, 2]: "e"
-
-then the output will be
-
- shape = [2, 7]
- [0, 2]: "a"
- [0, 4]: "d"
- [0, 5]: "e"
- [1, 0]: "b"
- [1, 1]: "c"
-
-Graphically this is equivalent to doing
-
- [ a] concat [ d e ] = [ a d e ]
- [b c ] [ ] [b c ]
-
-Another example, if `axis = 1` and the inputs are
-
- sp_inputs[0]: shape = [3, 3]
- [0, 2]: "a"
- [1, 0]: "b"
- [2, 1]: "c"
-
- sp_inputs[1]: shape = [2, 4]
- [0, 1]: "d"
- [0, 2]: "e"
-
-if expand_nonconcat_dim = False, this will result in an error. But if
-expand_nonconcat_dim = True, this will result in:
-
- shape = [3, 7]
- [0, 2]: "a"
- [0, 4]: "d"
- [0, 5]: "e"
- [1, 0]: "b"
- [2, 1]: "c"
-
-Graphically this is equivalent to doing
-
- [ a] concat [ d e ] = [ a d e ]
- [b ] [ ] [b ]
- [ c ] [ c ]
-
-
-##### Args:
-
-
-* <b>`axis`</b>: Dimension to concatenate along. Must be in range [-rank, rank),
- where rank is the number of dimensions in each input `SparseTensor`.
-* <b>`sp_inputs`</b>: List of `SparseTensor` to concatenate.
-* <b>`name`</b>: A name prefix for the returned tensors (optional).
-* <b>`expand_nonconcat_dim`</b>: Whether to allow the expansion in the non-concat
- dimensions. Defaults to False.
-* <b>`concat_dim`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- A `SparseTensor` with the concatenated output.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_inputs` is not a list of `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_softmax.md
deleted file mode 100644
index b2b5d4b9c3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_softmax.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.sparse_softmax(sp_input, name=None)` {#sparse_softmax}
-
-Applies softmax to a batched N-D `SparseTensor`.
-
-The inputs represent an N-D SparseTensor with logical shape `[..., B, C]`
-(where `N >= 2`), and with indices sorted in the canonical lexicographic
-order.
-
-This op is equivalent to applying the normal `tf.nn.softmax()` to each
-innermost logical submatrix with shape `[B, C]`, but with the catch that *the
-implicitly zero elements do not participate*. Specifically, the algorithm is
-equivalent to:
-
- (1) Applies `tf.nn.softmax()` to a densified view of each innermost
- submatrix with shape `[B, C]`, along the size-C dimension;
- (2) Masks out the original implicitly-zero locations;
- (3) Renormalizes the remaining elements.
-
-Hence, the `SparseTensor` result has exactly the same non-zero indices and
-shape.
-
-Example:
-
-```python
-# First batch:
-# [? e.]
-# [1. ? ]
-# Second batch:
-# [e ? ]
-# [e e ]
-shape = [2, 2, 2] # 3-D SparseTensor
-values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])
-indices = np.vstack(np.where(values)).astype(np.int64).T
-
-result = tf.sparse_softmax(tf.SparseTensor(indices, values, shape))
-# ...returning a 3-D SparseTensor, equivalent to:
-# [? 1.] [1 ?]
-# [1. ? ] and [.5 .5]
-# where ? means implicitly zero.
-```
-
-##### Args:
-
-
-* <b>`sp_input`</b>: N-D `SparseTensor`, where `N >= 2`.
-* <b>`name`</b>: optional name of the operation.
-
-##### Returns:
-
-
-* <b>`output`</b>: N-D `SparseTensor` representing the results.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_split.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_split.md
deleted file mode 100644
index 11fa3f4465..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.sparse_split.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.sparse_split(keyword_required=KeywordRequired(), sp_input=None, num_split=None, axis=None, name=None, split_dim=None)` {#sparse_split}
-
-Split a `SparseTensor` into `num_split` tensors along `axis`.
-
-If `sp_input.dense_shape[axis]` is not an integer multiple of `num_split`,
-each of the slices `0` through `shape[axis] % num_split - 1` gets one extra
-element along `axis`. For example, if `axis = 1` and `num_split = 2` and the
-input is:
-
- input_tensor = shape = [2, 7]
- [ a d e ]
- [b c ]
-
-Graphically the output tensors are:
-
- output_tensor[0] =
- [ a ]
- [b c ]
-
- output_tensor[1] =
- [ d e ]
- [ ]
-
-##### Args:
-
-
-* <b>`keyword_required`</b>: Python 2 standin for * (temporary for argument reorder)
-* <b>`sp_input`</b>: The `SparseTensor` to split.
-* <b>`num_split`</b>: A Python integer. The number of ways to split.
-* <b>`axis`</b>: A 0-D `int32` `Tensor`. The dimension along which to split.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`split_dim`</b>: Deprecated old name for axis.
-
-##### Returns:
-
- `num_split` `SparseTensor` objects resulting from splitting `sp_input`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-* <b>`ValueError`</b>: If the deprecated `split_dim` and `axis` are both non None.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.stop_gradient.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.stop_gradient.md
deleted file mode 100644
index 53759f49ff..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.stop_gradient.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.stop_gradient(input, name=None)` {#stop_gradient}
-
-Stops gradient computation.
-
-When executed in a graph, this op outputs its input tensor as-is.
-
-When building ops to compute gradients, this op prevents the contribution of
-its inputs from being taken into account. Normally, the gradient generator
-adds ops to a graph to compute the derivatives of a specified 'loss' by
-recursively finding the inputs that contributed to its computation. If you
-insert this op in the graph, its inputs are masked from the gradient
-generator and are not taken into account for computing gradients.
-
-This is useful any time you want to compute a value with TensorFlow but need
-to pretend that the value was a constant. Some examples include:
-
-* The *EM* algorithm where the *M-step* should not involve backpropagation
- through the output of the *E-step*.
-* Contrastive divergence training of Boltzmann machines where, when
- differentiating the energy function, the training must not backpropagate
- through the graph that generated the samples from the model.
-* Adversarial training, where no backprop should happen through the adversarial
- example generation process.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
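-A short sketch of the effect on gradients:
-
-```python
-import tensorflow as tf
-
-x = tf.constant(3.0)
-y = tf.square(x)              # y = x^2
-z = tf.stop_gradient(y) * x   # y is treated as a constant here
-
-# dz/dx == y == 9.0, rather than d(x^3)/dx == 27.0.
-grad = tf.gradients(z, [x])
-```
-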
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.substr.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.substr.md
deleted file mode 100644
index 0f5a21cc14..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.substr.md
+++ /dev/null
@@ -1,92 +0,0 @@
-### `tf.substr(input, pos, len, name=None)` {#substr}
-
-Return substrings from `Tensor` of strings.
-
-For each string in the input `Tensor`, creates a substring starting at index
-`pos` with a total length of `len`.
-
-If `len` defines a substring that would extend beyond the length of the input
-string, then as many characters as possible are used.
-
-If `pos` is negative or specifies a character index larger than any of the input
-strings, then an `InvalidArgumentError` is thrown.
-
-`pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on
-Op creation.
-
-*NOTE*: `Substr` supports broadcasting up to two dimensions. More about
-broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
----
-
-Examples
-
-Using scalar `pos` and `len`:
-
-```
-input = [b'Hello', b'World']
-position = 1
-length = 3
-
-output = [b'ell', b'orl']
-```
-
-Using `pos` and `len` with same shape as `input`:
-
-```
-input = [[b'ten', b'eleven', b'twelve'],
- [b'thirteen', b'fourteen', b'fifteen'],
- [b'sixteen', b'seventeen', b'eighteen']]
-position = [[1, 2, 3],
- [1, 2, 3],
- [1, 2, 3]]
-length = [[2, 3, 4],
- [4, 3, 2],
- [5, 5, 5]]
-
-output = [[b'en', b'eve', b'lve'],
- [b'hirt', b'urt', b'te'],
- [b'ixtee', b'vente', b'hteen']]
-```
-
-Broadcasting `pos` and `len` onto `input`:
-
-```
-input = [[b'ten', b'eleven', b'twelve'],
- [b'thirteen', b'fourteen', b'fifteen'],
- [b'sixteen', b'seventeen', b'eighteen'],
- [b'nineteen', b'twenty', b'twentyone']]
-position = [1, 2, 3]
-length = [1, 2, 3]
-
-output = [[b'e', b'ev', b'lve'],
- [b'h', b'ur', b'tee'],
- [b'i', b've', b'hte'],
- [b'i', b'en', b'nty']]
-```
-
-Broadcasting `input` onto `pos` and `len`:
-
-```
-input = b'thirteen'
-position = [1, 5, 7]
-length = [3, 2, 1]
-
-output = [b'hir', b'ee', b'n']
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. Tensor of strings
-* <b>`pos`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- Scalar defining the position of the first character in each substring
-* <b>`len`</b>: A `Tensor`. Must have the same type as `pos`.
- Scalar defining the number of characters to include in each substring
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. Tensor of substrings
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.svd.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.svd.md
deleted file mode 100644
index 74185ba7c9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.svd.md
+++ /dev/null
@@ -1,47 +0,0 @@
-### `tf.svd(tensor, full_matrices=False, compute_uv=True, name=None)` {#svd}
-
-Computes the singular value decompositions of one or more matrices.
-
-Computes the SVD of each inner matrix in `tensor` such that
-`tensor[..., :, :] = u[..., :, :] * diag(s[..., :]) * transpose(v[..., :, :])`
-
-```prettyprint
-# a is a tensor.
-# s is a tensor of singular values.
-# u is a tensor of left singular vectors.
-# v is a tensor of right singular vectors.
-s, u, v = svd(a)
-s = svd(a, compute_uv=False)
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and
- `N`.
-* <b>`full_matrices`</b>: If true, compute full-sized `u` and `v`. If false
- (the default), compute only the leading `P` singular vectors.
- Ignored if `compute_uv` is `False`.
-* <b>`compute_uv`</b>: If `True` then left and right singular vectors will be
- computed and returned in `u` and `v`, respectively. Otherwise, only the
- singular values will be computed, which can be significantly faster.
-* <b>`name`</b>: string, optional name of the operation.
-
-##### Returns:
-
-
-* <b>`s`</b>: Singular values. Shape is `[..., P]`.
-* <b>`u`</b>: Left singular vectors. If `full_matrices` is `False` (default) then
-  shape is `[..., M, P]`; if `full_matrices` is `True` then shape is
-  `[..., M, M]`. Not returned if `compute_uv` is `False`.
-* <b>`v`</b>: Right singular vectors. If `full_matrices` is `False` (default) then
-  shape is `[..., N, P]`. If `full_matrices` is `True` then shape is
-  `[..., N, N]`. Not returned if `compute_uv` is `False`.
-
-@compatibility(numpy)
-Mostly equivalent to numpy.linalg.svd, except that the order of output
-arguments here is `s`, `u`, `v` when `compute_uv` is `True`, as opposed to
-`u`, `s`, `v` for numpy.linalg.svd.
-@end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.compute_gradient_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.compute_gradient_error.md
deleted file mode 100644
index d7175f3239..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.compute_gradient_error.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.test.compute_gradient_error(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None, extra_feed_dict=None)` {#compute_gradient_error}
-
-Computes the gradient error.
-
-Computes the maximum error for dy/dx between the computed Jacobian and the
-numerically estimated Jacobian.
-
-This function will modify the tensors passed in, as it adds more operations
-and hence changes the consumers of the operations of the input tensors.
-
-This function adds operations to the current graph. To compute the error
-using a particular device, such as a GPU, use the standard methods for
-setting a device (e.g. using `with sess.graph.device()` or setting a device
-function in the session constructor).
-
-##### Args:
-
-
-* <b>`x`</b>: a tensor or list of tensors
-* <b>`x_shape`</b>: the dimensions of x as a tuple or an array of ints. If x is a list,
- then this is the list of shapes.
-* <b>`y`</b>: a tensor
-* <b>`y_shape`</b>: the dimensions of y as a tuple or an array of ints.
-* <b>`x_init_value`</b>: (optional) a numpy array of the same shape as "x"
- representing the initial value of x. If x is a list, this should be a list
- of numpy arrays. If this is none, the function will pick a random tensor
- as the initial value.
-* <b>`delta`</b>: (optional) the amount of perturbation.
-* <b>`init_targets`</b>: list of targets to run to initialize model params.
- TODO(mrry): Remove this argument.
-* <b>`extra_feed_dict`</b>: dict that allows fixing specified tensor values
- during the Jacobian calculation.
-
-##### Returns:
-
- The maximum error in between the two Jacobians.
-
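-A minimal sketch (a default session is required, which `with tf.Session()`
-provides; the shapes are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, shape=[2, 3])
-y = tf.tanh(x)
-
-with tf.Session():
-  # Compares the symbolic Jacobian of dy/dx against a numeric estimate.
-  err = tf.test.compute_gradient_error(x, [2, 3], y, [2, 3])
-  assert err < 1e-3
-```
-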
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.gpu_device_name.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.gpu_device_name.md
deleted file mode 100644
index f950d8e1f0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.gpu_device_name.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.test.gpu_device_name()` {#gpu_device_name}
-
-Returns the name of a GPU device if available or the empty string.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.test_src_dir_path.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.test_src_dir_path.md
deleted file mode 100644
index 7811f29fac..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.test.test_src_dir_path.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.test.test_src_dir_path(relative_path)` {#test_src_dir_path}
-
-Creates an absolute test srcdir path given a relative path.
-
-##### Args:
-
-
-* <b>`relative_path`</b>: a path relative to tensorflow root.
- e.g. "core/platform".
-
-##### Returns:
-
- An absolute path to the linked in runfiles.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_double.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_double.md
deleted file mode 100644
index 0cabea178e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.to_double.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.to_double(x, name='ToDouble')` {#to_double}
-
-Casts a tensor to type `float64`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `float64`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `float64`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.trace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.trace.md
deleted file mode 100644
index 666cb43a54..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.trace.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.trace(x, name=None)` {#trace}
-
-Compute the trace of a tensor `x`.
-
-`trace(x)` returns the sum along the main diagonal of each inner-most matrix
-in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then output
-is a tensor of rank `k-2` with dimensions `[I, J, K, ..., L]` where
-
-`output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])`
-
-For example:
-
-```python
-# 'x' is [[1, 2],
-# [3, 4]]
-tf.trace(x) ==> 5
-
-# 'x' is [[1,2,3],
-# [4,5,6],
-# [7,8,9]]
-tf.trace(x) ==> 15
-
-# 'x' is [[[1,2,3],
-# [4,5,6],
-# [7,8,9]],
-# [[-1,-2,-3],
-# [-4,-5,-6],
-# [-7,-8,-9]]]
-tf.trace(x) ==> [15,-15]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The trace of input tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md
deleted file mode 100644
index 75ed61cc9a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.AdagradOptimizer.md
+++ /dev/null
@@ -1,181 +0,0 @@
-Optimizer that implements the Adagrad algorithm.
-
-See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)
-or this
-[intro](http://cs.stanford.edu/~ppasupat/a9online/uploads/proximal_notes.pdf).
-- - -
-
-#### `tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')` {#AdagradOptimizer.__init__}
-
-Construct a new Adagrad optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`initial_accumulator_value`</b>: A floating point value.
- Starting value for the accumulators, must be positive.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "Adagrad".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `initial_accumulator_value` is invalid.
-
-
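-A minimal usage sketch (the variable and loss below are assumed for
-illustration):
-
-```python
-import tensorflow as tf
-
-w = tf.Variable([1.0, 2.0])
-loss = tf.reduce_sum(tf.square(w))  # toy quadratic loss
-opt = tf.train.AdagradOptimizer(learning_rate=0.1)
-train_op = opt.minimize(loss)
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  for _ in range(5):
-    sess.run(train_op)  # accumulates squared gradients, then applies update
-```
-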
-- - -
-
-#### `tf.train.AdagradOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdagradOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Default to the
- name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdagradOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.get_name()` {#AdagradOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.get_slot(var, name)` {#AdagradOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.get_slot_names()` {#AdagradOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdagradOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.QueueRunner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.QueueRunner.md
deleted file mode 100644
index ea6f7cadbf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.QueueRunner.md
+++ /dev/null
@@ -1,175 +0,0 @@
-Holds a list of enqueue operations for a queue, each to be run in a thread.
-
-Queues are a convenient TensorFlow mechanism to compute tensors
-asynchronously using multiple threads. For example in the canonical 'Input
-Reader' setup one set of threads generates filenames in a queue; a second set
-of threads reads records from the files, processes them, and enqueues tensors
-on a second queue; a third set of threads dequeues these input records to
-construct batches and runs them through training operations.
-
-There are several delicate issues when running multiple threads that way:
-closing the queues in sequence as the input is exhausted, correctly catching
-and reporting exceptions, etc.
-
-The `QueueRunner`, combined with the `Coordinator`, helps handle these issues.
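-
-For example, a minimal sketch of this pattern (the queue and enqueue op are
-assumed for illustration):
-
-```python
-import tensorflow as tf
-
-queue = tf.FIFOQueue(capacity=32, dtypes=[tf.float32])
-enqueue_op = queue.enqueue(tf.random_normal([]))
-qr = tf.train.QueueRunner(queue, [enqueue_op] * 4)  # 4 enqueue threads
-
-with tf.Session() as sess:
-  coord = tf.train.Coordinator()
-  threads = qr.create_threads(sess, coord=coord, start=True)
-  for _ in range(10):
-    sess.run(queue.dequeue())
-  coord.request_stop()
-  coord.join(threads)
-```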
-- - -
-
-#### `tf.train.QueueRunner.__init__(queue=None, enqueue_ops=None, close_op=None, cancel_op=None, queue_closed_exception_types=None, queue_runner_def=None, import_scope=None)` {#QueueRunner.__init__}
-
-Create a QueueRunner.
-
-On construction the `QueueRunner` adds an op to close the queue. That op
-will be run if the enqueue ops raise exceptions.
-
-When you later call the `create_threads()` method, the `QueueRunner` will
-create one thread for each op in `enqueue_ops`. Each thread will run its
-enqueue op in parallel with the other threads. The enqueue ops do not have
-to all be the same op, but it is expected that they all enqueue tensors in
-`queue`.
-
-##### Args:
-
-
-* <b>`queue`</b>: A `Queue`.
-* <b>`enqueue_ops`</b>: List of enqueue ops to run in threads later.
-* <b>`close_op`</b>: Op to close the queue. Pending enqueue ops are preserved.
-* <b>`cancel_op`</b>: Op to close the queue and cancel pending enqueue ops.
-* <b>`queue_closed_exception_types`</b>: Optional tuple of Exception types that
- indicate that the queue has been closed when raised during an enqueue
- operation. Defaults to `(tf.errors.OutOfRangeError,)`. Another common
- case includes `(tf.errors.OutOfRangeError, tf.errors.CancelledError)`,
- when some of the enqueue ops may dequeue from other Queues.
-* <b>`queue_runner_def`</b>: Optional `QueueRunnerDef` protocol buffer. If specified,
- recreates the QueueRunner from its contents. `queue_runner_def` and the
- other arguments are mutually exclusive.
-* <b>`import_scope`</b>: Optional `string`. Name scope to add. Only used when
- initializing from protocol buffer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `queue_runner_def` and `queue` are specified.
-* <b>`ValueError`</b>: If `queue` or `enqueue_ops` are not provided when not
- restoring from `queue_runner_def`.
-
-
-- - -
-
-#### `tf.train.QueueRunner.cancel_op` {#QueueRunner.cancel_op}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.close_op` {#QueueRunner.close_op}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.create_threads(sess, coord=None, daemon=False, start=False)` {#QueueRunner.create_threads}
-
-Create threads to run the enqueue ops for the given session.
-
-This method requires a session in which the graph was launched. It creates
-a list of threads, optionally starting them. There is one thread for each
-op passed in `enqueue_ops`.
-
-The `coord` argument is an optional coordinator that the threads will use
-to terminate together and report exceptions. If a coordinator is given,
-this method starts an additional thread to close the queue when the
-coordinator requests a stop.
-
-If previously created threads for the given session are still running, no
-new threads will be created.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session`.
-* <b>`coord`</b>: Optional `Coordinator` object for reporting errors and checking
- stop conditions.
-* <b>`daemon`</b>: Boolean. If `True` make the threads daemon threads.
-* <b>`start`</b>: Boolean. If `True` starts the threads. If `False` the
- caller must call the `start()` method of the returned threads.
-
-##### Returns:
-
- A list of threads.
-
-
-- - -
-
-#### `tf.train.QueueRunner.enqueue_ops` {#QueueRunner.enqueue_ops}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.exceptions_raised` {#QueueRunner.exceptions_raised}
-
-Exceptions raised but not handled by the `QueueRunner` threads.
-
-Exceptions raised in queue runner threads are handled in one of two ways
-depending on whether or not a `Coordinator` was passed to
-`create_threads()`:
-
-* With a `Coordinator`, exceptions are reported to the coordinator and
- forgotten by the `QueueRunner`.
-* Without a `Coordinator`, exceptions are captured by the `QueueRunner` and
- made available in this `exceptions_raised` property.
-
-##### Returns:
-
- A list of Python `Exception` objects. The list is empty if no exception
- was captured. (No exceptions are captured when using a Coordinator.)
-
-
-- - -
-
-#### `tf.train.QueueRunner.from_proto(queue_runner_def, import_scope=None)` {#QueueRunner.from_proto}
-
-Returns a `QueueRunner` object created from `queue_runner_def`.
-
-
-- - -
-
-#### `tf.train.QueueRunner.name` {#QueueRunner.name}
-
-The string name of the underlying Queue.
-
-
-- - -
-
-#### `tf.train.QueueRunner.queue` {#QueueRunner.queue}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.queue_closed_exception_types` {#QueueRunner.queue_closed_exception_types}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.to_proto(export_scope=None)` {#QueueRunner.to_proto}
-
-Converts this `QueueRunner` to a `QueueRunnerDef` protocol buffer.
-
-##### Args:
-
-
-* <b>`export_scope`</b>: Optional `string`. Name scope to remove.
-
-##### Returns:
-
- A `QueueRunnerDef` protocol buffer, or `None` if the `Variable` is not in
- the specified name scope.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Server.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Server.md
deleted file mode 100644
index a7113297ce..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.Server.md
+++ /dev/null
@@ -1,129 +0,0 @@
-An in-process TensorFlow server, for use in distributed training.
-
-A `tf.train.Server` instance encapsulates a set of devices and a
-[`tf.Session`](../../api_docs/python/client.md#Session) target that
-can participate in distributed training. A server belongs to a
-cluster (specified by a [`tf.train.ClusterSpec`](#ClusterSpec)), and
-corresponds to a particular task in a named job. The server can
-communicate with any other server in the same cluster.
-
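-For example, a minimal sketch (the job name and ports are assumptions):
-
-```python
-import tensorflow as tf
-
-cluster = tf.train.ClusterSpec(
-    {"worker": ["localhost:2222", "localhost:2223"]})
-server = tf.train.Server(cluster, job_name="worker", task_index=0)
-
-with tf.Session(server.target) as sess:
-  print(sess.run(tf.constant("hello from worker 0")))
-```
-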
-- - -
-
-#### `tf.train.Server.__init__(server_or_cluster_def, job_name=None, task_index=None, protocol=None, config=None, start=True)` {#Server.__init__}
-
-Creates a new server with the given definition.
-
-The `job_name`, `task_index`, and `protocol` arguments are optional, and
-override any information provided in `server_or_cluster_def`.
-
-##### Args:
-
-
-* <b>`server_or_cluster_def`</b>: A `tf.train.ServerDef` or
- `tf.train.ClusterDef` protocol buffer, or a
- `tf.train.ClusterSpec` object, describing the server to be
- created and/or the cluster of which it is a member.
-* <b>`job_name`</b>: (Optional.) Specifies the name of the job of which the server
- is a member. Defaults to the value in `server_or_cluster_def`, if
- specified.
-* <b>`task_index`</b>: (Optional.) Specifies the task index of the server in its
- job. Defaults to the value in `server_or_cluster_def`, if specified.
- Otherwise defaults to 0 if the server's job has only one task.
-* <b>`protocol`</b>: (Optional.) Specifies the protocol to be used by the server.
- Acceptable values include `"grpc"`. Defaults to the value in
- `server_or_cluster_def`, if specified. Otherwise defaults to `"grpc"`.
-* <b>`config`</b>: (Optional.) A `tf.ConfigProto` that specifies default
- configuration options for all sessions that run on this server.
-* <b>`start`</b>: (Optional.) Boolean, indicating whether to start the server
- after creating it. Defaults to `True`.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- creating the TensorFlow server.
-
-
-- - -
-
-#### `tf.train.Server.create_local_server(config=None, start=True)` {#Server.create_local_server}
-
-Creates a new single-process cluster running on the local host.
-
-This method is a convenience wrapper for creating a
-`tf.train.Server` with a `tf.train.ServerDef` that specifies a
-single-process cluster containing a single task in a job called
-`"local"`.
-
-##### Args:
-
-
-* <b>`config`</b>: (Optional.) A `tf.ConfigProto` that specifies default
- configuration options for all sessions that run on this server.
-* <b>`start`</b>: (Optional.) Boolean, indicating whether to start the server after
- creating it. Defaults to `True`.
-
-##### Returns:
-
- A local `tf.train.Server`.
-
-
-- - -
-
-#### `tf.train.Server.target` {#Server.target}
-
-Returns the target for a `tf.Session` to connect to this server.
-
-To create a
-[`tf.Session`](../../api_docs/python/client.md#Session) that
-connects to this server, use the following snippet:
-
-```python
-server = tf.train.Server(...)
-with tf.Session(server.target):
- # ...
-```
-
-##### Returns:
-
- A string containing a session target for this server.
-
-
-- - -
-
-#### `tf.train.Server.server_def` {#Server.server_def}
-
-Returns the `tf.train.ServerDef` for this server.
-
-##### Returns:
-
- A `tf.train.ServerDef` protocol buffer that describes the configuration
- of this server.
-
-
-
-- - -
-
-#### `tf.train.Server.start()` {#Server.start}
-
-Starts this server.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- starting the TensorFlow server.
-
-
-- - -
-
-#### `tf.train.Server.join()` {#Server.join}
-
-Blocks until the server has shut down.
-
-This method currently blocks forever.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- joining the TensorFlow server.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.SyncReplicasOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.SyncReplicasOptimizer.md
deleted file mode 100644
index 84f1099ffe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.SyncReplicasOptimizer.md
+++ /dev/null
@@ -1,268 +0,0 @@
-Class to synchronize and aggregate gradients, and pass them to the optimizer.
-
-In a typical asynchronous training environment, it's common to have some
-stale gradients. For example, with N-replica asynchronous training,
-gradients will be applied to the variables N times independently. Depending
-on each replica's training speed, some gradients might be calculated from
-copies of the variable from several steps back (N-1 steps on average). This
-optimizer avoids stale gradients by collecting gradients from all replicas,
-averaging them, then applying them to the variables in one shot, after
-which replicas can fetch the new variables and continue.
-
-The following accumulators/queue are created:
-
-* N `gradient accumulators`, one per variable to train. Gradients are pushed
- to them and the chief worker will wait until enough gradients are collected
- and then average them before applying to variables. The accumulator will
- drop all stale gradients (more details in the accumulator op).
-* 1 `token` queue where the optimizer pushes the new global_step value after
- all variables are updated.
-
-The following local variable is created:
-* `sync_rep_local_step`, one per replica. Compared against the global_step in
- each accumulator to check for staleness of the gradients.
-
-The optimizer adds nodes to the graph to collect gradients and pause the
-trainers until variables are updated.
-For the Parameter Server job:
-
-1. An accumulator is created for each variable, and each replica pushes the
- gradients into the accumulators instead of directly applying them to the
- variables.
-2. Each accumulator averages once enough gradients (replicas_to_aggregate)
- have been accumulated.
-3. Apply the averaged gradients to the variables.
-4. Only after all variables have been updated, increment the global step.
-5. Only after step 4, pushes `global_step` in the `token_queue`, once for
- each worker replica. The workers can now fetch the global step, use it to
- update their local_step variables and start the next batch.
-
-For the replicas:
-
-1. Start a step: fetch variables and compute gradients.
-2. Once the gradients have been computed, push them into gradient
- accumulators. Each accumulator will check the staleness and drop the stale.
-3. After pushing all the gradients, dequeue an updated value of global_step
- from the token queue and record that step to its local_step variable. Note
- that this is effectively a barrier.
-4. Start the next batch.
-
-### Usage
-
-```python
-# Create any optimizer to update the variables, say a simple SGD:
-opt = GradientDescentOptimizer(learning_rate=0.1)
-
-# Wrap the optimizer with sync_replicas_optimizer with 50 replicas: at each
-# step the optimizer collects 50 gradients before applying to variables.
-# Note that if you want to have 2 backup replicas, you can change
-# total_num_replicas=52 and make sure this number matches how many physical
-# replicas you started in your job.
-opt = tf.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=50,
-                                     total_num_replicas=50)
-
-# Some models have startup_delays to help stabilize the model but when using
-# sync_replicas training, set it to 0.
-
-# Now you can call `minimize()` or `compute_gradients()` and
-# `apply_gradients()` normally
-training_op = opt.minimize(total_loss, global_step=self.global_step)
-
-
-# You can create the hook which handles initialization and queues.
-sync_replicas_hook = opt.make_session_run_hook(is_chief)
-```
-
-In the training program, every worker will run the train_op as if not
-synchronized.
-
-```python
-with training.MonitoredTrainingSession(
- master=workers[worker_id].target, is_chief=is_chief,
- hooks=[sync_replicas_hook]) as mon_sess:
- while not mon_sess.should_stop():
- mon_sess.run(training_op)
-```
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.__init__(opt, replicas_to_aggregate, total_num_replicas=None, variable_averages=None, variables_to_average=None, use_locking=False, name='sync_replicas')` {#SyncReplicasOptimizer.__init__}
-
-Construct a sync_replicas optimizer.
-
-##### Args:
-
-
-* <b>`opt`</b>: The actual optimizer that will be used to compute and apply the
- gradients. Must be one of the Optimizer classes.
-* <b>`replicas_to_aggregate`</b>: number of replicas to aggregate for each variable
- update.
-* <b>`total_num_replicas`</b>: Total number of tasks/workers/replicas, could be
- different from replicas_to_aggregate.
- If total_num_replicas > replicas_to_aggregate: it is backup_replicas +
- replicas_to_aggregate.
- If total_num_replicas < replicas_to_aggregate: Replicas compute
- multiple batches per update to variables.
-* <b>`variable_averages`</b>: Optional `ExponentialMovingAverage` object, used to
- maintain moving averages for the variables passed in
- `variables_to_average`.
-* <b>`variables_to_average`</b>: a list of variables that need to be averaged. Only
- needed if variable_averages is passed in.
-* <b>`use_locking`</b>: If True use locks for update operation.
-* <b>`name`</b>: string. Optional name of the returned operation.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.compute_gradients(*args, **kwargs)` {#SyncReplicasOptimizer.compute_gradients}
-
-Compute gradients of "loss" for the variables in "var_list".
-
-This simply wraps the compute_gradients() from the real optimizer. The
-gradients will be aggregated in apply_gradients() so that users can
-modify the gradients, for example by clipping with a per-replica global
-norm, if needed. Computing the global norm over aggregated gradients can
-be problematic, as one replica's huge gradients can skew the gradients
-from the other replicas.
-
-##### Args:
-
-
-* <b>`*args`</b>: Arguments for compute_gradients().
-* <b>`**kwargs`</b>: Keyword arguments for compute_gradients().
-
-##### Returns:
-
- A list of (gradient, variable) pairs.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#SyncReplicasOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This contains most of the synchronization implementation and also wraps the
-apply_gradients() from the real optimizer.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- compute_gradients().
-* <b>`global_step`</b>: Optional Variable to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Default to the
- name passed to the Optimizer constructor.
-
-##### Returns:
-
-
-* <b>`train_op`</b>: The op to dequeue a token so the replicas can exit this batch
- and start the next one. This is executed by each replica.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `grads_and_vars` is empty.
-* <b>`ValueError`</b>: If `global_step` is not provided, since the staleness
- cannot be checked without it.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.get_chief_queue_runner()` {#SyncReplicasOptimizer.get_chief_queue_runner}
-
-Returns the QueueRunner for the chief to execute.
-
-This includes the operations to synchronize replicas: aggregate gradients,
-apply to variables, increment global step, insert tokens to token queue.
-
-Note that this can only be called after calling apply_gradients() which
-actually generates this queue runner.
-
-##### Returns:
-
- A `QueueRunner` for chief to execute.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If this is called before apply_gradients().
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.get_init_tokens_op(num_tokens=-1)` {#SyncReplicasOptimizer.get_init_tokens_op}
-
-Returns the op to fill the sync_token_queue with the tokens.
-
-This is supposed to be executed at the beginning of the chief/sync thread
-so that even if the total_num_replicas is less than replicas_to_aggregate,
-the model can still proceed as the replicas can compute multiple steps per
-variable update. Make sure:
-`num_tokens >= replicas_to_aggregate - total_num_replicas`.
-
-##### Args:
-
-
-* <b>`num_tokens`</b>: Number of tokens to add to the queue.
-
-##### Returns:
-
- An op for the chief/sync replica to fill the token queue.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If this is called before apply_gradients().
-* <b>`ValueError`</b>: If `num_tokens` is smaller than replicas_to_aggregate -
- total_num_replicas.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.get_slot(*args, **kwargs)` {#SyncReplicasOptimizer.get_slot}
-
-Return a slot named "name" created for "var" by the Optimizer.
-
-This simply wraps the get_slot() from the actual optimizer.
-
-##### Args:
-
-
-* <b>`*args`</b>: Arguments for get_slot().
-* <b>`**kwargs`</b>: Keyword arguments for get_slot().
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.get_slot_names(*args, **kwargs)` {#SyncReplicasOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-This simply wraps the get_slot_names() from the actual optimizer.
-
-##### Args:
-
-
-* <b>`*args`</b>: Arguments for get_slot().
-* <b>`**kwargs`</b>: Keyword arguments for get_slot().
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.make_session_run_hook(is_chief, num_tokens=-1)` {#SyncReplicasOptimizer.make_session_run_hook}
-
-Creates a hook to handle SyncReplicasHook ops such as initialization.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.assert_global_step.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.assert_global_step.md
deleted file mode 100644
index 2bc8feb0c2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.assert_global_step.md
+++ /dev/null
@@ -1,9 +0,0 @@
-### `tf.train.assert_global_step(global_step_tensor)` {#assert_global_step}
-
-Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.
-
-##### Args:
-
-
-* <b>`global_step_tensor`</b>: `Tensor` to test.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.batch_join.md
deleted file mode 100644
index d49358f4b5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.batch_join.md
+++ /dev/null
@@ -1,88 +0,0 @@
-### `tf.train.batch_join(tensors_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#batch_join}
-
-Runs a list of tensors to fill a queue to create batches of examples.
-
-The `tensors_list` argument is a list of tuples of tensors, or a list of
-dictionaries of tensors. Each element in the list is treated similarly
-to the `tensors` argument of `tf.train.batch()`.
-
-Enqueues a different list of tensors in different threads.
-Implemented using a queue -- a `QueueRunner` for the queue
-is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-`len(tensors_list)` threads will be started,
-with thread `i` enqueuing the tensors from
-`tensors_list[i]`. `tensors_list[i1][j]` must match
-`tensors_list[i2][j]` in type and shape, except in the first
-dimension if `enqueue_many` is true.
-
-If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
-to represent a single example. An input tensor `x` will be output as a
-tensor with shape `[batch_size] + x.shape`.
-
-If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
-represent a batch of examples, where the first dimension is indexed
-by example, and all members of `tensors_list[i]` should have the
-same size in the first dimension. The slices of any input tensor
-`x` are treated as examples, and the output tensors will have shape
-`[batch_size] + x.shape[1:]`.
-
-The `capacity` argument controls how long the prefetching is allowed to
-grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception, however, if this operation is used in your main thread
-you are responsible for catching this yourself.
-
-*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
-(i) the `shapes` argument is passed, or (ii) all of the tensors in
-`tensors_list` must have fully-defined shapes. `ValueError` will be
-raised if neither of these conditions holds.
-
-If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
-tensors is known, but individual dimensions may have value `None`.
-In this case, for each enqueue the dimensions with value `None`
-may have a variable length; upon dequeue, the output tensors will be padded
-on the right to the maximum shape of the tensors in the current minibatch.
-For numbers, this padding takes value 0. For strings, this padding is
-the empty string. See `PaddingFIFOQueue` for more info.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queue is closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape` method, will have a first `Dimension` value of `None`, and
-operations that depend on a fixed batch_size will fail.
-
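-For example, a minimal sketch (the `read_example` helper and file names are
-hypothetical, assumed for illustration):
-
-```python
-import tensorflow as tf
-
-# One (image, label) tuple per input file; each tuple is enqueued by its
-# own thread.
-example_lists = [read_example(f)
-                 for f in ["file0.tfrecords", "file1.tfrecords"]]
-image_batch, label_batch = tf.train.batch_join(example_lists, batch_size=32)
-```
-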
-##### Args:
-
-
-* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
-* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensors_list` is a single
- example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors_list[i]`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same number and types as
- `tensors_list[i]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors_list`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.get_checkpoint_mtimes.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.get_checkpoint_mtimes.md
deleted file mode 100644
index 0586e55851..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.get_checkpoint_mtimes.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.train.get_checkpoint_mtimes(checkpoint_prefixes)` {#get_checkpoint_mtimes}
-
-Returns the mtimes (modification timestamps) of the checkpoints.
-
-Globs for the checkpoints pointed to by `checkpoint_prefixes`. If the files
-exist, collects their mtimes. Both V2 and V1 checkpoints are considered, in
-that priority.
-
-This is the recommended way to get the mtimes, since it takes into account
-the naming difference between V1 and V2 formats.
-
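-For example, a minimal sketch (`ckpt_dir` is an assumed checkpoint
-directory):
-
-```python
-import tensorflow as tf
-
-prefix = tf.train.latest_checkpoint(ckpt_dir)
-if prefix is not None:
-  mtimes = tf.train.get_checkpoint_mtimes([prefix])
-```
-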
-##### Args:
-
-
-* <b>`checkpoint_prefixes`</b>: a list of checkpoint paths, typically the results of
- `Saver.save()` or those of `tf.train.latest_checkpoint()`, regardless of
- sharded/non-sharded or V1/V2.
-
-##### Returns:
-
- A list of mtimes (in microseconds) of the found checkpoints.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.range_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.range_input_producer.md
deleted file mode 100644
index 51fac958ec..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.train.range_input_producer.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#range_input_producer}
-
-Produces the integers from 0 to limit-1 in a queue.
-
-Note: if `num_epochs` is not `None`, this function creates a local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
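-For example, a minimal sketch:
-
-```python
-import tensorflow as tf
-
-index_queue = tf.train.range_input_producer(limit=10, shuffle=True)
-index = index_queue.dequeue()
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  coord = tf.train.Coordinator()
-  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
-  print(sess.run(index))  # one shuffled integer in [0, 10)
-  coord.request_stop()
-  coord.join(threads)
-```
-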
-##### Args:
-
-
-* <b>`limit`</b>: An int32 scalar tensor.
-* <b>`num_epochs`</b>: An integer (optional). If specified, `range_input_producer`
- produces each integer `num_epochs` times before generating an
- OutOfRange error. If not specified, `range_input_producer` can cycle
- through the integers an unlimited number of times.
-* <b>`shuffle`</b>: Boolean. If true, the integers are randomly shuffled within each
- epoch.
-* <b>`seed`</b>: An integer (optional). Seed used if shuffle == True.
-* <b>`capacity`</b>: An integer. Sets the queue capacity.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: A name for the operations (optional).
-
-##### Returns:
-
- A Queue with the output integers. A `QueueRunner` for the Queue
- is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.truncatediv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.truncatediv.md
deleted file mode 100644
index 99c9d55cea..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.truncatediv.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.truncatediv(x, y, name=None)` {#truncatediv}
-
-Returns x / y element-wise for integer types.
-
-Truncation designates that negative numbers will round fractional quantities
-toward zero. I.e. -7 / 5 = -1. This matches C semantics, but it is different
-from Python semantics. See `FloorDiv` for a division function that matches
-Python semantics.
-
-*NOTE*: `TruncateDiv` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
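-For example, a minimal sketch contrasting the two division semantics (the
-input values are assumed):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([-7, 7])
-y = tf.constant([5, 5])
-with tf.Session() as sess:
-  print(sess.run(tf.truncatediv(x, y)))  # ==> [-1  1], rounds toward zero
-  print(sess.run(tf.floordiv(x, y)))     # ==> [-2  1], Python-style floor
-```
-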
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unique_with_counts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unique_with_counts.md
deleted file mode 100644
index 0228699c63..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf.unique_with_counts.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.unique_with_counts(x, out_idx=None, name=None)` {#unique_with_counts}
-
-Finds unique elements in a 1-D tensor.
-
-This operation returns a tensor `y` containing all of the unique elements of `x`
-sorted in the same order that they occur in `x`. This operation also returns a
-tensor `idx` the same size as `x` that contains the index of each value of `x`
-in the unique output `y`. Finally, it returns a third tensor `count` that
-contains the count of each element of `y` in `x`. In other words:
-
-`y[idx[i]] = x[i] for i in [0, 1, ..., len(x) - 1]`
-
-For example:
-
-```prettyprint
-# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
-y, idx, count = unique_with_counts(x)
-y ==> [1, 2, 4, 7, 8]
-idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
-count ==> [2, 1, 3, 1, 2]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. 1-D.
-* <b>`out_idx`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (y, idx, count).
-
-* <b>`y`</b>: A `Tensor`. Has the same type as `x`. 1-D.
-* <b>`idx`</b>: A `Tensor` of type `out_idx`. 1-D.
-* <b>`count`</b>: A `Tensor` of type `out_idx`. 1-D.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.DebugTensorDatum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.DebugTensorDatum.md
deleted file mode 100644
index 853f2ef5f5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.DebugTensorDatum.md
+++ /dev/null
@@ -1,146 +0,0 @@
-A single tensor dumped by TensorFlow Debugger (tfdbg).
-
-Contains metadata about the dumped tensor, including `timestamp`,
-`node_name`, `output_slot`, `debug_op`, and path to the dump file
-(`file_path`).
-
-This type does not hold the generally space-expensive tensor value (numpy
-array). Instead, it points to the file from which the tensor value can be
-loaded (with the `get_tensor` method) if needed.
-- - -
-
-#### `tf_debug.DebugTensorDatum.__init__(dump_root, debug_dump_rel_path)` {#DebugTensorDatum.__init__}
-
-`DebugTensorDatum` constructor.
-
-##### Args:
-
-
-* <b>`dump_root`</b>: (`str`) Debug dump root directory.
-* <b>`debug_dump_rel_path`</b>: (`str`) Path to a debug dump file, relative to the
- `dump_root`. For example, suppose the debug dump root
- directory is `/tmp/tfdbg_1` and the dump file is at
- `/tmp/tfdbg_1/ns_1/node_a_0_DebugIdentity_123456789`, then
- the value of the debug_dump_rel_path should be
- `ns_1/node_a_0_DebugIdentity_123456789`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the base file name of the dump file does not conform to
- the dump file naming pattern:
- `node_name`_`output_slot`_`debug_op`_`timestamp`
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.__repr__()` {#DebugTensorDatum.__repr__}
-
-
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.__str__()` {#DebugTensorDatum.__str__}
-
-
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.debug_op` {#DebugTensorDatum.debug_op}
-
-Name of the debug op.
-
-##### Returns:
-
- (`str`) debug op name (e.g., `DebugIdentity`).
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.dump_size_bytes` {#DebugTensorDatum.dump_size_bytes}
-
-Size of the dump file.
-
-Unit: byte.
-
-##### Returns:
-
- If the dump file exists, size of the dump file, in bytes.
- If the dump file does not exist, None.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.file_path` {#DebugTensorDatum.file_path}
-
-Path to the file which stores the value of the dumped tensor.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.get_tensor()` {#DebugTensorDatum.get_tensor}
-
-Get tensor from the dump (`Event`) file.
-
-##### Returns:
-
- The tensor loaded from the dump (`Event`) file.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.node_name` {#DebugTensorDatum.node_name}
-
-Name of the node from which the tensor value was dumped.
-
-##### Returns:
-
- (`str`) name of the node watched by the debug op.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.output_slot` {#DebugTensorDatum.output_slot}
-
-Output slot index from which the tensor value was dumped.
-
-##### Returns:
-
- (`int`) output slot index watched by the debug op.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.tensor_name` {#DebugTensorDatum.tensor_name}
-
-Name of the tensor watched by the debug op.
-
-##### Returns:
-
- (`str`) `Tensor` name, in the form of `node_name`:`output_slot`
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.timestamp` {#DebugTensorDatum.timestamp}
-
-Timestamp of when this tensor value was dumped.
-
-##### Returns:
-
- (`int`) The timestamp in microseconds.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.watch_key` {#DebugTensorDatum.watch_key}
-
-Watch key that identifies a debug watch on a tensor.
-
-##### Returns:
-
- (`str`) A watch key, in the form of `tensor_name`:`debug_op`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.LocalCLIDebugWrapperSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.LocalCLIDebugWrapperSession.md
deleted file mode 100644
index 8194e8ef07..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.LocalCLIDebugWrapperSession.md
+++ /dev/null
@@ -1,207 +0,0 @@
-Concrete subclass of BaseDebugWrapperSession implementing a local CLI.
-
-This class has all the methods that a `session.Session` object has, in order
-to support debugging with minimal code changes. Invoking its `run()` method
-will launch the command-line interface (CLI) of tfdbg.
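-
-A minimal usage sketch (`fetches` is an assumed fetch target):
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-sess = tf.Session()
-sess = tf_debug.LocalCLIDebugWrapperSession(sess)
-sess.run(fetches)  # drops into the tfdbg CLI around the run
-```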
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.__enter__()` {#LocalCLIDebugWrapperSession.__enter__}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.__exit__(exec_type, exec_value, exec_tb)` {#LocalCLIDebugWrapperSession.__exit__}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.__init__(sess, dump_root=None, log_usage=True, ui_type='curses')` {#LocalCLIDebugWrapperSession.__init__}
-
-Constructor of LocalCLIDebugWrapperSession.
-
-##### Args:
-
-
-* <b>`sess`</b>: The TensorFlow `Session` object being wrapped.
-* <b>`dump_root`</b>: (`str`) optional path to the dump root directory. Must be a
- directory that does not exist or an empty directory. If the directory
- does not exist, it will be created by the debugger core during debug
- `run()` calls and removed afterwards.
-* <b>`log_usage`</b>: (`bool`) whether the usage of this class is to be logged.
-* <b>`ui_type`</b>: (`str`) requested UI type. Currently supported:
- (curses | readline)
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If dump_root is an existing and non-empty directory or if
- dump_root is a file.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.add_tensor_filter(filter_name, tensor_filter)` {#LocalCLIDebugWrapperSession.add_tensor_filter}
-
-Add a tensor filter.
-
-##### Args:
-
-
-* <b>`filter_name`</b>: (`str`) name of the filter.
-* <b>`tensor_filter`</b>: (`callable`) the filter callable. See the doc string of
- `DebugDumpDir.find()` for more details about its signature.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.close()` {#LocalCLIDebugWrapperSession.close}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.graph` {#LocalCLIDebugWrapperSession.graph}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.invoke_node_stepper(node_stepper, restore_variable_values_on_exit=True)` {#LocalCLIDebugWrapperSession.invoke_node_stepper}
-
-Overrides method in base class to implement interactive node stepper.
-
-##### Args:
-
-
-* <b>`node_stepper`</b>: (`stepper.NodeStepper`) The underlying NodeStepper API
- object.
-* <b>`restore_variable_values_on_exit`</b>: (`bool`) Whether any variables whose
- values have been altered during this node-stepper invocation should be
- restored to their old values when this invocation ends.
-
-##### Returns:
-
- The same return values as the `Session.run()` call on the same fetches as
- the NodeStepper.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.on_run_end(request)` {#LocalCLIDebugWrapperSession.on_run_end}
-
-Overrides on-run-end callback.
-
-##### Actions taken:
-
- 1) Load the debug dump.
- 2) Bring up the Analyzer CLI.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of OnRunEndRequest.
-
-##### Returns:
-
- An instance of OnRunEndResponse.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.on_run_start(request)` {#LocalCLIDebugWrapperSession.on_run_start}
-
-Overrides on-run-start callback.
-
-##### Invoke the CLI to let user choose what action to take:
-
- `run` / `invoke_stepper`.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnRunStartRequest`.
-
-##### Returns:
-
- An instance of `OnRunStartResponse`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If user chooses to prematurely exit the debugger.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.on_session_init(request)` {#LocalCLIDebugWrapperSession.on_session_init}
-
-Overrides on-session-init callback.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnSessionInitRequest`.
-
-##### Returns:
-
- An instance of `OnSessionInitResponse`.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.partial_run(handle, fetches, feed_dict=None)` {#LocalCLIDebugWrapperSession.partial_run}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.partial_run_setup(fetches, feeds=None)` {#LocalCLIDebugWrapperSession.partial_run_setup}
-
-Sets up the feeds and fetches for partial runs in the session.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#LocalCLIDebugWrapperSession.run}
-
-Wrapper around Session.run() that inserts tensor watch options.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as the `fetches` arg to regular `Session.run()`.
-* <b>`feed_dict`</b>: Same as the `feed_dict` arg to regular `Session.run()`.
-* <b>`options`</b>: Same as the `options` arg to regular `Session.run()`.
-* <b>`run_metadata`</b>: Same as the `run_metadata` arg to regular `Session.run()`.
-
-##### Returns:
-
- Simply forwards the output of the wrapped `Session.run()` call.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: On invalid `OnRunStartAction` value.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.sess_str` {#LocalCLIDebugWrapperSession.sess_str}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.session` {#LocalCLIDebugWrapperSession.session}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.add_debug_tensor_watch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.add_debug_tensor_watch.md
deleted file mode 100644
index 1c79b12669..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.add_debug_tensor_watch.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf_debug.add_debug_tensor_watch(run_options, node_name, output_slot=0, debug_ops='DebugIdentity', debug_urls=None, global_step=-1)` {#add_debug_tensor_watch}
-
-Add watch on a `Tensor` to `RunOptions`.
-
-N.B.: Under certain circumstances, the `Tensor` may not actually be watched
- (e.g., if the node of the `Tensor` is constant-folded during runtime).
-
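-For example, a minimal sketch (the node name and debug URL are assumptions):
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-run_options = tf.RunOptions()
-tf_debug.add_debug_tensor_watch(
-    run_options, "hidden/weights", output_slot=0,
-    debug_urls="file:///tmp/tfdbg_dump_1")
-# Then pass `options=run_options` to `Session.run()`.
-```
-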
-##### Args:
-
-
-* <b>`run_options`</b>: An instance of `config_pb2.RunOptions` to be modified.
-* <b>`node_name`</b>: (`str`) name of the node to watch.
-* <b>`output_slot`</b>: (`int`) output slot index of the tensor from the watched node.
-* <b>`debug_ops`</b>: (`str` or `list` of `str`) name(s) of the debug op(s). Can be a
- `list` of `str` or a single `str`. The latter case is equivalent to a
- `list` of `str` with only one element.
-* <b>`debug_urls`</b>: (`str` or `list` of `str`) URL(s) to send debug values to,
- e.g., `file:///tmp/tfdbg_dump_1`, `grpc://localhost:12345`.
-* <b>`global_step`</b>: (`int`) Optional global_step count for this debug tensor
- watch.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.load_tensor_from_event_file.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.load_tensor_from_event_file.md
deleted file mode 100644
index 453be17643..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard3/tf_debug.load_tensor_from_event_file.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf_debug.load_tensor_from_event_file(event_file_path)` {#load_tensor_from_event_file}
-
-Load a tensor from an event file.
-
-Assumes that the event file contains an `Event` protobuf and the `Event`
-protobuf contains a `Tensor` value.
-
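-For example, a minimal sketch (the dump file path is assumed):
-
-```python
-from tensorflow.python import debug as tf_debug
-
-value = tf_debug.load_tensor_from_event_file(
-    "/tmp/tfdbg_1/ns_1/node_a_0_DebugIdentity_123456789")
-```
-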
-##### Args:
-
-
-* <b>`event_file_path`</b>: (`str`) path to the event file.
-
-##### Returns:
-
- The tensor value loaded from the event file, as a `numpy.ndarray`. For
- uninitialized Tensors, returns `None`. For Tensors of data types that
- cannot be converted to `numpy.ndarray` (e.g., `tf.resource`), returns
- `None`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.Assert.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.Assert.md
deleted file mode 100644
index 35325fadaa..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.Assert.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.Assert(condition, data, summarize=None, name=None)` {#Assert}
-
-Asserts that the given condition is true.
-
-If `condition` evaluates to false, print the list of tensors in `data`.
-`summarize` determines how many entries of the tensors to print.
-
-NOTE: To ensure that Assert executes, one usually attaches a dependency:
-
-```python
-# Ensure maximum element of x is less than or equal to 1
-assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
-with tf.control_dependencies([assert_op]):
- ... code using x ...
-```
-
-##### Args:
-
-
-* <b>`condition`</b>: The condition to evaluate.
-* <b>`data`</b>: The tensors to print out when condition is false.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`assert_op`</b>: An `Operation` that, when executed, raises a
- `tf.errors.InvalidArgumentError` if `condition` is not true.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ConditionalAccumulator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ConditionalAccumulator.md
deleted file mode 100644
index e555239caa..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.ConditionalAccumulator.md
+++ /dev/null
@@ -1,136 +0,0 @@
-A conditional accumulator for aggregating gradients.
-
-Up-to-date gradients (i.e., gradients computed at the accumulator's
-current time step) are added to the accumulator.
-
-Extraction of the average gradient is blocked until the required number of
-gradients has been accumulated.
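-
-A minimal usage sketch (the gradient value is assumed for illustration):
-
-```python
-import tensorflow as tf
-
-acc = tf.ConditionalAccumulator(dtype=tf.float32, shape=[2])
-apply_op = acc.apply_grad(tf.constant([1.0, 2.0]), local_step=0)
-avg = acc.take_grad(num_required=1)
-
-with tf.Session() as sess:
-  sess.run(apply_op)
-  print(sess.run(avg))  # ==> [ 1.  2.]
-```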
-- - -
-
-#### `tf.ConditionalAccumulator.__init__(dtype, shape=None, shared_name=None, name='conditional_accumulator')` {#ConditionalAccumulator.__init__}
-
-Creates a new ConditionalAccumulator.
-
-##### Args:
-
-
-* <b>`dtype`</b>: Datatype of the accumulated gradients.
-* <b>`shape`</b>: Shape of the accumulated gradients.
-* <b>`shared_name`</b>: Optional. If non-empty, this accumulator will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.accumulator_ref` {#ConditionalAccumulator.accumulator_ref}
-
-The underlying accumulator reference.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.apply_grad(grad, local_step=0, name=None)` {#ConditionalAccumulator.apply_grad}
-
-Attempts to apply a gradient to the accumulator.
-
-The attempt is silently dropped if the gradient is stale, i.e., local_step
-is less than the accumulator's global time step.
-
-##### Args:
-
-
-* <b>`grad`</b>: The gradient tensor to be applied.
-* <b>`local_step`</b>: Time step at which the gradient was computed.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- The operation that (conditionally) applies a gradient to the accumulator.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `grad` is of the wrong shape.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.dtype` {#ConditionalAccumulator.dtype}
-
-The datatype of the gradients accumulated by this accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.name` {#ConditionalAccumulator.name}
-
-The name of the underlying accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.num_accumulated(name=None)` {#ConditionalAccumulator.num_accumulated}
-
-Number of gradients that have currently been aggregated in accumulator.
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Number of accumulated gradients currently in accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.set_global_step(new_global_step, name=None)` {#ConditionalAccumulator.set_global_step}
-
-Sets the global time step of the accumulator.
-
-The operation logs a warning if we attempt to set it to a time step that is
-lower than the accumulator's own time step.
-
-##### Args:
-
-
-* <b>`new_global_step`</b>: Value of new time step. Can be a variable or a constant
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Operation that sets the accumulator's time step.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.take_grad(num_required, name=None)` {#ConditionalAccumulator.take_grad}
-
-Attempts to extract the average gradient from the accumulator.
-
-The operation blocks until sufficient number of gradients have been
-successfully applied to the accumulator.
-
-Once successful, the following actions are also triggered:
-- Counter of accumulated gradients is reset to 0.
-- Aggregated gradient is reset to 0 tensor.
-- Accumulator's internal time step is incremented by 1.
-
-##### Args:
-
-
-* <b>`num_required`</b>: Number of gradients that need to have been aggregated.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- A tensor holding the value of the average gradient.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If `num_required` < 1.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.FixedLengthRecordReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.FixedLengthRecordReader.md
deleted file mode 100644
index 5e3ae19b93..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.FixedLengthRecordReader.md
+++ /dev/null
@@ -1,175 +0,0 @@
-A Reader that outputs fixed-length records from a file.
-
-See ReaderBase for supported methods.
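-
-A minimal usage sketch (the file names and record size are assumptions):
-
-```python
-import tensorflow as tf
-
-filename_queue = tf.train.string_input_producer(["data0.bin", "data1.bin"])
-reader = tf.FixedLengthRecordReader(record_bytes=16)
-key, value = reader.read(filename_queue)
-record = tf.decode_raw(value, tf.uint8)  # 16 uint8 values per record
-```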
-- - -
-
-#### `tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None)` {#FixedLengthRecordReader.__init__}
-
-Create a FixedLengthRecordReader.
-
-##### Args:
-
-
-* <b>`record_bytes`</b>: An int.
-* <b>`header_bytes`</b>: An optional int. Defaults to 0.
-* <b>`footer_bytes`</b>: An optional int. Defaults to 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.num_records_produced(name=None)` {#FixedLengthRecordReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.num_work_units_completed(name=None)` {#FixedLengthRecordReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.read(queue, name=None)` {#FixedLengthRecordReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.read_up_to(queue, num_records, name=None)` {#FixedLengthRecordReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than num_records even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.reader_ref` {#FixedLengthRecordReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.reset(name=None)` {#FixedLengthRecordReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.restore_state(state, name=None)` {#FixedLengthRecordReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.serialize_state(name=None)` {#FixedLengthRecordReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.supports_serialize` {#FixedLengthRecordReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
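-A minimal usage sketch (the file name and 4-byte record size are assumptions;
-queue runners must be started before reading):
-
-```python
-filename_queue = tf.train.string_input_producer(["data.bin"])
-reader = tf.FixedLengthRecordReader(record_bytes=4)
-key, value = reader.read(filename_queue)
-record = tf.decode_raw(value, tf.uint8)  # 4 uint8 values per record
-```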
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.argmin.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.argmin.md
deleted file mode 100644
index 344cb01ce9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.argmin.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.argmin(input, axis=None, name=None, dimension=None)` {#argmin}
-
-Returns the index with the smallest value across axes of a tensor.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`axis`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- int32, 0 <= axis < rank(input). Describes which axis
- of the input Tensor to reduce across. For vectors, use axis = 0.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`dimension`</b>: Deprecated alias for `axis` (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
-
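-For example:
-
-```python
-x = tf.constant([5.0, 1.0, 3.0])
-tf.argmin(x, axis=0)  # ==> 1, the index of the smallest value
-```
-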
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.assert_less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.assert_less.md
deleted file mode 100644
index b6bc1000c7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.assert_less.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.assert_less(x, y, data=None, summarize=None, message=None, name=None)` {#assert_less}
-
-Assert the condition `x < y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_less(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] < y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_less".
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x < y` is False.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.broadcast_static_shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.broadcast_static_shape.md
deleted file mode 100644
index 3d5e1ea96a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.broadcast_static_shape.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.broadcast_static_shape(shape_x, shape_y)` {#broadcast_static_shape}
-
-Returns the broadcasted static shape between `shape_x` and `shape_y`.
-
-##### Args:
-
-
-* <b>`shape_x`</b>: A `TensorShape`
-* <b>`shape_y`</b>: A `TensorShape`
-
-##### Returns:
-
- A `TensorShape` representing the broadcasted shape.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the two shapes can not be broadcasted.
-
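-For example:
-
-```python
-sx = tf.TensorShape([1, 3])
-sy = tf.TensorShape([2, 1])
-tf.broadcast_static_shape(sx, sy)  # ==> TensorShape([2, 3])
-```
-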
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.clip_by_value.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.clip_by_value.md
deleted file mode 100644
index 7cd7e0311e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.clip_by_value.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)` {#clip_by_value}
-
-Clips tensor values to a specified min and max.
-
-Given a tensor `t`, this operation returns a tensor of the same type and
-shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`.
-Any values less than `clip_value_min` are set to `clip_value_min`. Any values
-greater than `clip_value_max` are set to `clip_value_max`.
-
-##### Args:
-
-
-* <b>`t`</b>: A `Tensor`.
-* <b>`clip_value_min`</b>: A 0-D (scalar) `Tensor`. The minimum value to clip by.
-* <b>`clip_value_max`</b>: A 0-D (scalar) `Tensor`. The maximum value to clip by.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A clipped `Tensor`.
-
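-For example:
-
-```python
-t = tf.constant([-2.0, 0.5, 3.0])
-tf.clip_by_value(t, 0.0, 1.0)  # ==> [0.0, 0.5, 1.0]
-```
-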
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.complex.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.complex.md
deleted file mode 100644
index 79809ab1d6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.complex.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.complex(real, imag, name=None)` {#complex}
-
-Converts two real tensors to a complex tensor, element-wise.
-
-Given a tensor `real` representing the real part of a complex number, and a
-tensor `imag` representing the imaginary part of a complex number, this
-operation returns complex numbers elementwise of the form \\(a + bj\\), where
-*a* represents the `real` part and *b* represents the `imag` part.
-
-The input tensors `real` and `imag` must have the same shape.
-
-For example:
-
-```
-# tensor `real` is [2.25, 3.25]
-# tensor `imag` is [4.75, 5.75]
-tf.complex(real, imag) ==> [2.25 + 4.75j, 3.25 + 5.75j]
-```
-
-##### Args:
-
-
-* <b>`real`</b>: A `Tensor`. Must be one of the following types: `float32`,
- `float64`.
-* <b>`imag`</b>: A `Tensor`. Must have the same type as `real`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64` or `complex128`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.md
deleted file mode 100644
index 9589179897..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.md
+++ /dev/null
@@ -1,111 +0,0 @@
-StochasticTensor is a BaseStochasticTensor backed by a distribution.
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.__init__(dist, name='StochasticTensor', dist_value_type=None, loss_fn=score_function)` {#StochasticTensor.__init__}
-
-Construct a `StochasticTensor`.
-
-`StochasticTensor` is backed by the `dist` distribution and its `value`
-method will return the same value each time it is called. What `value` is
-returned is controlled by the `dist_value_type` (defaults to
-`SampleValue`).
-
-Some distributions' sample functions are not differentiable (e.g. a sample
-from a discrete distribution like a Bernoulli) and so to differentiate
-wrt parameters upstream of the sample requires a gradient estimator like
-the score function estimator. This is accomplished by passing a
-differentiable `loss_fn` to the `StochasticTensor`, which
-defaults to a function whose derivative is the score function estimator.
-Calling `stochastic_graph.surrogate_loss(final_losses)` will call
-`loss()` on every `StochasticTensor` upstream of final losses.
-
-`loss()` will return None for `StochasticTensor`s backed by
-reparameterized distributions; it will also return None if the value type is
-`MeanValueType` or if `loss_fn=None`.
-
-##### Args:
-
-
-* <b>`dist`</b>: an instance of `Distribution`.
-* <b>`name`</b>: a name for this `StochasticTensor` and its ops.
-* <b>`dist_value_type`</b>: a `_StochasticValueType`, which will determine what the
- `value` of this `StochasticTensor` will be. If not provided, the
- value type set with the `value_type` context manager will be used.
-* <b>`loss_fn`</b>: callable that takes
- `(st, st.value(), influenced_loss)`, where
- `st` is this `StochasticTensor`, and returns a `Tensor` loss. By
- default, `loss_fn` is the `score_function`, or more precisely, the
- integral of the score function, such that when the gradient is taken,
- the score function results. See the `stochastic_gradient_estimators`
- module for additional loss functions and baselines.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `dist` is not an instance of `Distribution`.
-* <b>`TypeError`</b>: if `loss_fn` is not `callable`.
-
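-A minimal construction sketch (the standard Normal and the `tf.identity` call
-are illustrative; `loc`/`scale` follow the TF 1.x `Normal` parameterization):
-
-```python
-sts = tf.contrib.bayesflow.stochastic_tensor
-ds = tf.contrib.distributions
-
-with sts.value_type(sts.SampleValue()):
-  z = sts.StochasticTensor(ds.Normal(loc=0.0, scale=1.0))
-x = tf.identity(z)  # z converts to a Tensor holding one sample
-```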
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.distribution` {#StochasticTensor.distribution}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.dtype` {#StochasticTensor.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.entropy(name='entropy')` {#StochasticTensor.entropy}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.graph` {#StochasticTensor.graph}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.loss(final_loss, name='Loss')` {#StochasticTensor.loss}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.mean(name='mean')` {#StochasticTensor.mean}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.name` {#StochasticTensor.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.value(name='value')` {#StochasticTensor.value}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.value_type` {#StochasticTensor.value_type}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.bayesflow.variational_inference.ELBOForms.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.bayesflow.variational_inference.ELBOForms.md
deleted file mode 100644
index 2d488ac3d0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.bayesflow.variational_inference.ELBOForms.md
+++ /dev/null
@@ -1,18 +0,0 @@
-Constants to control the `elbo` calculation.
-
-`analytic_kl` uses the analytic KL divergence between the
-variational distribution(s) and the prior(s).
-
-`analytic_entropy` uses the analytic entropy of the variational
-distribution(s).
-
-`sample` uses the sample KL, or the sample entropy if the joint is provided.
-
-See `elbo` for what is used with `default`.
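-
-A hypothetical call (assuming `elbo` accepts a `form` keyword and `log_p` is a
-previously built log-likelihood `Tensor`):
-
-```python
-vi = tf.contrib.bayesflow.variational_inference
-loss = -vi.elbo(log_likelihood=log_p, form=vi.ELBOForms.analytic_kl)
-```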
-- - -
-
-#### `tf.contrib.bayesflow.variational_inference.ELBOForms.check_form(form)` {#ELBOForms.check_form}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.crf.viterbi_decode.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.crf.viterbi_decode.md
deleted file mode 100644
index d0ebb5bab6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.crf.viterbi_decode.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.crf.viterbi_decode(score, transition_params)` {#viterbi_decode}
-
-Decode the highest scoring sequence of tags outside of TensorFlow.
-
-This should only be used at test time.
-
-##### Args:
-
-
-* <b>`score`</b>: A [seq_len, num_tags] matrix of unary potentials.
-* <b>`transition_params`</b>: A [num_tags, num_tags] matrix of binary potentials.
-
-##### Returns:
-
-
-* <b>`viterbi`</b>: A [seq_len] list of integers containing the highest scoring tag
-  indices.
-* <b>`viterbi_score`</b>: A float containing the score for the Viterbi sequence.
-
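-A decoding sketch (the potentials are illustrative; this function runs in
-NumPy, not in the TF graph):
-
-```python
-import numpy as np
-score = np.array([[2.0, 0.0],
-                  [0.0, 1.0]])   # [seq_len=2, num_tags=2] unary potentials
-trans = np.array([[1.0, -1.0],
-                  [-1.0, 1.0]])  # [num_tags, num_tags] binary potentials
-tags, tag_score = tf.contrib.crf.viterbi_decode(score, trans)
-```
-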
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.bijector.Invert.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.bijector.Invert.md
deleted file mode 100644
index ceafe4514f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.bijector.Invert.md
+++ /dev/null
@@ -1,307 +0,0 @@
-Bijector which inverts another Bijector.
-
-Example Use: [ExpGammaDistribution (see Background & Context)](
-https://reference.wolfram.com/language/ref/ExpGammaDistribution.html)
-models `Y=log(X)` where `X ~ Gamma`.
-
-```python
-exp_gamma_distribution = TransformedDistribution(
-    distribution=Gamma(concentration=1., rate=2.),
-    bijector=bijector.Invert(bijector.Exp()))
-```
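-
-The inversion swaps the two directions (a sketch; `x` is any positive `Tensor`):
-
-```python
-inv_exp = bijector.Invert(bijector.Exp())
-y = inv_exp.forward(x)   # == Exp().inverse(x) == log(x)
-x2 = inv_exp.inverse(y)  # == Exp().forward(y) == exp(y)
-```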
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.__init__(bijector, validate_args=False, name=None)` {#Invert.__init__}
-
-Creates a `Bijector` which swaps the meaning of `inverse` and `forward`.
-
-Note: An inverted bijector's `inverse_log_det_jacobian` is often more
-efficient if the base bijector implements `_forward_log_det_jacobian`. If
-`_forward_log_det_jacobian` is not implemented then the following code is
-used:
-
-```python
-y = self.inverse(x, **kwargs)
-return -self.inverse_log_det_jacobian(y, **kwargs)
-```
-
-##### Args:
-
-
-* <b>`bijector`</b>: Bijector instance.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str`, name given to ops managed by this object.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.bijector` {#Invert.bijector}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.dtype` {#Invert.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.event_ndims` {#Invert.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.forward(x, name='forward')` {#Invert.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.forward_event_shape(input_shape)` {#Invert.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Invert.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Invert.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.graph_parents` {#Invert.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse(y, name='inverse')` {#Invert.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Invert.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse_event_shape(output_shape)` {#Invert.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Invert.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Invert.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.is_constant_jacobian` {#Invert.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.name` {#Invert.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Invert.validate_args` {#Invert.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.bijector.Softplus.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.bijector.Softplus.md
deleted file mode 100644
index 49527a0aea..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.bijector.Softplus.md
+++ /dev/null
@@ -1,295 +0,0 @@
-Bijector which computes `Y = g(X) = Log[1 + exp(X)]`.
-
-The softplus `Bijector` has the following two useful properties:
-
-* The domain is the positive real numbers
-* `softplus(x) approx x`, for large `x`, so it does not overflow as easily as
- the `Exp` `Bijector`.
-
- Example Use:
-
- ```python
- # Create the Y=g(X)=softplus(X) transform which works only on Tensors with 1
- # batch ndim and 2 event ndims (i.e., vector of matrices).
- softplus = Softplus(event_ndims=2)
- x = [[[1., 2],
- [3, 4]],
- [[5, 6],
- [7, 8]]]
- log(1 + exp(x)) == softplus.forward(x)
- log(exp(x) - 1) == softplus.inverse(x)
- ```
-
- Note: log(.) and exp(.) are applied element-wise but the Jacobian is a
- reduction over the event space.
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.__init__(event_ndims=0, validate_args=False, name='softplus')` {#Softplus.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.dtype` {#Softplus.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.event_ndims` {#Softplus.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.forward(x, name='forward')` {#Softplus.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.forward_event_shape(input_shape)` {#Softplus.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Softplus.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Softplus.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.graph_parents` {#Softplus.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse(y, name='inverse')` {#Softplus.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Softplus.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse_event_shape(output_shape)` {#Softplus.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Softplus.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Softplus.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.is_constant_jacobian` {#Softplus.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.name` {#Softplus.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Softplus.validate_args` {#Softplus.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.normal_conjugates_known_scale_predictive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.normal_conjugates_known_scale_predictive.md
deleted file mode 100644
index fdbae8aaf4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.distributions.normal_conjugates_known_scale_predictive.md
+++ /dev/null
@@ -1,55 +0,0 @@
-### `tf.contrib.distributions.normal_conjugates_known_scale_predictive(prior, scale, s, n)` {#normal_conjugates_known_scale_predictive}
-
-Posterior predictive Normal distribution with a conjugate prior on the mean.
-
-This model assumes that `n` observations (with sum `s`) come from a
-Normal with unknown mean `loc` (described by the Normal `prior`)
-and known variance `scale**2`. The "known scale predictive"
-is the distribution of new observations, conditioned on the existing
-observations and our prior.
-
-Accepts a prior Normal distribution object, having parameters
-`loc0` and `scale0`, as well as known `scale` values of the predictive
-distribution(s) (also assumed Normal),
-and statistical estimates `s` (the sum(s) of the observations) and
-`n` (the number(s) of observations).
-
-Calculates the Normal distribution(s) `p(x | sigma**2)`:
-
-```
-p(x | sigma**2) = int N(x | mu, sigma**2)N(mu | prior.loc, prior.scale**2) dmu
-                = N(x | prior.loc, sigma**2 + prior.scale**2)
-```
-
-Returns the predictive posterior distribution object, with parameters
-`(loc', scale'**2)`, where:
-
-```
-sigma_n**2 = 1/(1/sigma0**2 + n/sigma**2),
-mu' = (mu0/sigma0**2 + s/sigma**2) * sigma_n**2,
-sigma'**2 = sigma_n**2 + sigma**2.
-```
-
-Distribution parameters from `prior`, as well as `scale`, `s`, and `n`,
-will broadcast in the case of multidimensional sets of parameters.
-
-##### Args:
-
-
-* <b>`prior`</b>: `Normal` object of type `dtype`:
- the prior distribution having parameters `(loc0, scale0)`.
-* <b>`scale`</b>: tensor of type `dtype`, taking values `scale > 0`.
- The known stddev parameter(s).
-* <b>`s`</b>: Tensor of type `dtype`. The sum(s) of observations.
-* <b>`n`</b>: Tensor of type `int`. The number(s) of observations.
-
-##### Returns:
-
- A new Normal predictive distribution object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if dtype of `s` does not match `dtype`, or `prior` is not a
- Normal object.
-
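-A usage sketch (three observations summing to 9.0, with known likelihood
-scale 1.0; the values are illustrative):
-
-```python
-ds = tf.contrib.distributions
-prior = ds.Normal(loc=0.0, scale=1.0)
-predictive = ds.normal_conjugates_known_scale_predictive(
-    prior=prior, scale=1.0, s=9.0, n=3)
-```
-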
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.VariableDeviceChooser.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.VariableDeviceChooser.md
deleted file mode 100644
index 0b25e06bae..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.VariableDeviceChooser.md
+++ /dev/null
@@ -1,36 +0,0 @@
-Device chooser for variables.
-
-When using a parameter server, variables are assigned to parameter-server
-tasks in a round-robin fashion. When not using a parameter server, it allows
-GPU or CPU placement.
-- - -
-
-#### `tf.contrib.framework.VariableDeviceChooser.__call__(op)` {#VariableDeviceChooser.__call__}
-
-
-
-
-- - -
-
-#### `tf.contrib.framework.VariableDeviceChooser.__init__(num_tasks=0, job_name='ps', device_type='CPU', device_index=0)` {#VariableDeviceChooser.__init__}
-
-Initialize VariableDeviceChooser.
-
-##### Usage:
-
- To use with 2 parameter servers:
- VariableDeviceChooser(2)
-
- To use without parameter servers:
- VariableDeviceChooser()
- VariableDeviceChooser(device_type='GPU') # For GPU placement
-
-##### Args:
-
-
-* <b>`num_tasks`</b>: number of tasks.
-* <b>`job_name`</b>: String, a name for the parameter server job.
-* <b>`device_type`</b>: Optional device type string (e.g. "CPU" or "GPU")
-* <b>`device_index`</b>: int. Optional device index. If left
- unspecified, device represents 'any' device_index.
-
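-A placement sketch (two parameter-server tasks are assumed):
-
-```python
-chooser = tf.contrib.framework.VariableDeviceChooser(num_tasks=2)
-with tf.device(chooser):
-  weights = tf.Variable(tf.zeros([10]))  # assigned to a ps task, round-robin
-```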
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.add_arg_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.add_arg_scope.md
deleted file mode 100644
index a726ebad96..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.add_arg_scope.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.contrib.framework.add_arg_scope(func)` {#add_arg_scope}
-
-Decorates a function with args so it can be used within an arg_scope.
-
-##### Args:
-
-
-* <b>`func`</b>: function to decorate.
-
-##### Returns:
-
-  The decorated function `func_with_args()`.
-
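-A minimal sketch (the layer `scale_layer` is hypothetical):
-
-```python
-@tf.contrib.framework.add_arg_scope
-def scale_layer(x, scale=1.0):
-  return x * scale
-
-with tf.contrib.framework.arg_scope([scale_layer], scale=2.0):
-  y = scale_layer(tf.constant(3.0))  # picks up scale=2.0 from the scope
-```
-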
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.load_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.load_variable.md
deleted file mode 100644
index 410b18e466..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.framework.load_variable.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.framework.load_variable(checkpoint_dir, name)` {#load_variable}
-
-Returns a Tensor with the contents of the given variable in the checkpoint.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory with checkpoints file or path to checkpoint.
-* <b>`name`</b>: Name of the tensor to return.
-
-##### Returns:
-
- `Tensor` object.
-
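-For example (the checkpoint directory and variable name are hypothetical):
-
-```python
-kernel = tf.contrib.framework.load_variable("/tmp/model_dir", "layer1/weights")
-```
-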
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.OpMatcher.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.OpMatcher.md
deleted file mode 100644
index 61931afde5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.OpMatcher.md
+++ /dev/null
@@ -1,36 +0,0 @@
-Graph match class.
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.__call__(op)` {#OpMatcher.__call__}
-
-Evaluate if the op matches or not.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.__init__(positive_filter)` {#OpMatcher.__init__}
-
-Graph match constructor.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.control_input_ops(*args)` {#OpMatcher.control_input_ops}
-
-Add control-input matches.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.input_ops(*args)` {#OpMatcher.input_ops}
-
-Add input matches.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.OpMatcher.output_ops(*args)` {#OpMatcher.output_ops}
-
-Add output matches.
-
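-A matching sketch (assuming `positive_filter` is a predicate over ops):
-
-```python
-matcher = tf.contrib.graph_editor.OpMatcher(lambda op: op.type == "Add")
-add_ops = [op for op in tf.get_default_graph().get_operations()
-           if matcher(op)]
-```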
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.get_consuming_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.get_consuming_ops.md
deleted file mode 100644
index 2db80c07ad..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.get_consuming_ops.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.graph_editor.get_consuming_ops(ts)` {#get_consuming_ops}
-
-Return all the consuming ops of the tensors in ts.
-
-##### Args:
-
-
-* <b>`ts`</b>: a list of `tf.Tensor`
-
-##### Returns:
-
- A list of all the consuming `tf.Operation` of the tensors in `ts`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ts cannot be converted to a list of `tf.Tensor`.
-
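-For example:
-
-```python
-a = tf.constant(1.0)
-b = a + 1.0
-tf.contrib.graph_editor.get_consuming_ops([a])  # ==> [b.op]
-```
-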
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.graph_replace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.graph_replace.md
deleted file mode 100644
index 56143111a5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.graph_replace.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.contrib.graph_editor.graph_replace(target_ts, replacement_ts, dst_scope='', src_scope='', reuse_dst_scope=False)` {#graph_replace}
-
-Create a new graph which computes the targets from the replaced Tensors.
-
-##### Args:
-
-
-* <b>`target_ts`</b>: a single tf.Tensor or an iterable of tf.Tensor.
-* <b>`replacement_ts`</b>: dictionary mapping from original tensors to replaced tensors
-* <b>`dst_scope`</b>: the destination scope.
-* <b>`src_scope`</b>: the source scope.
-* <b>`reuse_dst_scope`</b>: if True the dst_scope is re-used if it already exists.
- Otherwise, the scope is given a unique name based on the one given
- by appending an underscore followed by a digit (default).
-
-##### Returns:
-
- A single tf.Tensor or a list of target tf.Tensor, depending on
- the type of the input argument `target_ts`.
- The returned tensors are recomputed using the tensors from replacement_ts.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the targets are not connected to replacement_ts.
-
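-A replacement sketch:
-
-```python
-p = tf.placeholder(tf.float32)
-out = p * 2.0
-c = tf.constant(3.0)
-new_out = tf.contrib.graph_editor.graph_replace(out, {p: c})
-# `new_out` recomputes `out` with `c` in place of `p`, so it evaluates to 6.0.
-```
-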
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.make_list_of_op.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.make_list_of_op.md
deleted file mode 100644
index 61273bffc3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.make_list_of_op.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.contrib.graph_editor.make_list_of_op(ops, check_graph=True, allow_graph=True, ignore_ts=False)` {#make_list_of_op}
-
-Convert ops to a list of `tf.Operation`.
-
-##### Args:
-
-
-* <b>`ops`</b>: can be an iterable of `tf.Operation`, a `tf.Graph` or a single
- operation.
-* <b>`check_graph`</b>: if `True` check if all the operations belong to the same graph.
-* <b>`allow_graph`</b>: if `False` a `tf.Graph` cannot be converted.
-* <b>`ignore_ts`</b>: if True, silently ignore `tf.Tensor`.
-
-##### Returns:
-
- A newly created list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ops cannot be converted to a list of `tf.Operation` or,
- if `check_graph` is `True`, if all the ops do not belong to the
- same graph.
-
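-For example, converting a whole graph to its list of operations:
-
-```python
-ops = tf.contrib.graph_editor.make_list_of_op(tf.get_default_graph())
-```
-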
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.reroute_outputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.reroute_outputs.md
deleted file mode 100644
index a00fe8e589..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.reroute_outputs.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.graph_editor.reroute_outputs(sgv0, sgv1)` {#reroute_outputs}
-
-Re-route all the outputs of sgv0 to sgv1 (see _reroute_outputs).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.sgv_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.sgv_scope.md
deleted file mode 100644
index 3be069140d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.graph_editor.sgv_scope.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.graph_editor.sgv_scope(scope, graph)` {#sgv_scope}
-
-Make a subgraph from a name scope.
-
-##### Args:
-
-
-* <b>`scope`</b>: the name of the scope.
-* <b>`graph`</b>: the `tf.Graph`.
-
-##### Returns:
-
- A subgraph view representing the given scope.
-
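-For example (the scope name is illustrative):
-
-```python
-with tf.name_scope("block1"):
-  x = tf.constant(1.0) + 2.0
-sgv = tf.contrib.graph_editor.sgv_scope("block1", graph=tf.get_default_graph())
-```
-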
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.avg_pool2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.avg_pool2d.md
deleted file mode 100644
index d5c41576c9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.avg_pool2d.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.contrib.layers.avg_pool2d(*args, **kwargs)` {#avg_pool2d}
-
-Adds a 2D average pooling op.
-
-The pooling is applied per image, not across the batch or channel dimensions.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D tensor of shape `[batch_size, height, width, channels]` if
- `data_format` is `NHWC`, and `[batch_size, channels, height, width]` if
- `data_format` is `NCHW`.
-* <b>`kernel_size`</b>: A list of length 2: [kernel_height, kernel_width] of the
- pooling kernel over which the op is computed. Can be an int if both
- values are the same.
-* <b>`stride`</b>: A list of length 2: [stride_height, stride_width].
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: The padding method, either 'VALID' or 'SAME'.
-* <b>`data_format`</b>: A string. `NHWC` (default) and `NCHW` are supported.
-* <b>`outputs_collections`</b>: The collections to which the outputs are added.
-* <b>`scope`</b>: Optional scope for name_scope.
-
-##### Returns:
-
- A `Tensor` representing the results of the pooling operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `data_format` is neither `NHWC` nor `NCHW`.
-
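-A shape sketch with the default `'VALID'` padding:
-
-```python
-images = tf.random_normal([8, 28, 28, 3])  # NHWC
-pooled = tf.contrib.layers.avg_pool2d(images, kernel_size=2, stride=2)
-# pooled has shape [8, 14, 14, 3]
-```
-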
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md
deleted file mode 100644
index d6f7271be8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.batch_norm.md
+++ /dev/null
@@ -1,83 +0,0 @@
-### `tf.contrib.layers.batch_norm(*args, **kwargs)` {#batch_norm}
-
-Adds a Batch Normalization layer from http://arxiv.org/abs/1502.03167.
-
- "Batch Normalization: Accelerating Deep Network Training by Reducing
- Internal Covariate Shift"
-
- Sergey Ioffe, Christian Szegedy
-
-Can be used as a normalizer function for conv2d and fully_connected.
-
-Note: when `is_training` is `True`, the `moving_mean` and `moving_variance`
-need to be updated. By default the update ops are placed in
-`tf.GraphKeys.UPDATE_OPS`, so they need to be added as a dependency to the
-`train_op`. For example:
-
-    # `control_flow_ops` is `tensorflow.python.ops.control_flow_ops`.
-    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
-    if update_ops:
-      updates = tf.group(*update_ops)
-      total_loss = control_flow_ops.with_dependencies([updates], total_loss)
-
-One can set `updates_collections=None` to force the updates in place, but that
-can incur a speed penalty, especially in distributed settings.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor with 2 or more dimensions, where the first dimension has
- `batch_size`. The normalization is over all but the last dimension if
- `data_format` is `NHWC` and the second dimension if `data_format` is
- `NCHW`.
-* <b>`decay`</b>: Decay for the moving average. Reasonable values for `decay` are close
- to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc.
- Lower `decay` value (recommend trying `decay`=0.9) if model experiences
- reasonably good training performance but poor validation and/or test
- performance. Try zero_debias_moving_mean=True for improved stability.
-* <b>`center`</b>: If True, add offset of `beta` to normalized tensor. If False, `beta`
- is ignored.
-* <b>`scale`</b>: If True, multiply by `gamma`. If False, `gamma` is
- not used. When the next layer is linear (also e.g. `nn.relu`), this can be
- disabled since the scaling can be done by the next layer.
-* <b>`epsilon`</b>: Small float added to variance to avoid dividing by zero.
-* <b>`activation_fn`</b>: Activation function, default set to None to skip it and
- maintain a linear activation.
-* <b>`param_initializers`</b>: Optional initializers for beta, gamma, moving mean and
- moving variance.
-* <b>`updates_collections`</b>: Collections to collect the update ops for computation.
- The updates_ops need to be executed with the train_op.
- If None, a control dependency would be added to make sure the updates are
- computed in place.
-* <b>`is_training`</b>: Whether or not the layer is in training mode. In training mode
- it would accumulate the statistics of the moments into `moving_mean` and
- `moving_variance` using an exponential moving average with the given
- `decay`. When it is not in training mode then it would use the values of
- the `moving_mean` and the `moving_variance`.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional collections for the variables.
-* <b>`outputs_collections`</b>: Collections to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`batch_weights`</b>: An optional tensor of shape `[batch_size]`,
- containing a frequency weight for each batch item. If present,
- then the batch normalization uses weighted mean and
- variance. (This can be used to correct for bias in training
- example selection.)
-* <b>`fused`</b>: Use nn.fused_batch_norm if True, nn.batch_normalization otherwise.
-* <b>`data_format`</b>: A string. `NHWC` (default) and `NCHW` are supported.
-* <b>`zero_debias_moving_mean`</b>: Use zero_debias for moving_mean. It creates a new
- pair of variables 'moving_mean/biased' and 'moving_mean/local_step'.
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `batch_weights` is not None and `fused` is True.
-* <b>`ValueError`</b>: If `data_format` is neither `NHWC` nor `NCHW`.
-* <b>`ValueError`</b>: If the rank of `inputs` is undefined.
-* <b>`ValueError`</b>: If rank or channels dimension of `inputs` is undefined.
-
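-A common usage sketch as a normalizer for `conv2d` (`inputs` and `is_training`
-are assumed to exist):
-
-```python
-net = tf.contrib.layers.conv2d(
-    inputs, 64, 3,
-    normalizer_fn=tf.contrib.layers.batch_norm,
-    normalizer_params={'is_training': is_training})
-```
-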
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.check_feature_columns.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.check_feature_columns.md
deleted file mode 100644
index 4b725e57d8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.check_feature_columns.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.contrib.layers.check_feature_columns(feature_columns)` {#check_feature_columns}
-
-Checks the validity of the set of FeatureColumns.
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable of instances or subclasses of FeatureColumn.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `feature_columns` is a dict.
-* <b>`ValueError`</b>: If there are duplicate feature column keys.
-
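-For example:
-
-```python
-cols = [tf.contrib.layers.real_valued_column("age"),
-        tf.contrib.layers.real_valued_column("height")]
-tf.contrib.layers.check_feature_columns(cols)  # passes; raises on duplicates
-```
-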
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.one_hot_column.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.one_hot_column.md
deleted file mode 100644
index b79159f798..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.one_hot_column.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.contrib.layers.one_hot_column(sparse_id_column)` {#one_hot_column}
-
-Creates an `_OneHotColumn` for a one-hot or multi-hot repr in a DNN.
-
-##### Args:
-
-
-* <b>`sparse_id_column`</b>: A _SparseColumn which is created by
- `sparse_column_with_*`
- or crossed_column functions. Note that `combiner` defined in
- `sparse_id_column` is ignored.
-
-##### Returns:
-
- An _OneHotColumn.
-
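-For example (the column name and bucket size are illustrative):
-
-```python
-color = tf.contrib.layers.sparse_column_with_hash_bucket(
-    "color", hash_bucket_size=10)
-color_one_hot = tf.contrib.layers.one_hot_column(color)
-```
-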
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.regression_target.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.regression_target.md
deleted file mode 100644
index a75a8a8f74..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.regression_target.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.contrib.layers.regression_target(*args, **kwargs)` {#regression_target}
-
-Creates a _TargetColumn for linear regression. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-12.
-Instructions for updating:
-This file will be removed after the deprecation date. Please switch to
-third_party/tensorflow/contrib/learn/python/learn/estimators/head.py
-
-##### Args:
-
-
-* <b>`label_name`</b>: String, name of the key in label dict. Can be `None` if label
-  is a tensor (single headed models).
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`label_dimension`</b>: dimension of the target for multilabels.
-
-##### Returns:
-
- An instance of _TargetColumn
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.unit_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.unit_norm.md
deleted file mode 100644
index 9b0752ff7f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.layers.unit_norm.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.layers.unit_norm(*args, **kwargs)` {#unit_norm}
-
-Normalizes the given input across the specified dimension to unit length.
-
-Note that the rank of `input` must be known.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of arbitrary size.
-* <b>`dim`</b>: The dimension along which the input is normalized.
-* <b>`epsilon`</b>: A small value to add to the inputs to avoid dividing by zero.
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- The normalized `Tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If dim is smaller than the number of dimensions in 'inputs'.
-
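-For example:
-
-```python
-x = tf.constant([[3.0, 4.0]])
-tf.contrib.layers.unit_norm(x, dim=1)  # ==> [[0.6, 0.8]]
-```
-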
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.DNNClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.DNNClassifier.md
deleted file mode 100644
index f58904ed71..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.DNNClassifier.md
+++ /dev/null
@@ -1,467 +0,0 @@
-A classifier for TensorFlow DNN models.
-
-Example:
-
-```python
-sparse_feature_a = sparse_column_with_hash_bucket(...)
-sparse_feature_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
- ...)
-sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
- ...)
-
-estimator = DNNClassifier(
- feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- hidden_units=[1024, 512, 256])
-
-# Or estimator using the ProximalAdagradOptimizer optimizer with
-# regularization.
-estimator = DNNClassifier(
- feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- hidden_units=[1024, 512, 256],
- optimizer=tf.train.ProximalAdagradOptimizer(
- learning_rate=0.1,
- l1_regularization_strength=0.001
- ))
-
-# Input builders
-def input_fn_train():  # returns x, y (where y represents label's class index).
-  pass
-estimator.fit(input_fn=input_fn_train)
-
-def input_fn_eval():  # returns x, y (where y represents label's class index).
-  pass
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x)  # returns predicted labels (i.e. label's class index).
-```
-
-Input of `fit` and `evaluate` should have following features,
- otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`, a feature with
- `key=weight_column_name` whose value is a `Tensor`.
-* for each `column` in `feature_columns`:
- - if `column` is a `SparseColumn`, a feature with `key=column.name`
- whose `value` is a `SparseTensor`.
- - if `column` is a `WeightedSparseColumn`, two features: the first with
- `key` the id column name, the second with `key` the weight column name.
- Both features' `value` must be a `SparseTensor`.
- - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
- whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.__init__(hidden_units, feature_columns, model_dir=None, n_classes=2, weight_column_name=None, optimizer=None, activation_fn=relu, dropout=None, gradient_clip_norm=None, enable_centered_bias=False, config=None, feature_engineering_fn=None, embedding_lr_multipliers=None, input_layer_min_slice_size=None)` {#DNNClassifier.__init__}
-
-Initializes a DNNClassifier instance.
-
-##### Args:
-
-
-* <b>`hidden_units`</b>: List of hidden units per layer. All layers are fully
- connected. Ex. `[64, 32]` means first layer has 64 nodes and second one
- has 32.
-* <b>`feature_columns`</b>: An iterable containing all the feature columns used by
- the model. All items in the set should be instances of classes derived
- from `FeatureColumn`.
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator to
-  continue training a previously saved model.
-* <b>`n_classes`</b>: number of label classes. Default is binary classification.
- It must be greater than 1. Note: Class labels are integers representing
- the class index (i.e. values from 0 to n_classes-1). For arbitrary
- label values (e.g. string labels), convert to class indices first.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training. It
- will be multiplied by the loss of the example.
-* <b>`optimizer`</b>: An instance of `tf.Optimizer` used to train the model. If
- `None`, will use an Adagrad optimizer.
-* <b>`activation_fn`</b>: Activation function applied to each layer. If `None`, will
- use `tf.nn.relu`.
-* <b>`dropout`</b>: When not `None`, the probability we will drop out a given
- coordinate.
-* <b>`gradient_clip_norm`</b>: A float > 0. If provided, gradients are
- clipped to their global norm with this clipping ratio. See
- `tf.clip_by_global_norm` for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`config`</b>: `RunConfig` object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-* <b>`embedding_lr_multipliers`</b>: Optional. A dictionary from `EmbeddingColumn` to
- a `float` multiplier. Multiplier will be used to multiply with
- learning rate for the embedding variables.
-* <b>`input_layer_min_slice_size`</b>: Optional. The min slice size of input layer
- partitions. If not provided, will use the default of 64M.
-
-##### Returns:
-
- A `DNNClassifier` estimator.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `n_classes` < 2.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.__repr__()` {#DNNClassifier.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.bias_` {#DNNClassifier.bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.config` {#DNNClassifier.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.evaluate(*args, **kwargs)` {#DNNClassifier.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
- `input_fn` or `feed_fn` is provided.
- Or if `metrics` is not `None` or `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#DNNClassifier.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#DNNClassifier.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
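-A minimal sketch, assuming the `feature_columns` used in training (the helper
-locations are assumptions based on the contrib package layout):
-
-```python
-from tensorflow.contrib.layers import create_feature_spec_for_parsing
-from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
-
-# Build a serving input_fn that parses tf.Example protos at serving time.
-feature_spec = create_feature_spec_for_parsing(feature_columns)
-serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)
-export_dir = classifier.export_savedmodel('/tmp/exports', serving_input_fn)
-```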
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.fit(*args, **kwargs)` {#DNNClassifier.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.get_params(deep=True)` {#DNNClassifier.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.get_variable_names()` {#DNNClassifier.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.get_variable_value(name)` {#DNNClassifier.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.model_dir` {#DNNClassifier.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.partial_fit(*args, **kwargs)` {#DNNClassifier.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model is taking a long
-time to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
- iterator that returns arrays of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
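-A hypothetical out-of-core sketch (`chunk_iterator` is an assumed generator
-yielding numpy array pairs):
-
-```python
-for x_chunk, y_chunk in chunk_iterator():
-  classifier.partial_fit(x=x_chunk, y=y_chunk, steps=10)
-```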
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.predict(*args, **kwargs)` {#DNNClassifier.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_classes, or set `outputs` argument.
-
-By default, returns predicted classes. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_classes` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns classes.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
- If `outputs` is set, returns a dict of predictions.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.predict_classes(*args, **kwargs)` {#DNNClassifier.predict_classes}
-
-Returns predicted classes for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
-
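-A minimal streaming sketch (assumes `predict_input_fn` raises end-of-input
-after one epoch, e.g. built with num_epochs=1):
-
-```python
-for class_idx in classifier.predict_classes(input_fn=predict_input_fn,
-                                            as_iterable=True):
-  print(class_idx)
-```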
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.predict_proba(*args, **kwargs)` {#DNNClassifier.predict_proba}
-
-Returns predicted probabilities for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, `x` must be `None`.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted probabilities with shape [batch_size, n_classes]
- (or an iterable of predicted probabilities if as_iterable is True).
-
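-A minimal binary-classification sketch of thresholding the positive-class
-probability (`eval_input_fn` is an assumed input function):
-
-```python
-probs = classifier.predict_proba(input_fn=eval_input_fn, as_iterable=False)
-is_positive = probs[:, 1] > 0.5
-```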
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.set_params(**params)` {#DNNClassifier.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
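-A hypothetical sketch of the nested form (component and parameter names are
-illustrative):
-
-```python
-pipeline.set_params(classifier__learning_rate=0.01)
-```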
-
-- - -
-
-#### `tf.contrib.learn.DNNClassifier.weights_` {#DNNClassifier.weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.DNNLinearCombinedClassifier.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.DNNLinearCombinedClassifier.md
deleted file mode 100644
index 0c82c24515..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.DNNLinearCombinedClassifier.md
+++ /dev/null
@@ -1,493 +0,0 @@
-A classifier for TensorFlow Linear and DNN joined training models.
-
-Example:
-
-```python
-sparse_feature_a = sparse_column_with_hash_bucket(...)
-sparse_feature_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_x_sparse_feature_b = crossed_column(...)
-
-sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
- ...)
-sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
- ...)
-
-estimator = DNNLinearCombinedClassifier(
- # common settings
- n_classes=n_classes,
- weight_column_name=weight_column_name,
- # wide settings
- linear_feature_columns=[sparse_feature_a_x_sparse_feature_b],
- linear_optimizer=tf.train.FtrlOptimizer(...),
- # deep settings
- dnn_feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- dnn_hidden_units=[1000, 500, 100],
- dnn_optimizer=tf.train.AdagradOptimizer(...))
-
-# Input builders
-def input_fn_train():  # returns x, y (where y represents label's class index).
-  ...
-def input_fn_eval():  # returns x, y (where y represents label's class index).
-  ...
-estimator.fit(input_fn=input_fn_train)
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x) # returns predicted labels (i.e. label's class index).
-```
-
-Input of `fit` and `evaluate` should have the following features,
- otherwise there will be a `KeyError`:
- if `weight_column_name` is not `None`, a feature with
- `key=weight_column_name` whose value is a `Tensor`.
- for each `column` in `dnn_feature_columns` + `linear_feature_columns`:
- - if `column` is a `SparseColumn`, a feature with `key=column.name`
- whose `value` is a `SparseTensor`.
- - if `column` is a `WeightedSparseColumn`, two features: the first with
- `key` the id column name, the second with `key` the weight column name.
- Both features' `value` must be a `SparseTensor`.
- - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
- whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.__init__(model_dir=None, n_classes=2, weight_column_name=None, linear_feature_columns=None, linear_optimizer=None, _joint_linear_weights=False, dnn_feature_columns=None, dnn_optimizer=None, dnn_hidden_units=None, dnn_activation_fn=relu, dnn_dropout=None, gradient_clip_norm=None, enable_centered_bias=False, config=None, feature_engineering_fn=None, embedding_lr_multipliers=None, input_layer_min_slice_size=None)` {#DNNLinearCombinedClassifier.__init__}
-
-Constructs a DNNLinearCombinedClassifier instance.
-
-##### Args:
-
-
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
- also be used to load checkpoints from the directory into an estimator
- to continue training a previously saved model.
-* <b>`n_classes`</b>: Number of label classes. Defaults to 2 (binary
- classification). Note that class labels are integers representing the class
- index (i.e. values from 0 to n_classes-1). For arbitrary label values
- (e.g. string labels), convert to class indices first.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
- weights. It is used to down weight or boost examples during training.
- It will be multiplied by the loss of the example.
-* <b>`linear_feature_columns`</b>: An iterable containing all the feature columns
- used by linear part of the model. All items in the set must be
- instances of classes derived from `FeatureColumn`.
-* <b>`linear_optimizer`</b>: An instance of `tf.Optimizer` used to apply gradients to
- the linear part of the model. If `None`, will use an FTRL optimizer.
-* <b>`_joint_linear_weights`</b>: If True, a single (possibly partitioned) variable
- will be used to store the linear model weights. It's faster, but requires
- all columns to be sparse and to use the 'sum' combiner.
-
-* <b>`dnn_feature_columns`</b>: An iterable containing all the feature columns used
- by deep part of the model. All items in the set must be instances of
- classes derived from `FeatureColumn`.
-* <b>`dnn_optimizer`</b>: An instance of `tf.Optimizer` used to apply gradients to
- the deep part of the model. If `None`, will use an Adagrad optimizer.
-* <b>`dnn_hidden_units`</b>: List of hidden units per layer. All layers are fully
- connected.
-* <b>`dnn_activation_fn`</b>: Activation function applied to each layer. If `None`,
- will use `tf.nn.relu`.
-* <b>`dnn_dropout`</b>: When not `None`, the probability that a given coordinate
- will be dropped out.
-* <b>`gradient_clip_norm`</b>: A float > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- tf.clip_by_global_norm for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, the estimator will learn a
- centered bias variable for each class. The rest of the model structure
- learns the residual after the centered bias.
-* <b>`config`</b>: RunConfig object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-* <b>`embedding_lr_multipliers`</b>: Optional. A dictionary from `EmbeddingColumn` to
- a `float` multiplier. Multiplier will be used to multiply with
- learning rate for the embedding variables.
-* <b>`input_layer_min_slice_size`</b>: Optional. The min slice size of input layer
- partitions. If not provided, will use the default of 64M.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `n_classes` < 2.
-* <b>`ValueError`</b>: If both `linear_feature_columns` and `dnn_feature_columns`
- are empty at the same time.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.__repr__()` {#DNNLinearCombinedClassifier.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.config` {#DNNLinearCombinedClassifier.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.dnn_bias_` {#DNNLinearCombinedClassifier.dnn_bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.dnn_weights_` {#DNNLinearCombinedClassifier.dnn_weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.evaluate(*args, **kwargs)` {#DNNLinearCombinedClassifier.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one
- of `input_fn` or `feed_fn` is provided; or if `metrics` is neither `None`
- nor a `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#DNNLinearCombinedClassifier.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#DNNLinearCombinedClassifier.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.fit(*args, **kwargs)` {#DNNLinearCombinedClassifier.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.get_params(deep=True)` {#DNNLinearCombinedClassifier.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.get_variable_names()` {#DNNLinearCombinedClassifier.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.get_variable_value(name)` {#DNNLinearCombinedClassifier.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.linear_bias_` {#DNNLinearCombinedClassifier.linear_bias_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.linear_weights_` {#DNNLinearCombinedClassifier.linear_weights_}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-30.
-Instructions for updating:
-This method will be removed after the deprecation date. To inspect variables, use get_variable_names() and get_variable_value().
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.model_dir` {#DNNLinearCombinedClassifier.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.partial_fit(*args, **kwargs)` {#DNNLinearCombinedClassifier.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model is taking a long
-time to converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be an iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be an
- iterator that returns arrays of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.predict(*args, **kwargs)` {#DNNLinearCombinedClassifier.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_classes, or set `outputs` argument.
-
-By default, returns predicted classes. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_classes` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, name of the output to predict.
- If `None`, returns classes.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
- If `outputs` is set, returns a dict of predictions.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.predict_classes(*args, **kwargs)` {#DNNLinearCombinedClassifier.predict_classes}
-
-Returns predicted classes for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted classes with shape [batch_size] (or an iterable
- of predicted classes if as_iterable is True). Each predicted class is
- represented by its class index (i.e. integer from 0 to n_classes-1).
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.predict_proba(*args, **kwargs)` {#DNNLinearCombinedClassifier.predict_proba}
-
-Returns prediction probabilities for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, `x` must be `None`.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted probabilities with shape [batch_size, n_classes]
- (or an iterable of predicted probabilities if as_iterable is True).
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedClassifier.set_params(**params)` {#DNNLinearCombinedClassifier.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.Evaluable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.Evaluable.md
deleted file mode 100644
index 8024c9f7d0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.Evaluable.md
+++ /dev/null
@@ -1,77 +0,0 @@
-Interface for objects that are evaluatable by, e.g., `Experiment`.
-- - -
-
-#### `tf.contrib.learn.Evaluable.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=None, steps=None, metrics=None, name=None, checkpoint_path=None, hooks=None)` {#Evaluable.evaluate}
-
-Evaluates given model with provided evaluation data.
-
-Stop conditions - we evaluate on the given input data until one of the
-following:
-- If `steps` is provided, and `steps` batches of size `batch_size` are
-processed.
-- If `input_fn` is provided, and it raises an end-of-input
-exception (`OutOfRangeError` or `StopIteration`).
-- If `x` is provided, and all items in `x` have been processed.
-
-The return value is a dict containing the metrics specified in `metrics`, as
-well as an entry `global_step` which contains the value of the global step
-for which this evaluation was performed.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...] or dictionary of many
- matrices containing the input samples for fitting the model. Can be an
- iterator that returns arrays of features or a dictionary of arrays of
- features. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs] containing the
- label values (class labels in classification, real numbers in
- regression) or dictionary of multiple vectors/matrices. Can be an iterator
- that returns arrays of targets or a dictionary of arrays of targets. If
- set, `input_fn` must be `None`. Note: For classification, label values must
- be integers representing the class index (i.e. values from 0 to
- n_classes-1).
-* <b>`input_fn`</b>: Input function returning a tuple of:
- features - Dictionary of string feature name to `Tensor` or `Tensor`.
- labels - `Tensor` or dictionary of `Tensor` with labels.
- If input_fn is set, `x`, `y`, and `batch_size` must be `None`. If
- `steps` is not provided, this should raise `OutOfRangeError` or
- `StopIteration` after the desired amount of data (e.g., one epoch) has
- been provided. See "Stop conditions" above for specifics.
-* <b>`feed_fn`</b>: Function creating a feed dict every time it is called. Called
- once per iteration. Must be `None` if `input_fn` is provided.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`, if specified. Must be `None` if `input_fn` is
- provided.
-* <b>`steps`</b>: Number of steps for which to evaluate model. If `None`, evaluate
- until `x` is consumed or `input_fn` raises an end-of-input exception.
- See "Stop conditions" above for specifics.
-* <b>`metrics`</b>: Dict of metrics to run. If None, the default metric functions
- are used; if {}, no metrics are used. Otherwise, `metrics` should map
- friendly names for the metric to a `MetricSpec` object defining which
- model outputs to evaluate against which labels with which metric
- function.
-
- Metric ops should support streaming, e.g., returning `update_op` and
- `value` tensors. For example, see the options defined in
- `../../../metrics/python/ops/metrics_ops.py`.
-
-* <b>`name`</b>: Name of the evaluation if user needs to run multiple evaluations on
- different data sets, such as on training data vs test data.
-* <b>`checkpoint_path`</b>: Path of a specific checkpoint to evaluate. If `None`, the
- latest checkpoint in `model_dir` is used.
-* <b>`hooks`</b>: List of `SessionRunHook` subclass instances. Used for callbacks
- inside the evaluation call.
-
-##### Returns:
-
- Returns `dict` with evaluation results.
-
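-A minimal sketch of a custom `metrics` dict (the estimator and input_fn names
-are illustrative):
-
-```python
-from tensorflow.contrib import metrics as metrics_lib
-from tensorflow.contrib.learn import MetricSpec
-
-results = estimator.evaluate(
-    input_fn=eval_input_fn,
-    metrics={'my_accuracy': MetricSpec(
-        metric_fn=metrics_lib.streaming_accuracy,
-        prediction_key='classes')})
-```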
-
-- - -
-
-#### `tf.contrib.learn.Evaluable.model_dir` {#Evaluable.model_dir}
-
-Returns a path in which the eval process will look for checkpoints.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.ExportStrategy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.ExportStrategy.md
deleted file mode 100644
index 513bb777b2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.ExportStrategy.md
+++ /dev/null
@@ -1,89 +0,0 @@
-A class representing a type of model export.
-
-Typically constructed by a utility function specific to the exporter, such as
-`saved_model_export_utils.make_export_strategy()`.
-
-The fields are:
- name: The directory name under the export base directory where exports of
- this type will be written.
- export_fn: A function that writes an export, given an estimator, a
- destination path, and optionally a checkpoint path and an evaluation
- result for that checkpoint. This export_fn() may be run repeatedly during
- continuous training, or just once at the end of fixed-length training.
- Note the export_fn() may choose whether or not to export based on the eval
- result or based on an internal timer or any other criterion, if exports
- are not desired for every checkpoint.
-
- The signature of this function must be one of:
- * `(estimator, export_path) -> export_path`
- * `(estimator, export_path, checkpoint_path) -> export_path`
- * `(estimator, export_path, checkpoint_path, eval_result) -> export_path`
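-
-A hypothetical construction sketch using the utility mentioned above (the
-import path is an assumption based on the contrib package layout):
-
-```python
-from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
-
-strategy = saved_model_export_utils.make_export_strategy(
-    serving_input_fn=serving_input_fn, exports_to_keep=5)
-```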
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.__getnewargs__()` {#ExportStrategy.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.__getstate__()` {#ExportStrategy.__getstate__}
-
-Excludes the `OrderedDict` from pickling.
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.__new__(_cls, name, export_fn)` {#ExportStrategy.__new__}
-
-Create new instance of ExportStrategy(name, export_fn)
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.__repr__()` {#ExportStrategy.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.export(estimator, export_path, checkpoint_path=None, eval_result=None)` {#ExportStrategy.export}
-
-Exports the given Estimator to a specific format.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the Estimator to export.
-* <b>`export_path`</b>: A string containing a directory where to write the export.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the strategy may locate a checkpoint (e.g. the most recent) by itself.
-* <b>`eval_result`</b>: The output of Estimator.evaluate on this checkpoint. This
- should be set only if checkpoint_path is provided (otherwise it is
- unclear which checkpoint this eval refers to).
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the export_fn does not have the required signature
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.export_fn` {#ExportStrategy.export_fn}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.contrib.learn.ExportStrategy.name` {#ExportStrategy.name}
-
-Alias for field number 0
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.RunConfig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.RunConfig.md
deleted file mode 100644
index c782975fa2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.RunConfig.md
+++ /dev/null
@@ -1,163 +0,0 @@
-This class specifies the configurations for an `Estimator` run.
-
-If you're a Google-internal user using command line flags with
-`learn_runner.py` (for instance, to do distributed training or to use
-parameter servers), you probably want to use `learn_runner.EstimatorConfig`
-instead.
-- - -
-
-#### `tf.contrib.learn.RunConfig.__init__(master=None, num_cores=0, log_device_placement=False, gpu_memory_fraction=1, tf_random_seed=None, save_summary_steps=100, save_checkpoints_secs=600, save_checkpoints_steps=None, keep_checkpoint_max=5, keep_checkpoint_every_n_hours=10000, evaluation_master='')` {#RunConfig.__init__}
-
-Constructor.
-
-Note that the superclass `ClusterConfig` may set properties like
-`cluster_spec`, `is_chief`, `master` (if `None` in the args),
-`num_ps_replicas`, `task_id`, and `task_type` based on the `TF_CONFIG`
-environment variable. See `ClusterConfig` for more details.
-
-##### Args:
-
-
-* <b>`master`</b>: TensorFlow master. Defaults to empty string for local.
-* <b>`num_cores`</b>: Number of cores to be used. If 0, the system picks an
- appropriate number (default: 0).
-* <b>`log_device_placement`</b>: Log the op placement to devices (default: False).
-* <b>`gpu_memory_fraction`</b>: Fraction of GPU memory used by the process on
- each GPU uniformly on the same machine.
-* <b>`tf_random_seed`</b>: Random seed for TensorFlow initializers.
- Setting this value allows consistency between reruns.
-* <b>`save_summary_steps`</b>: Save summaries every this many steps.
-* <b>`save_checkpoints_secs`</b>: Save checkpoints every this many seconds. Can not
- be specified with `save_checkpoints_steps`.
-* <b>`save_checkpoints_steps`</b>: Save checkpoints every this many steps. Can not be
- specified with `save_checkpoints_secs`.
-* <b>`keep_checkpoint_max`</b>: The maximum number of recent checkpoint files to
- keep. As new files are created, older files are deleted. If None or 0,
- all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent
- checkpoint files are kept).
-* <b>`keep_checkpoint_every_n_hours`</b>: Number of hours between each checkpoint
- to be saved. The default value of 10,000 hours effectively disables
- the feature.
-* <b>`evaluation_master`</b>: the master on which to perform evaluation.
-
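-A minimal sketch (values are illustrative):
-
-```python
-config = tf.contrib.learn.RunConfig(
-    save_checkpoints_secs=300, keep_checkpoint_max=3, tf_random_seed=42)
-estimator = tf.contrib.learn.DNNClassifier(
-    hidden_units=[10, 10], feature_columns=feature_columns, config=config)
-```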
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.cluster_spec` {#RunConfig.cluster_spec}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.environment` {#RunConfig.environment}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.evaluation_master` {#RunConfig.evaluation_master}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.get_task_id()` {#RunConfig.get_task_id}
-
-Returns the task index from the `TF_CONFIG` environment variable.
-
-If you have a ClusterConfig instance, you can just access its task_id
-property instead of calling this function and re-parsing the environment
-variable.
-
-##### Returns:
-
- `TF_CONFIG['task']['index']`. Defaults to 0.
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.is_chief` {#RunConfig.is_chief}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.keep_checkpoint_every_n_hours` {#RunConfig.keep_checkpoint_every_n_hours}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.keep_checkpoint_max` {#RunConfig.keep_checkpoint_max}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.master` {#RunConfig.master}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.num_ps_replicas` {#RunConfig.num_ps_replicas}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.save_checkpoints_secs` {#RunConfig.save_checkpoints_secs}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.save_checkpoints_steps` {#RunConfig.save_checkpoints_steps}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.save_summary_steps` {#RunConfig.save_summary_steps}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.task_id` {#RunConfig.task_id}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.task_type` {#RunConfig.task_type}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.tf_config` {#RunConfig.tf_config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.RunConfig.tf_random_seed` {#RunConfig.tf_random_seed}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.evaluate.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.evaluate.md
deleted file mode 100644
index d98fb5061c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.evaluate.md
+++ /dev/null
@@ -1,57 +0,0 @@
-### `tf.contrib.learn.evaluate(*args, **kwargs)` {#evaluate}
-
-Evaluate a model loaded from a checkpoint. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-Given `graph`, a directory to write summaries to (`output_dir`), a checkpoint
-to restore variables from, and a `dict` of `Tensor`s to evaluate, run an eval
-loop for `max_steps` steps, or until an exception (generally, an
-end-of-input signal from a reader operation) is raised from running
-`eval_dict`.
-
-In each step of evaluation, all tensors in the `eval_dict` are evaluated, and
-every `log_every_steps` steps, they are logged. At the very end of evaluation,
-a summary is evaluated (finding the summary ops using `Supervisor`'s logic)
-and written to `output_dir`.
-
-##### Args:
-
-
-* <b>`graph`</b>: A `Graph` to train. It is expected that this graph is not in use
- elsewhere.
-* <b>`output_dir`</b>: A string containing the directory to write a summary to.
-* <b>`checkpoint_path`</b>: A string containing the path to a checkpoint to restore.
- Can be `None` if the graph doesn't require loading any variables.
-* <b>`eval_dict`</b>: A `dict` mapping string names to tensors to evaluate. It is
- evaluated in every logging step. The result of the final evaluation is
- returned. If `update_op` is None, then it's evaluated in every step. If
- `max_steps` is `None`, this should depend on a reader that will raise an
- end-of-input exception when the inputs are exhausted.
-* <b>`update_op`</b>: A `Tensor` which is run in every step.
-* <b>`global_step_tensor`</b>: A `Variable` containing the global step. If `None`,
- one is extracted from the graph using the same logic as in `Supervisor`.
- Used to place eval summaries on training curves.
-* <b>`supervisor_master`</b>: The master string to use when preparing the session.
-* <b>`log_every_steps`</b>: Integer. Output logs every `log_every_steps` evaluation
- steps. The logs contain the `eval_dict` and timing information.
-* <b>`feed_fn`</b>: A function that is called every iteration to produce a `feed_dict`
- passed to `session.run` calls. Optional.
-* <b>`max_steps`</b>: Integer. Evaluate `eval_dict` this many times.
-
-##### Returns:
-
- A tuple `(eval_results, global_step)`:
-
-* <b>`eval_results`</b>: A `dict` mapping `string` to numeric values (`int`, `float`)
- that are the result of running eval_dict in the last step. `None` if no
- eval steps were run.
-* <b>`global_step`</b>: The global step this evaluation corresponds to.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `output_dir` is empty.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.infer_real_valued_columns_from_input.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.infer_real_valued_columns_from_input.md
deleted file mode 100644
index b9de559b20..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.infer_real_valued_columns_from_input.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.contrib.learn.infer_real_valued_columns_from_input(x)` {#infer_real_valued_columns_from_input}
-
-Creates `FeatureColumn` objects for inputs defined by input `x`.
-
-This interprets all inputs as dense, fixed-length float values.
-
-##### Args:
-
-
-* <b>`x`</b>: Real-valued matrix of shape [n_samples, n_features...]. Can be
- iterator that returns arrays of features.
-
-##### Returns:
-
- List of `FeatureColumn` objects.
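-
-A minimal sketch:
-
-```python
-import numpy as np
-
-feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(
-    np.zeros([100, 4], dtype=np.float32))
-```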
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.monitors.SummaryWriterCache.clear.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.monitors.SummaryWriterCache.clear.md
deleted file mode 100644
index b77f11673d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.monitors.SummaryWriterCache.clear.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.contrib.learn.monitors.SummaryWriterCache.clear()` {#SummaryWriterCache.clear}
-
-Clear cached summary writers. Currently only used for unit tests.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.monitors.get_default_monitors.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.monitors.get_default_monitors.md
deleted file mode 100644
index 050df4d6a6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.monitors.get_default_monitors.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.learn.monitors.get_default_monitors(loss_op=None, summary_op=None, save_summary_steps=100, output_dir=None, summary_writer=None)` {#get_default_monitors}
-
-Returns a default set of typically-used monitors.
-
-##### Args:
-
-
-* <b>`loss_op`</b>: `Tensor`, the loss tensor. This will be printed using `PrintTensor`
- at the default interval.
-* <b>`summary_op`</b>: See `SummarySaver`.
-* <b>`save_summary_steps`</b>: See `SummarySaver`.
-* <b>`output_dir`</b>: See `SummarySaver`.
-* <b>`summary_writer`</b>: See `SummarySaver`.
-
-##### Returns:
-
- `list` of monitors.
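-
-A minimal sketch (the loss tensor, output path, and estimator are
-illustrative):
-
-```python
-monitors = tf.contrib.learn.monitors.get_default_monitors(
-    loss_op=loss, save_summary_steps=100, output_dir='/tmp/logdir')
-estimator.fit(input_fn=train_input_fn, steps=1000, monitors=monitors)
-```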
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.read_batch_features.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.read_batch_features.md
deleted file mode 100644
index ca012afd17..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.read_batch_features.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.contrib.learn.read_batch_features(file_pattern, batch_size, features, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, feature_queue_capacity=100, reader_num_threads=1, parse_fn=None, name=None)` {#read_batch_features}
-
-Adds operations to read, queue, batch and parse `Example` protos.
-
-Given file pattern (or list of files), will setup a queue for file names,
-read `Example` proto using provided `reader`, use batch queue to create
-batches of examples of size `batch_size` and parse example given `features`
-specification.
-
-All queue runners are added to the queue runners collection, and may be
-started via `start_queue_runners`.
-
-All ops are added to the default graph.
-
-##### Args:
-
-
-* <b>`file_pattern`</b>: List of files or pattern of file paths containing
- `Example` records. See `tf.gfile.Glob` for pattern rules.
-* <b>`batch_size`</b>: An int or scalar `Tensor` specifying the batch size to use.
-* <b>`features`</b>: A `dict` mapping feature keys to `FixedLenFeature` or
- `VarLenFeature` values.
-* <b>`reader`</b>: A function or class that returns an object with
- `read` method, (filename tensor) -> (example tensor).
-* <b>`randomize_input`</b>: Whether the input should be randomized.
-* <b>`num_epochs`</b>: Integer specifying the number of times to read through the
- dataset. If None, cycles through the dataset forever. NOTE - If specified,
- creates a variable that must be initialized, so call
- tf.local_variables_initializer() and run the op in a session.
-* <b>`queue_capacity`</b>: Capacity for input queue.
-* <b>`feature_queue_capacity`</b>: Capacity of the parsed features queue. Set this
- value to a small number, for example 5 if the parsed features are large.
-* <b>`reader_num_threads`</b>: The number of threads to read examples.
-* <b>`parse_fn`</b>: Parsing function, takes `Example` Tensor returns parsed
- representation. If `None`, no parsing is done.
-* <b>`name`</b>: Name of resulting op.
-
-##### Returns:
-
- A dict of `Tensor` or `SparseTensor` objects for each in `features`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: for invalid inputs.
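-
-A minimal sketch for TFRecord files of `Example` protos (the path and feature
-spec are illustrative):
-
-```python
-features = tf.contrib.learn.read_batch_features(
-    file_pattern='/data/train-*.tfrecord',
-    batch_size=128,
-    features={'age': tf.FixedLenFeature([1], tf.float32),
-              'query': tf.VarLenFeature(tf.string)},
-    reader=tf.TFRecordReader,
-    num_epochs=1)
-# num_epochs creates a local variable: run tf.local_variables_initializer()
-# before starting the queue runners.
-```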
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.run_feeds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.run_feeds.md
deleted file mode 100644
index bfbb588773..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.learn.run_feeds.md
+++ /dev/null
@@ -1,8 +0,0 @@
-### `tf.contrib.learn.run_feeds(*args, **kwargs)` {#run_feeds}
-
-See run_feeds_iter(). Returns a `list` instead of an iterator. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.legacy_seq2seq.one2many_rnn_seq2seq.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.legacy_seq2seq.one2many_rnn_seq2seq.md
deleted file mode 100644
index c4c90cc2be..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.legacy_seq2seq.one2many_rnn_seq2seq.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.contrib.legacy_seq2seq.one2many_rnn_seq2seq(encoder_inputs, decoder_inputs_dict, enc_cell, dec_cells_dict, num_encoder_symbols, num_decoder_symbols_dict, embedding_size, feed_previous=False, dtype=None, scope=None)` {#one2many_rnn_seq2seq}
-
-One-to-many RNN sequence-to-sequence model (multi-task).
-
-This is a multi-task sequence-to-sequence model with one encoder and multiple
-decoders. Reference to multi-task sequence-to-sequence learning can be found
-here: http://arxiv.org/abs/1511.06114
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`decoder_inputs_dict`</b>: A dictionary mapping decoder name (string) to
- the corresponding decoder_inputs; each decoder_inputs is a list of 1D
- Tensors of shape [batch_size]; num_decoders is defined as
- len(decoder_inputs_dict).
-* <b>`enc_cell`</b>: core_rnn_cell.RNNCell defining the encoder cell function and size.
-* <b>`dec_cells_dict`</b>: A dictionary mapping decoder name (string) to an
- instance of core_rnn_cell.RNNCell.
-* <b>`num_encoder_symbols`</b>: Integer; number of symbols on the encoder side.
-* <b>`num_decoder_symbols_dict`</b>: A dictionary mapping decoder name (string) to an
- integer specifying number of symbols for the corresponding decoder;
- len(num_decoder_symbols_dict) must be equal to num_decoders.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`feed_previous`</b>: Boolean or scalar Boolean Tensor; if True, only the first of
- decoder_inputs will be used (the "GO" symbol), and all other decoder
- inputs will be taken from previous outputs (as in embedding_rnn_decoder).
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`dtype`</b>: The dtype of the initial state for both the encoder and decoder
- rnn cells (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "one2many_rnn_seq2seq"
-
-##### Returns:
-
- A tuple of the form (outputs_dict, state_dict), where:
-
-* <b>`outputs_dict`</b>: A mapping from decoder name (string) to a list of the same
- length as decoder_inputs_dict[name]; each element in the list is a 2D
- Tensors with shape [batch_size x num_decoder_symbol_list[name]]
- containing the generated outputs.
-* <b>`state_dict`</b>: A mapping from decoder name (string) to the final state of the
- corresponding decoder RNN; it is a 2D Tensor of shape
- [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if enc_cell or any of the dec_cells are not instances of RNNCell.
-* <b>`ValueError`</b>: if len(dec_cells) != len(decoder_inputs_dict).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.losses.mean_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.losses.mean_squared_error.md
deleted file mode 100644
index 550dcd9eac..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.losses.mean_squared_error.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.contrib.losses.mean_squared_error(*args, **kwargs)` {#mean_squared_error}
-
-Adds a Sum-of-Squares loss to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.mean_squared_error instead.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided, then
-the loss is simply scaled by the given value. If `weights` is a tensor of size
-[batch_size], then the total loss for each sample of the batch is rescaled
-by the corresponding element in the `weights` vector. If the shape of
-`weights` matches the shape of `predictions`, then the loss of each
-measurable element of `predictions` is scaled by the corresponding value of
-`weights`.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted outputs.
-* <b>`labels`</b>: The ground truth output tensor, same dimensions as 'predictions'.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
- [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `labels` or
- if the shape of `weights` is invalid.
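-
-A minimal sketch of per-sample weighting (shapes are illustrative; the middle
-sample is masked out with weight 0):
-
-```python
-loss = tf.contrib.losses.mean_squared_error(
-    predictions, labels, weights=tf.constant([1.0, 0.0, 2.0]))
-```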
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.set_union.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.set_union.md
deleted file mode 100644
index 1bc3ec99c9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.set_union.md
+++ /dev/null
@@ -1,63 +0,0 @@
-### `tf.contrib.metrics.set_union(a, b, validate_indices=True)` {#set_union}
-
-Compute set union of elements in last dimension of `a` and `b`.
-
-All but the last dimension of `a` and `b` must match.
-
-Example:
-
-```python
- a = [
- [
- [
- [1, 2],
- [3],
- ],
- [
- [4],
- [5, 6],
- ],
- ],
- ]
- b = [
- [
- [
- [1, 3],
- [2],
- ],
- [
- [4, 5],
- [5, 6, 7, 8],
- ],
- ],
- ]
- set_union(a, b) = [
- [
- [
- [1, 2, 3],
- [2, 3],
- ],
- [
- [4, 5],
- [5, 6, 7, 8],
- ],
- ],
- ]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices
- must be sorted in row-major order.
-* <b>`b`</b>: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices
- must be sorted in row-major order.
-* <b>`validate_indices`</b>: Whether to validate the order and range of sparse indices
- in `a` and `b`.
-
-##### Returns:
-
- A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but
- the last dimension the same. Elements along the last dimension contain the
- unions.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_precision_at_thresholds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_precision_at_thresholds.md
deleted file mode 100644
index 6c0a5f9220..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_precision_at_thresholds.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.contrib.metrics.streaming_precision_at_thresholds(predictions, labels, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_precision_at_thresholds}
-
-Computes precision values for different `thresholds` on `predictions`.
-
-The `streaming_precision_at_thresholds` function creates four local variables,
-`true_positives`, `true_negatives`, `false_positives` and `false_negatives`
-for various values of thresholds. `precision[i]` is defined as the total
-weight of values in `predictions` above `thresholds[i]` whose corresponding
-entry in `labels` is `True`, divided by the total weight of values in
-`predictions` above `thresholds[i]` (`true_positives[i] / (true_positives[i] +
-false_positives[i])`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`thresholds`</b>: A python list or tuple of float thresholds in `[0, 1]`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `auc` should be
- added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`precision`</b>: A float `Tensor` of shape `[len(thresholds)]`.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables that
- are used in the computation of `precision`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
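-A minimal usage sketch (illustrative values, not from the original docs):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([0.2, 0.6, 0.8, 0.9])
-labels = tf.constant([False, True, False, True])
-precision, update_op = tf.contrib.metrics.streaming_precision_at_thresholds(
-    predictions, labels, thresholds=[0.5, 0.7])
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())  # metric variables are local
-  sess.run(update_op)                         # accumulate one batch
-  print(sess.run(precision))  # ~[0.667 0.5]
-```
-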
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_recall_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_recall_at_k.md
deleted file mode 100644
index 1ddafd7da6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_recall_at_k.md
+++ /dev/null
@@ -1,55 +0,0 @@
-### `tf.contrib.metrics.streaming_recall_at_k(*args, **kwargs)` {#streaming_recall_at_k}
-
-Computes the recall@k of the predictions with respect to dense labels. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-08.
-Instructions for updating:
-Please use `streaming_sparse_recall_at_k`, and reshape labels from [batch_size] to [batch_size, 1].
-
-The `streaming_recall_at_k` function creates two local variables, `total` and
-`count`, that are used to compute the recall@k frequency. This frequency is
-ultimately returned as `recall_at_<k>`: an idempotent operation that simply
-divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`recall_at_<k>`. Internally, an `in_top_k` operation computes a `Tensor` with
-shape [batch_size] whose elements indicate whether or not the corresponding
-label is in the top `k` `predictions`. Then `update_op` increments `total`
-with the reduced sum of `weights` where `in_top_k` is `True`, and it
-increments `count` with the reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A float `Tensor` of dimension [batch_size, num_classes].
-* <b>`labels`</b>: A `Tensor` of dimension [batch_size] whose type is in `int32`,
- `int64`.
-* <b>`k`</b>: The number of top elements to look at for computing recall.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `recall_at_k`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections `update_op` should be
- added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`recall_at_k`</b>: A `Tensor` representing the recall@k, the fraction of labels
- which fall into the top `k` predictions.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `recall_at_k`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
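-A migration sketch following the deprecation notice above (shapes are
-illustrative):
-
-```python
-import tensorflow as tf
-
-predictions = tf.placeholder(tf.float32, [None, 10])  # [batch_size, num_classes]
-labels = tf.placeholder(tf.int64, [None])             # [batch_size]
-
-# Replaces tf.contrib.metrics.streaming_recall_at_k(predictions, labels, k=5):
-recall, update_op = tf.contrib.metrics.streaming_sparse_recall_at_k(
-    predictions, tf.reshape(labels, [-1, 1]), k=5)
-```
-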
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_sparse_precision_at_top_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_sparse_precision_at_top_k.md
deleted file mode 100644
index bff835747e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_sparse_precision_at_top_k.md
+++ /dev/null
@@ -1,75 +0,0 @@
-### `tf.contrib.metrics.streaming_sparse_precision_at_top_k(top_k_predictions, labels, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_precision_at_top_k}
-
-Computes precision@k of top-k predictions with respect to sparse labels.
-
-If `class_id` is not specified, we calculate precision as the ratio of
- true positives (i.e., correct predictions, items in `top_k_predictions`
- that are found in the corresponding row in `labels`) to positives (all
- `top_k_predictions`).
-If `class_id` is specified, we calculate precision by considering only the
- rows in the batch for which `class_id` is in the top `k` highest
- `predictions`, and computing the fraction of them for which `class_id` is
- in the corresponding row in `labels`.
-
-We expect precision to decrease as `k` increases.
-
-`streaming_sparse_precision_at_top_k` creates two local variables,
-`true_positive_at_k` and `false_positive_at_k`, that are used to compute
-the precision@k frequency. This frequency is ultimately returned as
-`precision_at_k`: an idempotent operation that simply divides
-`true_positive_at_k` by the total (`true_positive_at_k` + `false_positive_at_k`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision_at_k`. Internally, set operations applied to `top_k_predictions`
-and `labels` calculate the true positives and false positives weighted by
-`weights`. Then `update_op` increments `true_positive_at_k` and
-`false_positive_at_k` using these values.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`top_k_predictions`</b>: Integer `Tensor` with shape [D1, ... DN, k] where
- N >= 1. Commonly, N=1 and top_k_predictions has shape [batch size, k].
- The final dimension contains the indices of top-k labels. [D1, ... DN]
- must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match
- `top_k_predictions`. Values should be in range [0, num_classes), where
- num_classes is the last dimension of `predictions`. Values outside this
- range are ignored.
-* <b>`class_id`</b>: Integer class ID for which we want binary metrics. This should be
- in range [0, num_classes), where num_classes is the last dimension of
- `predictions`. If `class_id` is outside this range, the method returns
- NAN.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or n-1, where n is the rank of
- `labels`. If the latter, it must be broadcastable to `labels` (i.e., all
- dimensions must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependent ops.
-
-##### Returns:
-
-
-* <b>`precision`</b>: Scalar `float64` `Tensor` with the value of `true_positives`
- divided by the sum of `true_positives` and `false_positives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_positives` variables appropriately, and whose value matches
- `precision`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match
- `predictions`, or if either `metrics_collections` or `updates_collections`
- are not a list or tuple.
-* <b>`ValueError`</b>: If `top_k_predictions` has rank < 2.
-
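-An illustrative sketch (not from the original docs) that feeds the indices
-from `tf.nn.top_k` into this metric; `logits` and `labels` are hypothetical:
-
-```python
-import tensorflow as tf
-
-logits = tf.placeholder(tf.float32, [None, 100])  # [batch_size, num_classes]
-labels = tf.placeholder(tf.int64, [None, 1])      # [batch_size, num_labels]
-
-_, top_k = tf.nn.top_k(logits, k=5)
-precision, update_op = tf.contrib.metrics.streaming_sparse_precision_at_top_k(
-    top_k_predictions=tf.to_int64(top_k), labels=labels)
-```
-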
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_true_positives.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_true_positives.md
deleted file mode 100644
index a022639c94..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.metrics.streaming_true_positives.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.metrics.streaming_true_positives(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_true_positives}
-
-Sum the weights of true_positives.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of arbitrary dimensions. Will
- be cast to `bool`.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose dimensions must match
- `predictions`. Will be cast to `bool`.
-* <b>`weights`</b>: Optional `Tensor` whose rank is either 0, or the same rank as
- `labels`, and must be broadcastable to `labels` (i.e., all dimensions
- must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value_tensor`</b>: A `Tensor` representing the current value of the metric.
-* <b>`update_op`</b>: An operation that accumulates the error from a batch of data.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
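-A minimal sketch (illustrative):
-
-```python
-import tensorflow as tf
-
-tp, tp_update = tf.contrib.metrics.streaming_true_positives(
-    predictions=tf.constant([True, False, True, True]),
-    labels=tf.constant([True, False, False, True]))
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(tp_update)
-  print(sess.run(tp))  # => 2.0 (entries that are True in both tensors)
-```
-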
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.DeviceWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.DeviceWrapper.md
deleted file mode 100644
index 7375b99f80..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.DeviceWrapper.md
+++ /dev/null
@@ -1,62 +0,0 @@
-Operator that ensures an RNNCell runs on a particular device.
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.__call__(inputs, state, scope=None)` {#DeviceWrapper.__call__}
-
-Run the cell on specified device.
-
-
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.__init__(cell, device)` {#DeviceWrapper.__init__}
-
-Construct a `DeviceWrapper` for `cell` with device `device`.
-
-Ensures the wrapped `cell` is called with `tf.device(device)`.
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of `RNNCell`.
-* <b>`device`</b>: A device string or function, for passing to `tf.device`.
-
-
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.output_size` {#DeviceWrapper.output_size}
-
-Integer or TensorShape: size of outputs produced by this cell.
-
-
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.state_size` {#DeviceWrapper.state_size}
-
-size(s) of state(s) used by this cell.
-
-It can be represented by an Integer, a TensorShape or a tuple of Integers
-or TensorShapes.
-
-
-- - -
-
-#### `tf.contrib.rnn.DeviceWrapper.zero_state(batch_size, dtype)` {#DeviceWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
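-A minimal sketch (the device string is an assumption):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=64)
-cell = tf.contrib.rnn.DeviceWrapper(cell, "/gpu:0")  # pin the cell's ops
-```
-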
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md
deleted file mode 100644
index 467baa90cb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.LSTMBlockCell.md
+++ /dev/null
@@ -1,67 +0,0 @@
-Basic LSTM recurrent network cell.
-
-The implementation is based on: http://arxiv.org/abs/1409.2329.
-
-We add `forget_bias` (default: 1) to the biases of the forget gate in order to
-reduce the scale of forgetting in the beginning of the training.
-
-Unlike `core_rnn_cell.LSTMCell`, this is a monolithic op and should be much
-faster. The weight and bias matrices should be compatible as long as the
-variable scope matches.
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.__call__(x, states_prev, scope=None)` {#LSTMBlockCell.__call__}
-
-Long short-term memory cell (LSTM).
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False)` {#LSTMBlockCell.__init__}
-
-Initialize the basic LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
-* <b>`use_peephole`</b>: Whether to use peephole connections or not.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.output_size` {#LSTMBlockCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.state_size` {#LSTMBlockCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockCell.zero_state(batch_size, dtype)` {#LSTMBlockCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
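-A minimal sketch (input shapes are assumptions):
-
-```python
-import tensorflow as tf
-
-inputs = tf.placeholder(tf.float32, [None, 50, 32])  # [batch, time, features]
-cell = tf.contrib.rnn.LSTMBlockCell(num_units=128, forget_bias=1.0)
-outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
-```
-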
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.TimeFreqLSTMCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.TimeFreqLSTMCell.md
deleted file mode 100644
index 575d927cfe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.rnn.TimeFreqLSTMCell.md
+++ /dev/null
@@ -1,100 +0,0 @@
-Time-Frequency Long short-term memory unit (LSTM) recurrent network cell.
-
-This implementation is based on:
-
- Tara N. Sainath and Bo Li
- "Modeling Time-Frequency Patterns with LSTM vs. Convolutional Architectures
- for LVCSR Tasks." submitted to INTERSPEECH, 2016.
-
-It uses peep-hole connections and optional cell clipping.
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.__call__(inputs, state, scope=None)` {#TimeFreqLSTMCell.__call__}
-
-Run one step of LSTM.
-
-##### Args:
-
-
-* <b>`inputs`</b>: input Tensor, 2D, batch x num_units.
-* <b>`state`</b>: state Tensor, 2D, batch x state_size.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "TimeFreqLSTMCell".
-
-##### Returns:
-
- A tuple containing:
- - A 2D, batch x output_dim, Tensor representing the output of the LSTM
- after reading "inputs" when previous state was "state".
- Here output_dim is num_units.
- - A 2D, batch x state_size, Tensor representing the new state of LSTM
- after reading "inputs" when previous state was "state".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an input_size was specified and the provided inputs have
- a different dimension.
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.__init__(num_units, use_peepholes=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None)` {#TimeFreqLSTMCell.__init__}
-
-Initialize the parameters for an LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell
-* <b>`use_peepholes`</b>: bool, set True to enable diagonal/peephole connections.
-* <b>`cell_clip`</b>: (optional) A float value, if provided the cell state is clipped
- by this value prior to the cell output activation.
-* <b>`initializer`</b>: (optional) The initializer to use for the weight and
- projection matrices.
-* <b>`num_unit_shards`</b>: int, How to split the weight matrix. If >1, the weight
- matrix is stored across num_unit_shards.
-* <b>`forget_bias`</b>: float, Biases of the forget gate are initialized by default
- to 1 in order to reduce the scale of forgetting at the beginning
- of the training.
-* <b>`feature_size`</b>: int, The size of the input feature the LSTM spans over.
-* <b>`frequency_skip`</b>: int, The amount the LSTM filter is shifted by in
- frequency.
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.output_size` {#TimeFreqLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.state_size` {#TimeFreqLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.TimeFreqLSTMCell.zero_state(batch_size, dtype)` {#TimeFreqLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
- If `state_size` is an int or TensorShape, then the return value is a
- `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
- If `state_size` is a nested list or tuple, then the return value is
- a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
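-A construction sketch using the documented arguments (values illustrative):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.TimeFreqLSTMCell(
-    num_units=64, use_peepholes=True, cell_clip=10.0,
-    feature_size=40, frequency_skip=1)
-```
-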
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.training.NextQueuedSequenceBatch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.training.NextQueuedSequenceBatch.md
deleted file mode 100644
index fa1095c17b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.training.NextQueuedSequenceBatch.md
+++ /dev/null
@@ -1,265 +0,0 @@
-NextQueuedSequenceBatch stores deferred SequenceQueueingStateSaver data.
-
-This class is instantiated by `SequenceQueueingStateSaver` and is accessible
-via its `next_batch` property.
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.__init__(state_saver)` {#NextQueuedSequenceBatch.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.batch_size` {#NextQueuedSequenceBatch.batch_size}
-
-The batch_size of the given batch.
-
-Usually, this is the batch_size requested when initializing the SQSS, but
-if `allow_small_batch=True`, this will become smaller when inputs are
-exhausted.
-
-##### Returns:
-
- A scalar integer tensor, the batch_size
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.context` {#NextQueuedSequenceBatch.context}
-
-A dict mapping keys of `input_context` to batched context.
-
-##### Returns:
-
- A dict mapping keys of `input_context` to tensors.
- If we had at input:
-
- ```python
- context["name"].get_shape() == [d1, d2, ...]
- ```
-
- then for this property:
-
- ```python
- context["name"].get_shape() == [batch_size, d1, d2, ...]
- ```
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.insertion_index` {#NextQueuedSequenceBatch.insertion_index}
-
-The insertion indices of the examples (when they were first added).
-
-These indices start with the value -2**63 and increase with every
-call to the prefetch op. Each whole example gets its own insertion
-index, and this is used to prioritize the example so that its truncated
-segments appear in adjacent iterations, even if new examples are inserted
-by the prefetch op between iterations.
-
-##### Returns:
-
- An int64 vector of length `batch_size`, the insertion indices.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.key` {#NextQueuedSequenceBatch.key}
-
-The key names of the given truncated unrolled examples.
-
-The format of the key is:
-
-```python
-"%05d_of_%05d:%s" % (sequence, sequence_count, original_key)
-```
-
-where `original_key` is the unique key read in by the prefetcher.
-
-##### Returns:
-
- A string vector of length `batch_size`, the keys.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.length` {#NextQueuedSequenceBatch.length}
-
-The lengths of the given truncated unrolled examples.
-
-For initial iterations, for which `sequence * num_unroll < length`,
-this number is `num_unroll`. For the remainder,
-this number is between `0` and `num_unroll`.
-
-##### Returns:
-
- An integer vector of length `batch_size`, the lengths.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.next_key` {#NextQueuedSequenceBatch.next_key}
-
-The key names of the next (in iteration) truncated unrolled examples.
-
-The format of the key is:
-
-```python
-"%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key)
-```
-
-if `sequence + 1 < sequence_count`, otherwise:
-
-```python
-"STOP:%s" % original_key
-```
-
-where `original_key` is the unique key read in by the prefetcher.
-
-##### Returns:
-
- A string vector of length `batch_size`, the keys.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.save_state(state_name, value, name=None)` {#NextQueuedSequenceBatch.save_state}
-
-Returns an op to save the current batch of state `state_name`.
-
-##### Args:
-
-
-* <b>`state_name`</b>: string, matches a key provided in `initial_states`.
-* <b>`value`</b>: A `Tensor`.
- Its type must match that of `initial_states[state_name].dtype`.
- If we had at input:
-
- ```python
- initial_states[state_name].get_shape() == [d1, d2, ...]
- ```
-
- then the shape of `value` must match:
-
- ```python
- tf.shape(value) == [batch_size, d1, d2, ...]
- ```
-
-
-* <b>`name`</b>: string (optional). The name scope for newly created ops.
-
-##### Returns:
-
- A control flow op that stores the new state of each entry into
- the state saver. This op must be run for every iteration that
- accesses data from the state saver (otherwise the state saver
- will never progress through its states and run out of capacity).
-
-##### Raises:
-
-
-* <b>`KeyError`</b>: if `state_name` does not match any of the initial states
- declared in `initial_states`.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.sequence` {#NextQueuedSequenceBatch.sequence}
-
-An int32 vector, length `batch_size`: the sequence index of each entry.
-
-When an input is split up, the sequence values
-```
-0, 1, ..., sequence_count - 1
-```
-are assigned to each split.
-
-##### Returns:
-
- An int32 vector `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.sequence_count` {#NextQueuedSequenceBatch.sequence_count}
-
-An int32 vector, length `batch_size`: the sequence count of each entry.
-
-When an input is split up, the number of splits is equal to:
-`padded_length / num_unroll`. This is the sequence_count.
-
-##### Returns:
-
- An int32 vector `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.sequences` {#NextQueuedSequenceBatch.sequences}
-
-A dict mapping keys of `input_sequences` to split and rebatched data.
-
-##### Returns:
-
- A dict mapping keys of `input_sequences` to tensors.
- If we had at input:
-
- ```python
- sequences["name"].get_shape() == [None, d1, d2, ...]
- ```
-
- where `None` meant the sequence time was dynamic, then for this property:
-
- ```python
- sequences["name"].get_shape() == [batch_size, num_unroll, d1, d2, ...].
- ```
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.state(state_name)` {#NextQueuedSequenceBatch.state}
-
-Returns batched state tensors.
-
-##### Args:
-
-
-* <b>`state_name`</b>: string, matches a key provided in `initial_states`.
-
-##### Returns:
-
- A `Tensor`: a batched set of states, either initial states (if this is
- the first run of the given example), or a value as stored during
- a previous iteration via `save_state` control flow.
- Its type is the same as `initial_states["state_name"].dtype`.
- If we had at input:
-
- ```python
- initial_states[state_name].get_shape() == [d1, d2, ...],
- ```
-
- then
-
- ```python
- state(state_name).get_shape() == [batch_size, d1, d2, ...]
- ```
-
-##### Raises:
-
-
-* <b>`KeyError`</b>: if `state_name` does not match any of the initial states
- declared in `initial_states`.
-
-
-- - -
-
-#### `tf.contrib.training.NextQueuedSequenceBatch.total_length` {#NextQueuedSequenceBatch.total_length}
-
-The lengths of the original (non-truncated) unrolled examples.
-
-##### Returns:
-
- An integer vector of length `batch_size`, the total lengths.
-
-
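-A schematic fragment (hypothetical names; `batch` would come from
-`tf.contrib.training.batch_sequences_with_states`) showing the
-`state`/`save_state` round trip:
-
-```python
-import tensorflow as tf
-
-def step(batch, new_state, outputs):
-  """Read this segment's start state, then persist `new_state`."""
-  start_state = batch.state("lstm_state")   # initial or previously saved
-  save_op = batch.save_state("lstm_state", new_state)
-  with tf.control_dependencies([save_op]):  # must run every iteration
-    return start_state, tf.identity(outputs)
-```
-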
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.training.batch_sequences_with_states.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.training.batch_sequences_with_states.md
deleted file mode 100644
index 63c3a47229..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.training.batch_sequences_with_states.md
+++ /dev/null
@@ -1,167 +0,0 @@
-### `tf.contrib.training.batch_sequences_with_states(input_key, input_sequences, input_context, input_length, initial_states, num_unroll, batch_size, num_threads=3, capacity=1000, allow_small_batch=True, pad=True, make_keys_unique=False, make_keys_unique_seed=None, name=None)` {#batch_sequences_with_states}
-
-Creates batches of segments of sequential input.
-
-This method creates a `SequenceQueueingStateSaver` (SQSS) and adds it to
-the queuerunners. It returns a `NextQueuedSequenceBatch`.
-
-It accepts one example at a time identified by a unique `input_key`.
-`input_sequences` is a dict whose values are tensors with time as the first
-dimension. This time dimension must be the same across those tensors of an
-example. It can vary across examples, although it always has to be a multiple
-of `num_unroll`. Hence, padding may be necessary, and it is turned on by
-default with `pad=True`.
-
-`input_length` is a scalar `Tensor` or an int recording the time dimension prior
-to padding. It should be between 0 and the time dimension. One reason we want
-to keep track of it is so that we can take it into consideration when
-computing the loss. If `pad=True` then `input_length` can be `None` and will
-be inferred.
-
-This method segments `input_sequences` into segments of length `num_unroll`.
-It batches input sequences from `batch_size` many examples. These mini-batches
-are available through the `sequences` property of the output. Moreover, for
-each entry in the batch we can access its original `input_key` in `key` and
-its input length in `total_length`. `length` records how many non-padded time
-steps there are within each segment.
-
-Static features of an example that do not vary across time can be part of the
-`input_context`, a dict with Tensor values. This method copies the context for
-each segment and makes it available in the `context` of the output.
-
-This method can maintain and update a state for each example. It accepts
-`initial_states` as a dict with `Tensor` values. The first mini-batch in which
-an example appears has `initial_states` as its entry of `state`. If
-`save_state` is called, then the next segment of that example receives the
-updated `state` entry.
-See `NextQueuedSequenceBatch` for a complete list of properties and methods.
-
-Example usage:
-
-```python
-batch_size = 32
-num_unroll = 20
-num_enqueue_threads = 3
-lstm_size = 8
-# Use a non-tuple state so it can be saved under a single state name.
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size, state_is_tuple=False)
-
-key, sequences, context = my_parser(raw_data)
-initial_state_values = tf.zeros((cell.state_size,), dtype=tf.float32)
-initial_states = {"lstm_state": initial_state_values}
-batch = tf.contrib.training.batch_sequences_with_states(
-    input_key=key,
-    input_sequences=sequences,
-    input_context=context,
-    input_length=None,  # inferred from the sequences, since pad=True
- initial_states=initial_states,
- num_unroll=num_unroll,
- batch_size=batch_size,
- num_threads=num_enqueue_threads,
- capacity=batch_size * num_enqueue_threads * 2)
-
-inputs = batch.sequences["input"]
-context_label = batch.context["label"]
-
-inputs_by_time = tf.split(value=inputs, num_or_size_splits=num_unroll, axis=1)
-assert len(inputs_by_time) == num_unroll
-
-lstm_output, _ = tf.contrib.rnn.static_state_saving_rnn(
- cell,
- inputs_by_time,
- state_saver=batch,
- state_name="lstm_state")
-
-# Start a prefetcher in the background
-sess = tf.Session()
-
-tf.train.start_queue_runners(sess=sess)
-
-while True:
-  # Step through batches, perform training or inference...
-  sess.run([lstm_output])
-```
-
-##### Args:
-
-
-* <b>`input_key`</b>: A string scalar `Tensor`, the **unique** key for the given
- input example. This is used to keep track of the split minibatch elements
- of this input. Batched keys of the current iteration are made
- accessible via the `key` property. The shape of `input_key` (scalar) must
- be fully specified. Consider setting `make_keys_unique` to True when
- iterating over the same input multiple times.
-
- **Note**: if `make_keys_unique=False` then `input_key`s must be unique.
-
-* <b>`input_sequences`</b>: A dict mapping string names to `Tensor` values. The values
- must all have matching first dimension, called `value_length`. They may
- vary from input to input. The remainder of the shape (other than the first
- dimension) must be fully specified.
- The `SequenceQueueingStateSaver` will split these tensors along
-    this first dimension into minibatch elements of dimension `num_unroll`.
- Batched and segmented sequences of the current iteration are made
- accessible via the `sequences` property.
-
- **Note**: if `pad=False`, then `value_length` must always be a multiple
- of `num_unroll`.
-
-* <b>`input_context`</b>: A dict mapping string names to `Tensor` values. The values
- are treated as "global" across all time splits of the given input example,
- and will be copied across for all minibatch elements accordingly.
- Batched and copied context of the current iteration are made
- accessible via the `context` property.
-
- **Note**: All input_context values must have fully defined shapes.
-
-* <b>`input_length`</b>: None or an int32 scalar `Tensor`, the length of the sequence
- prior to padding. If `input_length=None` and `pad=True` then the length
- will be inferred and will be equal to `value_length`. If `pad=False` then
-    `input_length` cannot be `None` and must be specified. The shape of
-    `input_length` (a scalar) must be fully specified. Its value may be
- at most `value_length` for any given input (see above for the definition
- of `value_length`). Batched and total lengths of the current iteration are
- made accessible via the `length` and `total_length` properties.
-* <b>`initial_states`</b>: A dict mapping string state names to multi-dimensional
- values (e.g. constants or tensors). This input defines the set of
- states that will be kept track of during computing iterations, and
- which can be accessed via the `state` and `save_state` methods.
-
- **Note**: All initial_state values must have fully defined shapes.
-
-* <b>`num_unroll`</b>: Python integer, how many time steps to unroll at a time.
-    The input sequences of length `k` are then split into `k / num_unroll`
-    segments.
-* <b>`batch_size`</b>: int or int32 scalar `Tensor`, how large minibatches should
- be when accessing the `state()` method and `context`, `sequences`, etc,
- properties.
-* <b>`num_threads`</b>: The int number of threads enqueuing input examples into a
- queue.
-* <b>`capacity`</b>: The max capacity of the queue in number of examples. Needs to be
-    at least `batch_size`. Defaults to 1000. When iterating over the same
-    input examples multiple times and reusing their keys, the `capacity` must
-    be smaller than the number of examples.
-* <b>`allow_small_batch`</b>: If true, the queue will return smaller batches when
- there aren't enough input examples to fill a whole batch and the end of
- the input has been reached.
-* <b>`pad`</b>: If `True`, `input_sequences` will be padded to multiple of
- `num_unroll`. In that case `input_length` may be `None` and is assumed to
- be the length of first dimension of values in `input_sequences`
- (i.e. `value_length`).
-* <b>`make_keys_unique`</b>: Whether to append a random integer to the `input_key` in
- an effort to make it unique. The seed can be set via
- `make_keys_unique_seed`.
-* <b>`make_keys_unique_seed`</b>: If `make_keys_unique=True` this fixes the seed with
- which a random postfix is generated.
-* <b>`name`</b>: An op name string (optional).
-
-##### Returns:
-
- A NextQueuedSequenceBatch with segmented and batched inputs and their
- states.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any of the inputs is not an expected type.
-* <b>`ValueError`</b>: if any of the input values is inconsistent, e.g. if
- not enough shape information is available from inputs to build
- the state saver.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.stripped_op_list_for_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.stripped_op_list_for_graph.md
deleted file mode 100644
index 23bfb28542..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.contrib.util.stripped_op_list_for_graph.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.util.stripped_op_list_for_graph(graph_def)` {#stripped_op_list_for_graph}
-
-Collect the stripped OpDefs for ops used by a graph.
-
-This function computes the `stripped_op_list` field of `MetaGraphDef` and
-similar protos. The result can be communicated from the producer to the
-consumer, which can then use the C++ function
-`RemoveNewDefaultAttrsFromGraphDef` to improve forwards compatibility.
-
-##### Args:
-
-
-* <b>`graph_def`</b>: A `GraphDef` proto, as from `graph.as_graph_def()`.
-
-##### Returns:
-
- An `OpList` of ops used by the graph.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If an unregistered op is used.
-
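-A short sketch (illustrative):
-
-```python
-import tensorflow as tf
-
-tf.add(tf.constant(1.0), tf.constant(2.0))  # build a tiny graph
-graph_def = tf.get_default_graph().as_graph_def()
-op_list = tf.contrib.util.stripped_op_list_for_graph(graph_def)
-print(sorted(op.name for op in op_list.op))  # e.g. ['Add', 'Const']
-```
-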
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.decode_base64.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.decode_base64.md
deleted file mode 100644
index 0d490e313b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.decode_base64.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.decode_base64(input, name=None)` {#decode_base64}
-
-Decode web-safe base64-encoded strings.
-
-Input may or may not have padding at the end. See EncodeBase64 for padding.
-Web-safe means that input must use - and _ instead of + and /.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. Base64 strings to decode.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. Decoded strings.
-
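-A small sketch (illustrative strings):
-
-```python
-import tensorflow as tf
-
-encoded = tf.constant(["aGVsbG8", "d29ybGQ="])  # padding is optional
-decoded = tf.decode_base64(encoded)
-
-with tf.Session() as sess:
-  print(sess.run(decoded))  # => [b'hello' b'world']
-```
-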
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.FailedPreconditionError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.FailedPreconditionError.md
deleted file mode 100644
index 1cbd338bf9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.errors.FailedPreconditionError.md
+++ /dev/null
@@ -1,13 +0,0 @@
-Operation was rejected because the system is not in a state to execute it.
-
-This exception is most commonly raised when running an operation
-that reads a [`tf.Variable`](../../api_docs/python/state_ops.md#Variable)
-before it has been initialized.
-
-- - -
-
-#### `tf.errors.FailedPreconditionError.__init__(node_def, op, message)` {#FailedPreconditionError.__init__}
-
-Creates a `FailedPreconditionError`.
-
-
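-For illustration, the common way to hit (and recover from) this error:
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(42)
-with tf.Session() as sess:
-  try:
-    sess.run(v)  # reading an uninitialized variable
-  except tf.errors.FailedPreconditionError:
-    sess.run(v.initializer)  # initialize, then retry
-    print(sess.run(v))       # => 42
-```
-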
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.extract_image_patches.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.extract_image_patches.md
deleted file mode 100644
index 9732ad8de4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.extract_image_patches.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.extract_image_patches(images, ksizes, strides, rates, padding, name=None)` {#extract_image_patches}
-
-Extract `patches` from `images` and put them in the "depth" output dimension.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
- 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
-* <b>`ksizes`</b>: A list of `ints` that has length `>= 4`.
- The size of the sliding window for each dimension of `images`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 4`.
- 1-D of length 4. How far the centers of two consecutive patches are in
- the images. Must be: `[1, stride_rows, stride_cols, 1]`.
-* <b>`rates`</b>: A list of `ints` that has length `>= 4`.
- 1-D of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the
- input stride, specifying how far two consecutive patch samples are in the
- input. Equivalent to extracting patches with
- `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by
- subsampling them spatially by a factor of `rates`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-
- We specify the size-related attributes as:
-
- ```python
- ksizes = [1, ksize_rows, ksize_cols, 1]
- strides = [1, strides_rows, strides_cols, 1]
- rates = [1, rates_rows, rates_cols, 1]
- ```
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`.
- 4-D Tensor with shape `[batch, out_rows, out_cols, ksize_rows *
- ksize_cols * depth]` containing image patches with size
- `ksize_rows x ksize_cols x depth` vectorized in the "depth" dimension.
-
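-A shape-level sketch (illustrative):
-
-```python
-import tensorflow as tf
-
-images = tf.reshape(tf.range(16.0), [1, 4, 4, 1])
-patches = tf.extract_image_patches(
-    images, ksizes=[1, 2, 2, 1], strides=[1, 2, 2, 1],
-    rates=[1, 1, 1, 1], padding="VALID")
-print(patches.shape)  # (1, 2, 2, 4): four non-overlapping 2x2 patches
-```
-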
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.fixed_size_partitioner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.fixed_size_partitioner.md
deleted file mode 100644
index fdeea7f207..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.fixed_size_partitioner.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.fixed_size_partitioner(num_shards, axis=0)` {#fixed_size_partitioner}
-
-Partitioner to specify a fixed number of shards along given axis.
-
-##### Args:
-
-
-* <b>`num_shards`</b>: `int`, number of shards to partition variable.
-* <b>`axis`</b>: `int`, axis to partition on.
-
-##### Returns:
-
- A partition function usable as the `partitioner` argument to
- `variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
-
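-A usage sketch (shapes are illustrative):
-
-```python
-import tensorflow as tf
-
-partitioner = tf.fixed_size_partitioner(num_shards=4)
-with tf.variable_scope("embedding", partitioner=partitioner):
-  # Stored as 4 shards of shape [25000, 64] along axis 0.
-  weights = tf.get_variable("weights", shape=[100000, 64])
-```
-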
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.floor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.floor.md
deleted file mode 100644
index 4aadcff6ef..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.floor.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.floor(x, name=None)` {#floor}
-
-Returns element-wise largest integer not greater than x.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.greater.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.greater.md
deleted file mode 100644
index 99b34aaca4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.greater.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.greater(x, y, name=None)` {#greater}
-
-Returns the truth value of (x > y) element-wise.
-
-*NOTE*: `Greater` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
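-A one-line illustration of the broadcasting noted above:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[1, 4], [3, 2]])
-y = tf.constant([2])        # broadcast against every element of `x`
-result = tf.greater(x, y)   # => [[False, True], [True, False]]
-```
-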
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.histogram_fixed_width.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.histogram_fixed_width.md
deleted file mode 100644
index 3334d6d09d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.histogram_fixed_width.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.histogram_fixed_width(values, value_range, nbins=100, dtype=tf.int32, name=None)` {#histogram_fixed_width}
-
-Return histogram of values.
-
-Given the tensor `values`, this operation returns a rank 1 histogram counting
-the number of entries in `values` that fell into every bin. The bins are
-equal width and determined by the arguments `value_range` and `nbins`.
-
-##### Args:
-
-
-* <b>`values`</b>: Numeric `Tensor`.
-* <b>`value_range`</b>: Shape [2] `Tensor`. Values <= value_range[0] will be
-    mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
-    Must be the same dtype as `values`.
-* <b>`nbins`</b>: Scalar `int32 Tensor`. Number of histogram bins.
-* <b>`dtype`</b>: dtype for returned histogram.
-* <b>`name`</b>: A name for this operation (defaults to 'histogram_fixed_width').
-
-##### Returns:
-
- A 1-D `Tensor` holding histogram of values.
-
-
-##### Examples:
-
-```python
-# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
-nbins = 5
-value_range = [0.0, 5.0]
-new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]
-
-with tf.Session() as sess:
-  hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
-  print(sess.run(hist))  # => [2 1 1 0 2]
-```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.hsv_to_rgb.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.hsv_to_rgb.md
deleted file mode 100644
index 9bb9c51198..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.image.hsv_to_rgb.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.image.hsv_to_rgb(images, name=None)` {#hsv_to_rgb}
-
-Convert one or more images from HSV to RGB.
-
-Outputs a tensor of the same shape as the `images` tensor, containing the RGB
-value of the pixels. The output is only well defined if the value in `images`
-are in `[0,1]`.
-
-See `rgb_to_hsv` for a description of the HSV encoding.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 1-D or higher rank. HSV data to convert. Last dimension must be size 3.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`. `images` converted to RGB.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.initialize_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.initialize_variables.md
deleted file mode 100644
index 3ab51c4b3c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.initialize_variables.md
+++ /dev/null
@@ -1,8 +0,0 @@
-### `tf.initialize_variables(*args, **kwargs)` {#initialize_variables}
-
-See `tf.variables_initializer`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Use `tf.variables_initializer` instead.
-
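-The replacement, per the instructions above:
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(0)
-init_op = tf.variables_initializer([v])  # not tf.initialize_variables([v])
-```
-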
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.log.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.log.md
deleted file mode 100644
index a6c085b5cf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.log.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.log(x, name=None)` {#log}
-
-Computes natural logarithm of x element-wise.
-
-I.e., \\(y = \log_e x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv2d_backprop_input.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv2d_backprop_input.md
deleted file mode 100644
index dc223e8ec1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv2d_backprop_input.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.nn.conv2d_backprop_input(input_sizes, filter, out_backprop, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv2d_backprop_input}
-
-Computes the gradients of convolution with respect to the input.
-
-##### Args:
-
-
-* <b>`input_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the shape of `input`,
- where `input` is a 4-D `[batch, height, width, channels]` tensor.
-* <b>`filter`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
- 4-D with shape
- `[filter_height, filter_width, in_channels, out_channels]`.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `filter`.
- 4-D with shape `[batch, out_height, out_width, out_channels]`.
- Gradients w.r.t. the output of the convolution.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- of the convolution. Must be in the same order as the dimension specified with
- format.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`use_cudnn_on_gpu`</b>: An optional `bool`. Defaults to `True`.
-* <b>`data_format`</b>: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`.
- Specify the data format of the input and output data. With the
- default format "NHWC", the data is stored in the order of:
- [batch, in_height, in_width, in_channels].
- Alternatively, the format could be "NCHW", the data storage order of:
- [batch, in_channels, in_height, in_width].
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `filter`.
- 4-D with shape `[batch, in_height, in_width, in_channels]`. Gradient
- w.r.t. the input of the convolution.
-
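-A shape-level sketch (illustrative; `tf.ones` stands in for real gradients):
-
-```python
-import tensorflow as tf
-
-filt = tf.ones([3, 3, 3, 16])        # [fh, fw, in_channels, out_channels]
-grad_out = tf.ones([8, 32, 32, 16])  # gradients w.r.t. the conv output
-grad_in = tf.nn.conv2d_backprop_input(
-    input_sizes=[8, 32, 32, 3], filter=filt, out_backprop=grad_out,
-    strides=[1, 1, 1, 1], padding="SAME")
-print(grad_in.shape)  # (8, 32, 32, 3)
-```
-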
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv2d_transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv2d_transpose.md
deleted file mode 100644
index b5a2ed50de..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.conv2d_transpose.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', data_format='NHWC', name=None)` {#conv2d_transpose}
-
-The transpose of `conv2d`.
-
-This operation is sometimes called "deconvolution" after [Deconvolutional
-Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is
-actually the transpose (gradient) of `conv2d` rather than an actual
-deconvolution.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of type `float` and shape
- `[batch, height, width, in_channels]` for `NHWC` data format or
- `[batch, in_channels, height, width]` for `NCHW` data format.
-* <b>`filter`</b>: A 4-D `Tensor` with the same type as `value` and shape
- `[height, width, output_channels, in_channels]`. `filter`'s
- `in_channels` dimension must match that of `value`.
-* <b>`output_shape`</b>: A 1-D `Tensor` representing the output shape of the
- deconvolution op.
-* <b>`strides`</b>: A list of ints. The stride of the sliding window for each
- dimension of the input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filter`'s shape, or if
- padding is other than `'VALID'` or `'SAME'`.
-
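-A shape-level sketch of 2x upsampling (illustrative):
-
-```python
-import tensorflow as tf
-
-value = tf.ones([1, 8, 8, 16])
-filt = tf.ones([3, 3, 32, 16])  # [height, width, output_channels, in_channels]
-up = tf.nn.conv2d_transpose(
-    value, filt, output_shape=[1, 16, 16, 32],
-    strides=[1, 2, 2, 1], padding="SAME")
-print(up.shape)  # (1, 16, 16, 32)
-```
-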
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.ctc_beam_search_decoder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.ctc_beam_search_decoder.md
deleted file mode 100644
index e02a076cb3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.ctc_beam_search_decoder.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.nn.ctc_beam_search_decoder(inputs, sequence_length, beam_width=100, top_paths=1, merge_repeated=True)` {#ctc_beam_search_decoder}
-
-Performs beam search decoding on the logits given in input.
-
-**Note**: The `ctc_greedy_decoder` is a special case of the
-`ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but
-that decoder is faster for this special case).
-
-If `merge_repeated` is `True`, merge repeated classes in the output beams.
-This means that if consecutive entries in a beam are the same,
-only the first of these is emitted. That is, when the top path
-is `A B B B B`, the return value is:
-
- * `A B` if `merge_repeated = True`.
- * `A B B B B` if `merge_repeated = False`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: 3-D `float` `Tensor`, size
- `[max_time x batch_size x num_classes]`. The logits.
-* <b>`sequence_length`</b>: 1-D `int32` vector containing sequence lengths,
- having size `[batch_size]`.
-* <b>`beam_width`</b>: An int scalar >= 0 (beam search beam width).
-* <b>`top_paths`</b>: An int scalar >= 0, <= beam_width (controls output size).
-* <b>`merge_repeated`</b>: Boolean. Default: True.
-
-##### Returns:
-
- A tuple `(decoded, log_probabilities)` where
-
-* <b>`decoded`</b>: A list of length top_paths, where `decoded[j]`
- is a `SparseTensor` containing the decoded outputs:
- `decoded[j].indices`: Indices matrix `(total_decoded_outputs[j] x 2)`
- The rows store: [batch, time].
- `decoded[j].values`: Values vector, size `(total_decoded_outputs[j])`.
- The vector stores the decoded classes for beam j.
- `decoded[j].shape`: Shape vector, size `(2)`.
- The shape values are: `[batch_size, max_decoded_length[j]]`.
-* <b>`log_probabilities`</b>: A `float` matrix `(batch_size x top_paths)` containing
- sequence log-probabilities.
-
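-A usage sketch (shapes are illustrative; `logits` and `seq_len` are
-placeholders):
-
-```python
-import tensorflow as tf
-
-logits = tf.placeholder(tf.float32, [75, 16, 28])  # [max_time, batch, classes]
-seq_len = tf.placeholder(tf.int32, [16])           # [batch_size]
-
-decoded, log_prob = tf.nn.ctc_beam_search_decoder(
-    logits, seq_len, beam_width=100, top_paths=1)
-best = tf.sparse_tensor_to_dense(decoded[0], default_value=-1)
-```
-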
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md
deleted file mode 100644
index fbb0a9f09a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.nn.depthwise_conv2d(input, filter, strides, padding, rate=None, name=None)` {#depthwise_conv2d}
-
-Depthwise 2-D convolution.
-
-Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
-and a filter tensor of shape
-`[filter_height, filter_width, in_channels, channel_multiplier]`
-containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d`
-applies a different filter to each input channel (expanding from 1 channel
-to `channel_multiplier` channels for each), then concatenates the results
-together. The output has `in_channels * channel_multiplier` channels.
-
-In detail,
-
- output[b, i, j, k * channel_multiplier + q] = sum_{di, dj}
- filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di,
- strides[2] * j + rate[1] * dj, k]
-
-Must have `strides[0] = strides[3] = 1`. For the most common case of the
-same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-If any value in `rate` is greater than 1, we perform atrous depthwise
-convolution, in which case all values in the `strides` tensor must be equal
-to 1.
-
-##### Args:
-
-
-* <b>`input`</b>: 4-D with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`filter`</b>: 4-D with shape
- `[filter_height, filter_width, in_channels, channel_multiplier]`.
-* <b>`strides`</b>: 1-D of size 4. The stride of the sliding window for each
- dimension of `input`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment
- here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`rate`</b>: 1-D of size 2. The dilation rate in which we sample input values
- across the `height` and `width` dimensions in atrous convolution. If it is
- greater than 1, then all values of strides must be 1.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A 4-D `Tensor` of shape
- `[batch, out_height, out_width, in_channels * channel_multiplier].`
-
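-A shape-level sketch of the channel expansion (illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.ones([1, 28, 28, 3])
-w = tf.ones([3, 3, 3, 2])  # channel_multiplier = 2
-y = tf.nn.depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
-print(y.shape)  # (1, 28, 28, 6): in_channels * channel_multiplier
-```
-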
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d_native.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d_native.md
deleted file mode 100644
index c2736f1ba9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.depthwise_conv2d_native.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.nn.depthwise_conv2d_native(input, filter, strides, padding, name=None)` {#depthwise_conv2d_native}
-
-Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.
-
-Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
-and a filter / kernel tensor of shape
-`[filter_height, filter_width, in_channels, channel_multiplier]`, containing
-`in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies
-a different filter to each input channel (expanding from 1 channel to
-`channel_multiplier` channels for each), then concatenates the results
-together. Thus, the output has `in_channels * channel_multiplier` channels.
-
-for k in 0..in_channels-1
- for q in 0..channel_multiplier-1
- output[b, i, j, k * channel_multiplier + q] =
- sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
- filter[di, dj, k, q]
-
-Must have `strides[0] = strides[3] = 1`. For the most common case of the same
-horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`filter`</b>: A `Tensor`. Must have the same type as `input`.
-* <b>`strides`</b>: A list of `ints`.
- 1-D of length 4. The stride of the sliding window for each dimension
- of `input`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.dilation2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.dilation2d.md
deleted file mode 100644
index b9cf01da19..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.dilation2d.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.nn.dilation2d(input, filter, strides, rates, padding, name=None)` {#dilation2d}
-
-Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.
-
-The `input` tensor has shape `[batch, in_height, in_width, depth]` and the
-`filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each
-input channel is processed independently of the others with its own structuring
-function. The `output` tensor has shape
-`[batch, out_height, out_width, depth]`. The spatial dimensions of the output
-tensor depend on the `padding` algorithm. We currently only support the default
-"NHWC" `data_format`.
-
-In detail, the grayscale morphological 2-D dilation is the max-sum correlation
-(for consistency with `conv2d`, we use unmirrored filters):
-
- output[b, y, x, c] =
- max_{dy, dx} input[b,
- strides[1] * y + rates[1] * dy,
- strides[2] * x + rates[2] * dx,
- c] +
- filter[dy, dx, c]
-
-Max-pooling is a special case when the filter has size equal to the pooling
-kernel size and contains all zeros.
-
-Note on duality: The dilation of `input` by the `filter` is equal to the
-negation of the erosion of `-input` by the reflected `filter`.
-
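-A small sketch of the max-pooling special case noted above (illustrative shapes):
-
-```python
-import tensorflow as tf
-
-x = tf.random_normal([1, 6, 6, 1])
-zeros = tf.zeros([2, 2, 1])          # all-zero structuring function
-dil = tf.nn.dilation2d(x, zeros, strides=[1, 2, 2, 1],
-                       rates=[1, 1, 1, 1], padding="VALID")
-pool = tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
-                      strides=[1, 2, 2, 1], padding="VALID")
-# dil and pool are elementwise equal.
-```
-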
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
- 4-D with shape `[batch, in_height, in_width, depth]`.
-* <b>`filter`</b>: A `Tensor`. Must have the same type as `input`.
- 3-D with shape `[filter_height, filter_width, depth]`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 4`.
- The stride of the sliding window for each dimension of the input
- tensor. Must be: `[1, stride_height, stride_width, 1]`.
-* <b>`rates`</b>: A list of `ints` that has length `>= 4`.
- The input stride for atrous morphological dilation. Must be:
- `[1, rate_height, rate_width, 1]`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- 4-D with shape `[batch, out_height, out_width, depth]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.l2_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.l2_loss.md
deleted file mode 100644
index fd648ca642..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.l2_loss.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.nn.l2_loss(t, name=None)` {#l2_loss}
-
-L2 Loss.
-
-Computes half the L2 norm of a tensor without the `sqrt`:
-
- output = sum(t ** 2) / 2
-
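-For example (a tiny sketch; the values are illustrative):
-
-```python
-import tensorflow as tf
-
-loss = tf.nn.l2_loss(tf.constant([3.0, 4.0]))  # (9 + 16) / 2 = 12.5
-```
-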
-##### Args:
-
-
-* <b>`t`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Typically 2-D, but may have any dimensions.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `t`. 0-D.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.log_poisson_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.log_poisson_loss.md
deleted file mode 100644
index cf5cbe4740..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.log_poisson_loss.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.nn.log_poisson_loss(targets, log_input, compute_full_loss=False, name=None)` {#log_poisson_loss}
-
-Computes log Poisson loss given `log_input`.
-
-Gives the log-likelihood loss between the prediction and the target under the
-assumption that the target has a Poisson distribution.
-Caveat: By default, this is not the exact loss, but the loss minus a
- constant term [log(z!)]. That has no effect for optimization, but
- does not play well with relative loss comparisons. To compute an
- approximation of the log factorial term, specify
- compute_full_loss=True to enable Stirling's Approximation.
-
-For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson
-loss is
-
- -log(exp(-x) * (x^z) / z!)
- = -log(exp(-x) * (x^z)) + log(z!)
- ~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
- [ Note the second term is the Stirling's Approximation for log(z!).
- It is invariant to x and does not affect optimization, though
- important for correct relative loss comparisons. It is only
- computed when compute_full_loss == True. ]
- = x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
- = exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
-
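-A short sketch of the default (partial) loss, with illustrative values:
-
-```python
-import tensorflow as tf
-
-z = tf.constant([1.0, 2.0])           # targets
-c = tf.log(tf.constant([1.0, 2.0]))   # log_input = log(x)
-loss = tf.nn.log_poisson_loss(z, c)   # elementwise exp(c) - z * c
-```
-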
-##### Args:
-
-
-* <b>`targets`</b>: A `Tensor` of the same type and shape as `log_input`.
-* <b>`log_input`</b>: A `Tensor` of type `float32` or `float64`.
-* <b>`compute_full_loss`</b>: whether to compute the full loss. If false, a constant
- term is dropped in favor of more efficient optimization.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same shape as `log_input` with the componentwise
- log Poisson losses.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `log_input` and `targets` do not have the same shape.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool3d.md
deleted file mode 100644
index 960b322c6c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.max_pool3d.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.nn.max_pool3d(input, ksize, strides, padding, name=None)` {#max_pool3d}
-
-Performs 3D max pooling on the input.
-
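-For example (illustrative shapes):
-
-```python
-import tensorflow as tf
-
-v = tf.random_normal([1, 4, 4, 4, 1])  # [batch, depth, rows, cols, channels]
-p = tf.nn.max_pool3d(v, ksize=[1, 2, 2, 2, 1],
-                     strides=[1, 2, 2, 2, 1], padding="VALID")
-# p has shape (1, 2, 2, 2, 1)
-```
-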
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
-* <b>`ksize`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The size of the window for each dimension of
- the input tensor. Must have `ksize[0] = ksize[4] = 1`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The stride of the sliding window for each
- dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. The max pooled output tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.nce_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.nce_loss.md
deleted file mode 100644
index 7cb440b70d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.nce_loss.md
+++ /dev/null
@@ -1,58 +0,0 @@
-### `tf.nn.nce_loss(weights, biases, labels, inputs, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, partition_strategy='mod', name='nce_loss')` {#nce_loss}
-
-Computes and returns the noise-contrastive estimation training loss.
-
-See [Noise-contrastive estimation: A new estimation principle for
-unnormalized statistical
-models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf).
-Also see our [Candidate Sampling Algorithms
-Reference](../../extras/candidate_sampling.pdf)
-
-Note: By default this uses a log-uniform (Zipfian) distribution for sampling,
-so your labels must be sorted in order of decreasing frequency to achieve
-good results. For more details, see
-[log_uniform_candidate_sampler](#log_uniform_candidate_sampler).
-
-Note: In the case where `num_true` > 1, we assign to each target class
-the target probability 1 / `num_true` so that the target probabilities
-sum to 1 per-example.
-
-Note: It would be useful to allow a variable number of target classes per
-example. We hope to provide this functionality in a future release.
-For now, if you have a variable number of target classes, you can pad them
-out to a constant number by either repeating them or by padding
-with an otherwise unused class.
-
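-A minimal sketch (the sizes and variable names below are hypothetical):
-
-```python
-import tensorflow as tf
-
-num_classes, dim, batch_size = 10000, 128, 32
-weights = tf.get_variable("nce_w", [num_classes, dim])
-biases = tf.get_variable("nce_b", [num_classes])
-labels = tf.zeros([batch_size, 1], dtype=tf.int64)   # placeholder targets
-inputs = tf.random_normal([batch_size, dim])
-loss = tf.reduce_mean(tf.nn.nce_loss(weights, biases, labels, inputs,
-                                     num_sampled=64, num_classes=num_classes))
-```
-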
-##### Args:
-
-
-* <b>`weights`</b>: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`
- objects whose concatenation along dimension 0 has shape
- [num_classes, dim]. The (possibly-partitioned) class embeddings.
-* <b>`biases`</b>: A `Tensor` of shape `[num_classes]`. The class biases.
-* <b>`labels`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`inputs`</b>: A `Tensor` of shape `[batch_size, dim]`. The forward
- activations of the input network.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`num_classes`</b>: An `int`. The number of possible classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`sampled_values`</b>: a tuple of (`sampled_candidates`, `true_expected_count`,
- `sampled_expected_count`) returned by a `*_candidate_sampler` function.
- (if None, we default to `log_uniform_candidate_sampler`)
-* <b>`remove_accidental_hits`</b>: A `bool`. Whether to remove "accidental hits"
- where a sampled class equals one of the target classes. If set to
- `True`, this is a "Sampled Logistic" loss instead of NCE, and we are
- learning to generate log-odds instead of log probabilities. See
- our [Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf).
- Default is False.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported.
- Default is `"mod"`. See `tf.nn.embedding_lookup` for more details.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `batch_size` 1-D tensor of per-example NCE losses.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.softplus.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.softplus.md
deleted file mode 100644
index c0faef9687..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.softplus.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.nn.softplus(features, name=None)` {#softplus}
-
-Computes softplus: `log(exp(features) + 1)`.
-
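-For example:
-
-```python
-import tensorflow as tf
-
-y = tf.nn.softplus(tf.constant([-1.0, 0.0, 1.0]))  # log(exp(x) + 1), elementwise
-```
-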
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sparse_softmax_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sparse_softmax_cross_entropy_with_logits.md
deleted file mode 100644
index 0aa696ba2f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.nn.sparse_softmax_cross_entropy_with_logits.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.nn.sparse_softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, name=None)` {#sparse_softmax_cross_entropy_with_logits}
-
-Computes sparse softmax cross entropy between `logits` and `labels`.
-
-Measures the probability error in discrete classification tasks in which the
-classes are mutually exclusive (each entry is in exactly one class). For
-example, each CIFAR-10 image is labeled with one and only one label: an image
-can be a dog or a truck, but not both.
-
-**NOTE:** For this operation, the probability of a given label is considered
-exclusive. That is, soft classes are not allowed, and the `labels` vector
-must provide a single specific index for the true class for each row of
-`logits` (each minibatch entry). For soft softmax classification with
-a probability distribution for each entry, see
-`softmax_cross_entropy_with_logits`.
-
-**WARNING:** This op expects unscaled logits, since it performs a softmax
-on `logits` internally for efficiency. Do not call this op with the
-output of `softmax`, as it will produce incorrect results.
-
-A common use case is to have logits of shape `[batch_size, num_classes]` and
-labels of shape `[batch_size]`. But higher dimensions are supported.
-
-**Note that to avoid confusion, it is required to pass only named arguments to
-this function.**
-
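-A minimal sketch of the common `[batch_size, num_classes]` case
-(illustrative shapes; note the named arguments):
-
-```python
-import tensorflow as tf
-
-logits = tf.random_normal([32, 10])
-labels = tf.random_uniform([32], maxval=10, dtype=tf.int64)
-loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
-                                                      logits=logits)
-# loss has shape [32]: one cross-entropy value per example.
-```
-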
-##### Args:
-
-* <b>`_sentinel`</b>: Used to prevent positional parameters. Internal, do not use.
-
-* <b>`labels`</b>: `Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of
- `labels` and result) and dtype `int32` or `int64`. Each entry in `labels`
- must be an index in `[0, num_classes)`. Other values will raise an
- exception when this op is run on CPU, and return `NaN` for corresponding
- loss and gradient rows on GPU.
-* <b>`logits`</b>: Unscaled log probabilities of shape
- `[d_0, d_1, ..., d_{r-1}, num_classes]` and dtype `float32` or `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same shape as `labels` and of the same type as `logits`
- with the softmax cross entropy loss.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If logits are scalars (need to have rank >= 1) or if the rank
- of the labels is not equal to the rank of the logits minus one.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.placeholder_with_default.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.placeholder_with_default.md
deleted file mode 100644
index 2f3cdd593c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.placeholder_with_default.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.placeholder_with_default(input, shape, name=None)` {#placeholder_with_default}
-
-A placeholder op that passes through `input` when its output is not fed.
-
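-For example (a small sketch; evaluate inside a session):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder_with_default(tf.constant([1, 2, 3]), shape=[None])
-with tf.Session() as sess:
-  print(sess.run(x))                          # [1 2 3] (default used)
-  print(sess.run(x, feed_dict={x: [4, 5]}))   # [4 5]   (fed value wins)
-```
-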
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. The default value to produce when `output` is not fed.
-* <b>`shape`</b>: A `tf.TensorShape` or list of `ints`.
- The (possibly partial) shape of the tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- A placeholder tensor that defaults to `input` if it is not fed.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.python_io.TFRecordCompressionType.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.python_io.TFRecordCompressionType.md
deleted file mode 100644
index 8b9cbe0445..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.python_io.TFRecordCompressionType.md
+++ /dev/null
@@ -1 +0,0 @@
-The type of compression for the record.
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reduce_all.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reduce_all.md
deleted file mode 100644
index 7ce5d55ccc..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reduce_all.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.reduce_all(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_all}
-
-Computes the "logical and" of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-For example:
-
-```python
-# 'x' is [[True, True]
-# [False, False]]
-tf.reduce_all(x) ==> False
-tf.reduce_all(x, 0) ==> [False, False]
-tf.reduce_all(x, 1) ==> [True, False]
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The boolean tensor to reduce.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.all
-@end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reduce_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reduce_mean.md
deleted file mode 100644
index 1c6948ffa3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.reduce_mean.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.reduce_mean(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_mean}
-
-Computes the mean of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-For example:
-
-```python
-# 'x' is [[1., 1.]
-# [2., 2.]]
-tf.reduce_mean(x) ==> 1.5
-tf.reduce_mean(x, 0) ==> [1.5, 1.5]
-tf.reduce_mean(x, 1) ==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.mean
-@end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.segment_max.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.segment_max.md
deleted file mode 100644
index c9d7a28900..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.segment_max.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.segment_max(data, segment_ids, name=None)` {#segment_max}
-
-Computes the maximum along segments of a tensor.
-
-Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation)
-for an explanation of segments.
-
-Computes a tensor such that
-\\(output_i = \max_j(data_j)\\) where `max` is over `j` such
-that `segment_ids[j] == i`.
-
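-For example:
-
-```python
-import tensorflow as tf
-
-data = tf.constant([1, 3, 2, 5, 4])
-ids = tf.constant([0, 0, 0, 1, 1])
-out = tf.segment_max(data, ids)  # => [3, 5]
-```
-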
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentMax.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose size is equal to the size of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.self_adjoint_eigvals.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.self_adjoint_eigvals.md
deleted file mode 100644
index c52a82a49a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.self_adjoint_eigvals.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.self_adjoint_eigvals(tensor, name=None)` {#self_adjoint_eigvals}
-
-Computes the eigenvalues of one or more self-adjoint matrices.
-
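-For example (a trivial diagonal case):
-
-```python
-import tensorflow as tf
-
-m = tf.constant([[2.0, 0.0], [0.0, 3.0]])
-e = tf.self_adjoint_eigvals(m)  # => [2., 3.]
-```
-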
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` of shape `[..., N, N]`.
-* <b>`name`</b>: string, optional name of the operation.
-
-##### Returns:
-
-
-* <b>`e`</b>: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N`
- eigenvalues of `tensor[..., :, :]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_add.md
deleted file mode 100644
index 3a3c88db49..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_add.md
+++ /dev/null
@@ -1,55 +0,0 @@
-### `tf.sparse_add(a, b, thresh=0)` {#sparse_add}
-
-Adds two tensors, at least one of which is a `SparseTensor`.
-
-If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If
-both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order
-of arguments does not matter. Use vanilla `tf.add()` for adding two dense
-`Tensor`s.
-
-The indices of any input `SparseTensor` are assumed ordered in standard
-lexicographic order. If this is not the case, before this step run
-`SparseReorder` to restore index ordering.
-
-If both arguments are sparse, we perform "clipping" as follows. By default,
-if two values sum to zero at some index, the output `SparseTensor` would still
-include that particular location in its index, storing a zero in the
-corresponding value slot. To override this, callers can specify `thresh`,
-indicating that if the sum has a magnitude strictly smaller than `thresh`, its
-corresponding value and index would then not be included. In particular,
-`thresh == 0.0` (default) means everything is kept and actual thresholding
-happens only for a positive `thresh`.
-
-For example, suppose the logical sum of two sparse operands is (densified):
-
- [ 2]
- [.1 0]
- [ 6 -.2]
-
-Then,
-
- * `thresh == 0` (the default): all 5 index/value pairs will be returned.
- * `thresh == 0.11`: only .1 and 0 will vanish, and the remaining three
- index/value pairs will be returned.
- * `thresh == 0.21`: .1, 0, and -.2 will vanish.
-
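-A small sketch of the thresholding behavior (illustrative values):
-
-```python
-import tensorflow as tf
-
-a = tf.SparseTensor(indices=[[0, 1]], values=[2.0], dense_shape=[3, 2])
-b = tf.SparseTensor(indices=[[1, 0]], values=[0.1], dense_shape=[3, 2])
-s = tf.sparse_add(a, b, thresh=0.11)  # the 0.1 entry is dropped
-```
-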
-##### Args:
-
-
-* <b>`a`</b>: The first operand; `SparseTensor` or `Tensor`.
-* <b>`b`</b>: The second operand; `SparseTensor` or `Tensor`. At least one operand
- must be sparse.
-* <b>`thresh`</b>: A 0-D `Tensor`. The magnitude threshold that determines if an
- output value/index pair takes space. Its dtype should match that of the
- values if they are real; if the latter are complex64/complex128, then the
- dtype should be float32/float64, correspondingly.
-
-##### Returns:
-
- A `SparseTensor` or a `Tensor`, representing the sum.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If both `a` and `b` are `Tensor`s. Use `tf.add()` instead.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_to_indicator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_to_indicator.md
deleted file mode 100644
index ede12c08fe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.sparse_to_indicator.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.sparse_to_indicator(sp_input, vocab_size, name=None)` {#sparse_to_indicator}
-
-Converts a `SparseTensor` of ids into a dense bool indicator tensor.
-
-The last dimension of `sp_input.indices` is discarded and replaced with
-the values of `sp_input`. If `sp_input.dense_shape = [D0, D1, ..., Dn, K]`,
-then `output.shape = [D0, D1, ..., Dn, vocab_size]`, where
-
- output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True
-
-and False elsewhere in `output`.
-
-For example, if `sp_input.dense_shape = [2, 3, 4]` with non-empty values:
-
- [0, 0, 0]: 0
- [0, 1, 0]: 10
- [1, 0, 3]: 103
- [1, 1, 2]: 150
- [1, 1, 3]: 149
- [1, 1, 4]: 150
- [1, 2, 1]: 121
-
-and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool
-tensor with False everywhere except at positions
-
- (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150),
- (1, 2, 121).
-
-Note that repeats are allowed in the input SparseTensor.
-This op is useful for converting `SparseTensor`s into dense formats for
-compatibility with ops that expect dense tensors.
-
-The input `SparseTensor` must be in row-major order.
-
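-A compact sketch of the conversion (illustrative values):
-
-```python
-import tensorflow as tf
-
-sp = tf.SparseTensor(indices=[[0, 0], [1, 1]], values=[2, 0],
-                     dense_shape=[2, 2])
-ind = tf.sparse_to_indicator(sp, vocab_size=4)
-# bool tensor of shape [2, 4], True at (0, 2) and (1, 0).
-```
-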
-##### Args:
-
-
-* <b>`sp_input`</b>: A `SparseTensor` with `values` property of type `int32` or
- `int64`.
-* <b>`vocab_size`</b>: A scalar int64 Tensor (or Python int) containing the new size
- of the last dimension, `all(0 <= sp_input.values < vocab_size)`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A dense bool indicator tensor representing the indices with specified value.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.stack.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.stack.md
deleted file mode 100644
index 51f81c1000..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.stack.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.stack(values, axis=0, name='stack')` {#stack}
-
-Stacks a list of rank-`R` tensors into one rank-`(R+1)` tensor.
-
-Packs the list of tensors in `values` into a tensor with rank one higher than
-each tensor in `values`, by packing them along the `axis` dimension.
-Given a list of length `N` of tensors of shape `(A, B, C)`;
-
-if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`.
-if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`.
-Etc.
-
-For example:
-
-```prettyprint
-# 'x' is [1, 4]
-# 'y' is [2, 5]
-# 'z' is [3, 6]
-stack([x, y, z]) => [[1, 4], [2, 5], [3, 6]] # Pack along first dim.
-stack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
-```
-
-This is the opposite of `unstack`. The numpy equivalent is
-
- tf.stack([x, y, z]) = np.asarray([x, y, z])
-
-##### Args:
-
-
-* <b>`values`</b>: A list of `Tensor` objects with the same shape and type.
-* <b>`axis`</b>: An `int`. The axis to stack along. Defaults to the first dimension.
- Supports negative indexes.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`output`</b>: A stacked `Tensor` with the same type as `values`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `axis` is out of the range [-(R+1), R+1).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.string_to_hash_bucket_fast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.string_to_hash_bucket_fast.md
deleted file mode 100644
index e684058326..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.string_to_hash_bucket_fast.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.string_to_hash_bucket_fast(input, num_buckets, name=None)` {#string_to_hash_bucket_fast}
-
-Converts each string in the input Tensor to its hash mod `num_buckets`.
-
-The hash function is deterministic on the content of the string within the
-process and will never change. However, it is not suitable for cryptography.
-This function may be used when CPU time is scarce and inputs are trusted or
-unimportant. There is a risk of adversaries constructing inputs that all hash
-to the same bucket. To prevent this problem, use a strong hash function with
-`tf.string_to_hash_bucket_strong`.
-
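-For example:
-
-```python
-import tensorflow as tf
-
-ids = tf.string_to_hash_bucket_fast(tf.constant(["a", "b"]), num_buckets=10)
-# deterministic int64 bucket ids in [0, 10)
-```
-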
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. The strings to assign a hash bucket.
-* <b>`num_buckets`</b>: An `int` that is `>= 1`. The number of buckets.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
- A `Tensor` of the same shape as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.summary.SummaryDescription.RegisterExtension.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.summary.SummaryDescription.RegisterExtension.md
deleted file mode 100644
index 3cfd7103d7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.summary.SummaryDescription.RegisterExtension.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.tile.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.tile.md
deleted file mode 100644
index 0c31e73c98..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.tile.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.tile(input, multiples, name=None)` {#tile}
-
-Constructs a tensor by tiling a given tensor.
-
-This operation creates a new tensor by replicating `input` `multiples` times.
-The output tensor's i'th dimension has `input.dims(i) * multiples[i]` elements,
-and the values of `input` are replicated `multiples[i]` times along the 'i'th
-dimension. For example, tiling `[a b c d]` by `[2]` produces
-`[a b c d a b c d]`.
-
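-For example:
-
-```python
-import tensorflow as tf
-
-t = tf.constant([[1, 2], [3, 4]])
-tiled = tf.tile(t, [2, 3])  # shape [4, 6]: rows doubled, columns tripled
-```
-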
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. 1-D or higher.
-* <b>`multiples`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D. Length must be the same as the number of dimensions in `input`
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md
deleted file mode 100644
index 09271a91a1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.SingularMonitoredSession.md
+++ /dev/null
@@ -1,131 +0,0 @@
-Session-like object that handles initialization, restoring, and hooks.
-
-Please note that this utility is not recommended for distributed settings.
-For distributed settings, please use `tf.train.MonitoredSession`. The
-differences between `MonitoredSession` and `SingularMonitoredSession` are:
-* `MonitoredSession` handles `AbortedError` for distributed settings,
- but `SingularMonitoredSession` does not.
-* `MonitoredSession` can be created in `chief` or `worker` modes.
- `SingularMonitoredSession` is always created as `chief`.
-* You can access the raw `tf.Session` object used by
- `SingularMonitoredSession`, whereas in MonitoredSession the raw session is
- private. This can be used:
- - To `run` without hooks.
- - To save and restore.
-* All other functionality is identical.
-
-Example usage:
-```python
-saver_hook = CheckpointSaverHook(...)
-summary_hook = SummaryHook(...)
-with SingularMonitoredSession(hooks=[saver_hook, summary_hook]) as sess:
- while not sess.should_stop():
- sess.run(train_op)
-```
-
-Initialization: At creation time the hooked session does the following things
-in the given order:
-
-* calls `hook.begin()` for each given hook
-* finalizes the graph via `scaffold.finalize()`
-* creates the session
-* initializes the model via initialization ops provided by `Scaffold`
-* restores variables if a checkpoint exists
-* launches queue runners
-
-Run: When `run()` is called, the hooked session does the following things:
-
-* calls `hook.before_run()`
-* calls TensorFlow `session.run()` with merged fetches and feed_dict
-* calls `hook.after_run()`
-* returns the result of `session.run()` requested by the user
-
-Exit: When `close()` is called, the hooked session does the following things in order:
-
-* calls `hook.end()`
-* closes the queue runners and the session
-* suppresses the `OutOfRange` error, which indicates that all inputs have been
-  processed, if the `SingularMonitoredSession` is used as a context manager.
-- - -
-
-#### `tf.train.SingularMonitoredSession.__enter__()` {#SingularMonitoredSession.__enter__}
-
-
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.__exit__(exception_type, exception_value, traceback)` {#SingularMonitoredSession.__exit__}
-
-
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.__init__(hooks=None, scaffold=None, master='', config=None, checkpoint_dir=None, stop_grace_period_secs=120)` {#SingularMonitoredSession.__init__}
-
-Creates a SingularMonitoredSession.
-
-##### Args:
-
-
-* <b>`hooks`</b>: An iterable of `SessionRunHook` objects.
-* <b>`scaffold`</b>: A `Scaffold` used for gathering or building supportive ops. If
- not specified a default one is created. It's used to finalize the graph.
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: `ConfigProto` proto used to configure the session.
-* <b>`checkpoint_dir`</b>: A string. Optional path to a directory where to restore
- variables.
-* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
- `close()` has been called.
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.close()` {#SingularMonitoredSession.close}
-
-
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.graph` {#SingularMonitoredSession.graph}
-
-The graph that was launched in this session.
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.raw_session()` {#SingularMonitoredSession.raw_session}
-
-Returns the underlying `tf.Session` object.
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#SingularMonitoredSession.run}
-
-Run ops in the monitored session.
-
-This method is completely compatible with the `tf.Session.run()` method.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as `tf.Session.run()`.
-* <b>`feed_dict`</b>: Same as `tf.Session.run()`.
-* <b>`options`</b>: Same as `tf.Session.run()`.
-* <b>`run_metadata`</b>: Same as `tf.Session.run()`.
-
-##### Returns:
-
- Same as `tf.Session.run()`.
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.should_stop()` {#SingularMonitoredSession.should_stop}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.WorkerSessionCreator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.WorkerSessionCreator.md
deleted file mode 100644
index 9ba1affc6b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.WorkerSessionCreator.md
+++ /dev/null
@@ -1,23 +0,0 @@
-Creates a tf.Session for a worker.
-- - -
-
-#### `tf.train.WorkerSessionCreator.__init__(scaffold=None, master='', config=None)` {#WorkerSessionCreator.__init__}
-
-Initializes a worker session creator.
-
-##### Args:
-
-
-* <b>`scaffold`</b>: A `Scaffold` used for gathering or building supportive ops. If
- not specified a default one is created. It's used to finalize the graph.
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: `ConfigProto` proto used to configure the session.
-
-
-- - -
-
-#### `tf.train.WorkerSessionCreator.create_session()` {#WorkerSessionCreator.create_session}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.match_filenames_once.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.match_filenames_once.md
deleted file mode 100644
index db62d20223..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.match_filenames_once.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.train.match_filenames_once(pattern, name=None)` {#match_filenames_once}
-
-Save the list of files matching `pattern`, so it is only computed once.
-
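-For example (a hypothetical glob; the returned variable must be initialized):
-
-```python
-import tensorflow as tf
-
-files = tf.train.match_filenames_once("./data/*.csv")
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())  # the match is computed once
-  print(sess.run(files))
-```
-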
-##### Args:
-
-
-* <b>`pattern`</b>: A file pattern (glob), or 1D tensor of file patterns.
-* <b>`name`</b>: A name for the operations (optional).
-
-##### Returns:
-
- A variable that is initialized to the list of files matching the pattern(s).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md
deleted file mode 100644
index 477c0e7bd9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.maybe_batch_join.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.train.maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch_join}
-
-Runs a list of tensors to conditionally fill a queue to create batches.
-
-See docstring in `batch_join` for more details.
-
-##### Args:
-
-
-* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
-* <b>`keep_input`</b>: A `bool` Tensor. This tensor controls whether the input is
- added to the queue or not. If it is a scalar and evaluates to `True`, then
- `tensors` are all added to the queue. If it is a vector and `enqueue_many`
- is `True`, then each example is added to the queue only if the
- corresonding value in `keep_input` is `True`. This tensor essentially acts
- as a filtering mechanism.
-* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensors_list` is a single
- example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors_list[i]`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same number and types as
- `tensors_list[i]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors_list`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.update_checkpoint_state.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.update_checkpoint_state.md
deleted file mode 100644
index 68747fc0c7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.update_checkpoint_state.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None)` {#update_checkpoint_state}
-
-Updates the content of the 'checkpoint' file.
-
-This updates the checkpoint file containing a CheckpointState
-proto.
-
-##### Args:
-
-
-* <b>`save_dir`</b>: Directory where the model was saved.
-* <b>`model_checkpoint_path`</b>: The checkpoint file.
-* <b>`all_model_checkpoint_paths`</b>: List of strings. Paths to all not-yet-deleted
- checkpoints, sorted from oldest to newest. If this is a non-empty list,
- the last element must be equal to model_checkpoint_path. These paths
- are also saved in the CheckpointState proto.
-* <b>`latest_filename`</b>: Optional name of the checkpoint file. Defaults to
- 'checkpoint'.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If the save paths conflict.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.write_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.write_graph.md
deleted file mode 100644
index 33e1f1c591..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.train.write_graph.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.train.write_graph(graph_or_graph_def, logdir, name, as_text=True)` {#write_graph}
-
-Writes a graph proto to a file.
-
-The graph is written as a binary proto unless `as_text` is `True`.
-
-```python
-v = tf.Variable(0, name='my_variable')
-sess = tf.Session()
-tf.train.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')
-```
-
-or
-
-```python
-v = tf.Variable(0, name='my_variable')
-sess = tf.Session()
-tf.train.write_graph(sess.graph, '/tmp/my-model', 'train.pbtxt')
-```
-
-##### Args:
-
-
-* <b>`graph_or_graph_def`</b>: A `Graph` or a `GraphDef` protocol buffer.
-* <b>`logdir`</b>: Directory where to write the graph. This can refer to remote
- filesystems, such as Google Cloud Storage (GCS).
-* <b>`name`</b>: Filename for the graph.
-* <b>`as_text`</b>: If `True`, writes the graph as an ASCII proto.
-
-##### Returns:
-
- The path of the output proto file.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.unsorted_segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.unsorted_segment_sum.md
deleted file mode 100644
index c02d39e96a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.unsorted_segment_sum.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None)` {#unsorted_segment_sum}
-
-Computes the sum along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Computes a tensor such that
-`output[i] = sum_{j...} data[j...]` where the sum is over tuples `j...` such
-that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids`
-need not be sorted and need not cover all values in the full
-range of valid values.
-
-If the sum is empty for a given segment ID `i`, `output[i] = 0`.
-
-`num_segments` should equal the number of distinct segment IDs.
-
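-For example:
-
-```python
-import tensorflow as tf
-
-data = tf.constant([1, 2, 3, 4])
-ids = tf.constant([0, 1, 0, 1])
-out = tf.unsorted_segment_sum(data, ids, num_segments=2)  # => [4, 6]
-```
-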
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/UnsortedSegmentSum.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor whose shape is a prefix of `data.shape`.
-* <b>`num_segments`</b>: A `Tensor` of type `int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for the first `segment_ids.rank`
- dimensions, which are replaced with a single dimension which has size
- `num_segments`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.while_loop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.while_loop.md
deleted file mode 100644
index da39478841..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf.while_loop.md
+++ /dev/null
@@ -1,117 +0,0 @@
-### `tf.while_loop(cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#while_loop}
-
-Repeat `body` while the condition `cond` is true.
-
-`cond` is a callable returning a boolean scalar tensor. `body` is a callable
-returning a (possibly nested) tuple, namedtuple or list of tensors of the same
-arity (length and structure) and types as `loop_vars`. `loop_vars` is a
-(possibly nested) tuple, namedtuple or list of tensors that is passed to both
-`cond` and `body`. `cond` and `body` both take as many arguments as there are
-`loop_vars`.
-
-While `cond` evaluates to true, `body` is executed.
-
-In addition to regular Tensors or IndexedSlices, the body may accept and
-return TensorArray objects. The flows of the TensorArray objects will
-be appropriately forwarded between loops and during gradient calculations.
-
-For correctness, `tf.while_loop()` strictly enforces shape invariants for
-the loop variables. A shape invariant is a (possibly partial) shape that
-is unchanged across the iterations of the loop. An error will be raised
-if the shape of a loop variable after an iteration is determined to be more
-general than or incompatible with its shape invariant. For example, a shape
-of [11, None] is more general than a shape of [11, 17], and [11, 21] is not
-compatible with [11, 17]. By default (if the argument `shape_invariants` is
-not specified), it is assumed that the initial shape of each tensor in
-`loop_vars` is the same in every iteration. The `shape_invariants` argument
-allows the caller to specify a less specific shape invariant for each loop
-variable, which is needed if the shape varies between iterations. The
-[`Tensor.set_shape()`](../../api_docs/python/framework.md#Tensor.set_shape)
-function may also be used in the `body` function to indicate that
-the output loop variable has a particular shape. The shape invariant for
-SparseTensor and IndexedSlices are treated specially as follows:
-
-a) If a loop variable is a SparseTensor, the shape invariant must be
-TensorShape([r]) where r is the rank of the dense tensor represented
-by the sparse tensor. It means the shapes of the three tensors of the
-SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here
-is the shape of the SparseTensor.dense_shape property. It must be the shape of
-a vector.
-
-b) If a loop variable is an IndexedSlices, the shape invariant must be
-a shape invariant of the values tensor of the IndexedSlices. It means
-the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]],
-[shape.ndims]).
-
-`while_loop` implements non-strict semantics, enabling multiple iterations
-to run in parallel. The maximum number of parallel iterations can be
-controlled by `parallel_iterations`, which gives users some control over
-memory consumption and execution order. For correct programs, `while_loop`
-should return the same result for any parallel_iterations > 0.
-
-For training, TensorFlow remembers the tensors that are produced in the
-forward inference but needed in back propagation. These tensors can be a
-main source of memory consumption and often cause OOM problems when training
-on GPUs. When the flag swap_memory is true, we swap out these tensors from
-GPU to CPU. This for example allows us to train RNN models with very long
-sequences and large batches.
-
-##### Args:
-
-
-* <b>`cond`</b>: A callable that represents the termination condition of the loop.
-* <b>`body`</b>: A callable that represents the loop body.
-* <b>`loop_vars`</b>: A (possibly nested) tuple, namedtuple or list of numpy array,
- `Tensor`, and `TensorArray` objects.
-* <b>`shape_invariants`</b>: The shape invariants for the loop variables.
-* <b>`parallel_iterations`</b>: The number of iterations allowed to run in parallel.
- It must be a positive integer.
-* <b>`back_prop`</b>: Whether backprop is enabled for this while loop.
-* <b>`swap_memory`</b>: Whether GPU-CPU memory swap is enabled for this loop.
-* <b>`name`</b>: Optional name prefix for the returned tensors.
-
-##### Returns:
-
- The output tensors for the loop variables after the loop. When the length
- of `loop_vars` is 1 this is a Tensor, TensorArray or IndexedSlices and when
- the length of `loop_vars` is greater than 1 it returns a list.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `cond` or `body` is not callable.
-* <b>`ValueError`</b>: if `loop_vars` is empty.
-
-
-Example:
-
- ```python
- i = tf.constant(0)
- c = lambda i: tf.less(i, 10)
- b = lambda i: tf.add(i, 1)
- r = tf.while_loop(c, b, [i])
- ```
-
-Example with nesting and a namedtuple:
-
- ```python
- import collections
- Pair = collections.namedtuple('Pair', 'j, k')
- ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2)))
- c = lambda i, p: i < 10
- b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k)))
- ijk_final = tf.while_loop(c, b, ijk_0)
- ```
-
-Example using shape_invariants:
-
- ```python
- i0 = tf.constant(0)
- m0 = tf.ones([2, 2])
- c = lambda i, m: i < 10
- b = lambda i, m: [i+1, tf.concat([m, m], axis=0)]
- tf.while_loop(
- c, b, loop_vars=[i0, m0],
- shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])
- ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.DebugDumpDir.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.DebugDumpDir.md
deleted file mode 100644
index 4b60562c32..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.DebugDumpDir.md
+++ /dev/null
@@ -1,548 +0,0 @@
-Data set from a debug-dump directory on filesystem.
-
-An instance of `DebugDumpDir` contains all `DebugTensorDatum` instances
-in a tfdbg dump root directory.
-- - -
-
-#### `tf_debug.DebugDumpDir.__init__(dump_root, partition_graphs=None, validate=True)` {#DebugDumpDir.__init__}
-
-`DebugDumpDir` constructor.
-
-##### Args:
-
-
-* <b>`dump_root`</b>: (`str`) path to the dump root directory.
-* <b>`partition_graphs`</b>: A repeated field of GraphDefs representing the
- partition graphs executed by the TensorFlow runtime.
-* <b>`validate`</b>: (`bool`) whether the dump files are to be validated against the
- partition graphs.
-
-##### Raises:
-
-
-* <b>`IOError`</b>: If dump_root does not exist as a directory.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.core_metadata` {#DebugDumpDir.core_metadata}
-
-Metadata about the `Session.run()` call from the core runtime.
-
-Of the three counters available in the return value, `global_step` is
-supplied by the caller of the debugged `Session.run()`, while
-`session_run_count` and `executor_step_count` are determined by the state
-of the core runtime, automatically. For the same fetch list, feed keys and
-debug tensor watch options, the same executor will be used and
-`executor_step_count` should increase by one at a time. However, runs with
-different fetch lists, feed keys and debug tensor watch options that all
-share the same `Session` object can lead to gaps in `session_run_count`.
-
-##### Returns:
-
- If core metadata are loaded, a `namedtuple` with the fields:
- `global_step`: A global step count supplied by the caller of
- `Session.run()`. It is optional to the caller. If the caller did not
- supply this parameter, its value will be -1.
- `session_run_count`: A counter for Run() calls to the underlying
- TensorFlow `Session` object.
- `executor_step_count`: A counter for invocations of a given runtime
- executor. The same executor is re-used for the same fetched tensors,
- target nodes, input feed keys and debug tensor watch options.
- `input_names`: Names of the input (feed) Tensors.
- `output_names`: Names of the output (fetched) Tensors.
- `target_nodes`: Names of the target nodes.
- If the core metadata have not been loaded, `None`.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.debug_watch_keys(node_name)` {#DebugDumpDir.debug_watch_keys}
-
-Get all tensor watch keys of given node according to partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node.
-
-##### Returns:
-
- (`list` of `str`) all debug tensor watch keys. Returns an empty list if
- the node name does not correspond to any debug watch keys.
-
-##### Raises:
-
- `LookupError`: If debug watch information has not been loaded from
- partition graphs yet.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.devices()` {#DebugDumpDir.devices}
-
-Get the list of devices.
-
-##### Returns:
-
- (`list` of `str`) names of the devices.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.dumped_tensor_data` {#DebugDumpDir.dumped_tensor_data}
-
-
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.find(predicate, first_n=0)` {#DebugDumpDir.find}
-
-Find dumped tensor data by a certain predicate.
-
-##### Args:
-
-
-* <b>`predicate`</b>: A callable that takes two input arguments:
-
- ```python
- def predicate(debug_tensor_datum, tensor):
- # returns a bool
- ```
-
- where `debug_tensor_datum` is an instance of `DebugTensorDatum`, which
- carries the metadata, such as the `Tensor`'s node name, output slot
- timestamp, debug op name, etc.; and `tensor` is the dumped tensor value
- as a `numpy.ndarray`.
-
-* <b>`first_n`</b>: (`int`) return only the first n `DebugTensorDatum` instances (in
- time order) for which the predicate returns True. To return all the
- `DebugTensorDatum` instances, let `first_n` be <= 0.
-
-##### Returns:
-
- A list of all `DebugTensorDatum` objects in this `DebugDumpDir` object
- for which predicate returns True, sorted in ascending order of the
- timestamp.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.get_dump_sizes_bytes(node_name, output_slot, debug_op)` {#DebugDumpDir.get_dump_sizes_bytes}
-
-Get the sizes of the dump files for a debug-dumped tensor.
-
-Unit of the file size: byte.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node that the tensor is produced by.
-* <b>`output_slot`</b>: (`int`) output slot index of tensor.
-* <b>`debug_op`</b>: (`str`) name of the debug op.
-
-##### Returns:
-
- (`list` of `int`): list of dump file sizes in bytes.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the tensor watch key does not exist in the debug dump data.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.get_rel_timestamps(node_name, output_slot, debug_op)` {#DebugDumpDir.get_rel_timestamps}
-
-Get the relative timestamps for a debug-dumped tensor.
-
-Relative timestamp means (absolute timestamp - `t0`), where `t0` is the
-absolute timestamp of the first dumped tensor in the dump root. The tensor
-may be dumped multiple times in the dump root directory, so a list of
-relative timestamps (`numpy.ndarray`) is returned.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node that the tensor is produced by.
-* <b>`output_slot`</b>: (`int`) output slot index of tensor.
-* <b>`debug_op`</b>: (`str`) name of the debug op.
-
-##### Returns:
-
- (`list` of `int`) list of relative timestamps.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the tensor watch key does not exist in the debug dump data.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.get_tensor_file_paths(node_name, output_slot, debug_op)` {#DebugDumpDir.get_tensor_file_paths}
-
-Get the file paths from a debug-dumped tensor.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node that the tensor is produced by.
-* <b>`output_slot`</b>: (`int`) output slot index of tensor.
-* <b>`debug_op`</b>: (`str`) name of the debug op.
-
-##### Returns:
-
- List of file path(s) loaded. This is a list because each debugged tensor
- may be dumped multiple times.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the tensor does not exist in the debug-dump data.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.get_tensors(node_name, output_slot, debug_op)` {#DebugDumpDir.get_tensors}
-
-Get the tensor values for a debug-dumped tensor.
-
-The tensor may be dumped multiple times in the dump root directory, so a
-list of tensors (`numpy.ndarray`) is returned.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node that the tensor is produced by.
-* <b>`output_slot`</b>: (`int`) output slot index of tensor.
-* <b>`debug_op`</b>: (`str`) name of the debug op.
-
-##### Returns:
-
- List of tensors (`numpy.ndarray`) loaded from the debug-dump file(s).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the tensor does not exist in the debug-dump data.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.loaded_partition_graphs()` {#DebugDumpDir.loaded_partition_graphs}
-
-Test whether partition graphs have been loaded.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_attributes(node_name)` {#DebugDumpDir.node_attributes}
-
-Get the attributes of a node.
-
-##### Args:
-
-
-* <b>`node_name`</b>: Name of the node in question.
-
-##### Returns:
-
- Attributes of the node.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If no partition graphs have been loaded.
-* <b>`ValueError`</b>: If no node named node_name exists.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_device(node_name)` {#DebugDumpDir.node_device}
-
-Get the device of a node.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node.
-
-##### Returns:
-
- (`str`) name of the device on which the node is placed.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_exists(node_name)` {#DebugDumpDir.node_exists}
-
-Test if a node exists in the partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node to be checked.
-
-##### Returns:
-
- A boolean indicating whether the node exists.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If no partition graphs have been loaded yet.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_inputs(node_name, is_control=False)` {#DebugDumpDir.node_inputs}
-
-Get the inputs of given node according to partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: Name of the node.
-* <b>`is_control`</b>: (`bool`) Whether control inputs, rather than non-control
- inputs, are to be returned.
-
-##### Returns:
-
- (`list` of `str`) inputs to the node, as a list of node names.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_op_type(node_name)` {#DebugDumpDir.node_op_type}
-
-Get the op type of given node.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node.
-
-##### Returns:
-
- (`str`) op type of the node.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node op types have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_recipients(node_name, is_control=False)` {#DebugDumpDir.node_recipients}
-
-Get the recipients of the given node's output according to partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node.
-* <b>`is_control`</b>: (`bool`) whether control outputs, rather than non-control
- outputs, are to be returned.
-
-##### Returns:
-
- (`list` of `str`) all recipients of the node's output, as a list of node names.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_traceback(element_name)` {#DebugDumpDir.node_traceback}
-
-Try to retrieve the Python traceback of the node's construction.
-
-##### Args:
-
-
-* <b>`element_name`</b>: (`str`) Name of a graph element (node or tensor).
-
-##### Returns:
-
-  (list) The traceback list object as returned by the `extract_stack`
-  function of Python's `traceback` module.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If Python graph is not available for traceback lookup.
-* <b>`KeyError`</b>: If the node cannot be found in the Python graph loaded.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.nodes()` {#DebugDumpDir.nodes}
-
-Get a list of all nodes from the partition graphs.
-
-##### Returns:
-
- All nodes' names, as a list of str.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If no partition graphs have been loaded.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.partition_graphs()` {#DebugDumpDir.partition_graphs}
-
-Get the partition graphs.
-
-##### Returns:
-
- Partition graphs as repeated fields of GraphDef.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If no partition graphs have been loaded.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.run_feed_keys_info` {#DebugDumpDir.run_feed_keys_info}
-
-Get a str representation of the feed_dict used in the Session.run() call.
-
-##### Returns:
-
- If the information is available, a `str` obtained from `repr(feed_dict)`.
- If the information is not available, `None`.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.run_fetches_info` {#DebugDumpDir.run_fetches_info}
-
-Get a str representation of the fetches used in the Session.run() call.
-
-##### Returns:
-
- If the information is available, a `str` obtained from `repr(fetches)`.
- If the information is not available, `None`.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.set_python_graph(python_graph)` {#DebugDumpDir.set_python_graph}
-
-Provide Python `Graph` object to the wrapper.
-
-Unlike the partition graphs, which are protobuf `GraphDef` objects, `Graph`
-is a Python object and carries additional information such as the traceback
-of the construction of the nodes in the graph.
-
-##### Args:
-
-
-* <b>`python_graph`</b>: (ops.Graph) The Python Graph object.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.size` {#DebugDumpDir.size}
-
-Total number of dumped tensors in the dump root directory.
-
-##### Returns:
-
- (`int`) total number of dumped tensors in the dump root directory.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.t0` {#DebugDumpDir.t0}
-
-Absolute timestamp of the first dumped tensor.
-
-##### Returns:
-
- (`int`) absolute timestamp of the first dumped tensor, in microseconds.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.transitive_inputs(node_name, include_control=True)` {#DebugDumpDir.transitive_inputs}
-
-Get the transitive inputs of the given node according to partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: Name of the node.
-* <b>`include_control`</b>: Include control inputs (True by default).
-
-##### Returns:
-
- (`list` of `str`) all transitive inputs to the node, as a list of node
- names.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.watch_key_to_data(debug_watch_key)` {#DebugDumpDir.watch_key_to_data}
-
-Get all `DebugTensorDatum` instances corresponding to a debug watch key.
-
-##### Args:
-
-
-* <b>`debug_watch_key`</b>: (`str`) debug watch key.
-
-##### Returns:
-
-  A list of `DebugTensorDatum` instances that correspond to the debug watch
-  key. If the watch key does not exist, returns an empty list.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.DumpingDebugHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.DumpingDebugHook.md
deleted file mode 100644
index 7a2b8936b3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.DumpingDebugHook.md
+++ /dev/null
@@ -1,185 +0,0 @@
-A debugger hook that dumps debug data to filesystem.
-
-Can be used as a monitor/hook for `tf.train.MonitoredSession`s and
-`tf.contrib.learn`'s `Estimator`s and `Experiment`s.
-- - -
-
-#### `tf_debug.DumpingDebugHook.__enter__()` {#DumpingDebugHook.__enter__}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.__exit__(exec_type, exec_value, exec_tb)` {#DumpingDebugHook.__exit__}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.__init__(session_root, watch_fn=None, log_usage=True)` {#DumpingDebugHook.__init__}
-
-Create a hook that dumps debug data to the filesystem.
-
-##### Args:
-
-
-* <b>`session_root`</b>: See doc of
- `dumping_wrapper.DumpingDebugWrapperSession.__init__`.
-* <b>`watch_fn`</b>: See doc of
- `dumping_wrapper.DumpingDebugWrapperSession.__init__`.
-* <b>`log_usage`</b>: (bool) Whether usage is to be logged.
-
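-A minimal usage sketch (the import path and `train_op` below are assumptions,
-not part of this API's documentation):
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-hook = tf_debug.DumpingDebugHook("/tmp/tfdbg_dumps")
-with tf.train.MonitoredSession(hooks=[hook]) as sess:
-  sess.run(train_op)  # Debug data for each run() is dumped under session_root.
-```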
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.after_create_session(session, coord)` {#DumpingDebugHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-differs in two essential ways from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.after_run(run_context, run_values)` {#DumpingDebugHook.after_run}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.before_run(run_context)` {#DumpingDebugHook.before_run}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.begin()` {#DumpingDebugHook.begin}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.close()` {#DumpingDebugHook.close}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.end(session)` {#DumpingDebugHook.end}
-
-Called at the end of session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.graph` {#DumpingDebugHook.graph}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.invoke_node_stepper(node_stepper, restore_variable_values_on_exit=True)` {#DumpingDebugHook.invoke_node_stepper}
-
-See doc of BaseDebugWrapperSession.invoke_node_stepper.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.on_run_end(request)` {#DumpingDebugHook.on_run_end}
-
-See doc of BaseDebugWrapperSession.on_run_end.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.on_run_start(request)` {#DumpingDebugHook.on_run_start}
-
-See doc of BaseDebugWrapperSession.on_run_start.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.on_session_init(request)` {#DumpingDebugHook.on_session_init}
-
-See doc of BaseDebugWrapperSession.on_session_init.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.partial_run(handle, fetches, feed_dict=None)` {#DumpingDebugHook.partial_run}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.partial_run_setup(fetches, feeds=None)` {#DumpingDebugHook.partial_run_setup}
-
-Sets up the feeds and fetches for partial runs in the session.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#DumpingDebugHook.run}
-
-Wrapper around Session.run() that inserts tensor watch options.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as the `fetches` arg to regular `Session.run()`.
-* <b>`feed_dict`</b>: Same as the `feed_dict` arg to regular `Session.run()`.
-* <b>`options`</b>: Same as the `options` arg to regular `Session.run()`.
-* <b>`run_metadata`</b>: Same as the `run_metadata` arg to regular `Session.run()`.
-
-##### Returns:
-
- Simply forwards the output of the wrapped `Session.run()` call.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: On invalid `OnRunStartAction` value.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.sess_str` {#DumpingDebugHook.sess_str}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.session` {#DumpingDebugHook.session}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.has_inf_or_nan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.has_inf_or_nan.md
deleted file mode 100644
index c896055789..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard4/tf_debug.has_inf_or_nan.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf_debug.has_inf_or_nan(datum, tensor)` {#has_inf_or_nan}
-
-A predicate for whether a tensor contains any bad numerical values.
-
-This predicate is common enough to merit definition in this module.
-Bad numerical values include `nan`s and `inf`s.
-The signature of this function follows the requirement of the method
-`DebugDumpDir.find()`.
-
-##### Args:
-
-
-* <b>`datum`</b>: (`DebugTensorDatum`) Datum metadata.
-* <b>`tensor`</b>: (`numpy.ndarray` or None) Value of the tensor. None represents
- an uninitialized tensor.
-
-##### Returns:
-
-  (`bool`) True if and only if the tensor contains any `nan` or `inf` values.
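-
-A minimal usage sketch with `DebugDumpDir.find()` (the dump root is
-hypothetical):
-
-```python
-from tensorflow.python import debug as tf_debug
-
-dump = tf_debug.DebugDumpDir("/tmp/tfdbg_dumps/run-1")
-# DebugTensorDatum instances whose tensor values contain nan or inf.
-bad_datums = dump.find(tf_debug.has_inf_or_nan)
-```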
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md
deleted file mode 100644
index 650139bf1e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.OpError.md
+++ /dev/null
@@ -1,66 +0,0 @@
-A generic error that is raised when TensorFlow execution fails.
-
-Whenever possible, the session will raise a more specific subclass
-of `OpError` from the `tf.errors` module.
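-
-A typical pattern is to catch a specific subclass first and fall back to the
-generic `OpError` (`sess` and `fetch` below are assumed to exist):
-
-```python
-try:
-  sess.run(fetch)
-except tf.errors.InvalidArgumentError as e:  # A specific subclass.
-  print(e.message)
-except tf.OpError as e:  # Generic fallback.
-  print(e.error_code, e.message)
-```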
-- - -
-
-#### `tf.OpError.__init__(node_def, op, message, error_code)` {#OpError.__init__}
-
-Creates a new `OpError` indicating that a particular op failed.
-
-##### Args:
-
-
-* <b>`node_def`</b>: The `node_def_pb2.NodeDef` proto representing the op that
- failed, if known; otherwise None.
-* <b>`op`</b>: The `ops.Operation` that failed, if known; otherwise None.
-* <b>`message`</b>: The message string describing the failure.
-* <b>`error_code`</b>: The `error_codes_pb2.Code` describing the error.
-
-
-- - -
-
-#### `tf.OpError.__str__()` {#OpError.__str__}
-
-
-
-
-- - -
-
-#### `tf.OpError.error_code` {#OpError.error_code}
-
-The integer error code that describes the error.
-
-
-- - -
-
-#### `tf.OpError.message` {#OpError.message}
-
-The error message that describes the error.
-
-
-- - -
-
-#### `tf.OpError.node_def` {#OpError.node_def}
-
-The `NodeDef` proto representing the op that failed.
-
-
-- - -
-
-#### `tf.OpError.op` {#OpError.op}
-
-The operation that failed, if known.
-
-*N.B.* If the failed op was synthesized at runtime, e.g. a `Send`
-or `Recv` op, there will be no corresponding
-[`Operation`](../../api_docs/python/framework.md#Operation)
-object. In that case, this will return `None`, and you should
-instead use the [`OpError.node_def`](#OpError.node_def) to
-discover information about the op.
-
-##### Returns:
-
- The `Operation` that failed, or None.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md
deleted file mode 100644
index 04cf93cec1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.RandomShuffleQueue.md
+++ /dev/null
@@ -1,312 +0,0 @@
-A queue implementation that dequeues elements in a random order.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-- - -
-
-#### `tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')` {#RandomShuffleQueue.__init__}
-
-Create a queue that dequeues elements in a random order.
-
-A `RandomShuffleQueue` has bounded capacity; supports multiple
-concurrent producers and consumers; and provides exactly-once
-delivery.
-
-A `RandomShuffleQueue` holds a list of up to `capacity`
-elements. Each element is a fixed-length tuple of tensors whose
-dtypes are described by `dtypes`, and whose shapes are optionally
-described by the `shapes` argument.
-
-If the `shapes` argument is specified, each component of a queue
-element must have the respective fixed shape. If it is
-unspecified, different queue elements may have different shapes,
-but the use of `dequeue_many` is disallowed.
-
-The `min_after_dequeue` argument allows the caller to specify a
-minimum number of elements that will remain in the queue after a
-`dequeue` or `dequeue_many` operation completes, to ensure a
-minimum level of mixing of elements. This invariant is maintained
-by blocking those operations until sufficient elements have been
-enqueued. The `min_after_dequeue` argument is ignored after the
-queue has been closed.
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`min_after_dequeue`</b>: An integer (described above).
-* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
- the number of tensors in each queue element.
-* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects
- with the same length as `dtypes`, or `None`.
-* <b>`names`</b>: (Optional.) A list of string naming the components in the queue
- with the same length as `dtypes`, or `None`. If specified the dequeue
- methods return a dictionary with the names as keys.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
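-For example, a minimal sketch of a scalar `float32` queue:
-
-```python
-q = tf.RandomShuffleQueue(capacity=10, min_after_dequeue=2,
-                          dtypes=[tf.float32], shapes=[[]])
-enqueue = q.enqueue_many([[1., 2., 3., 4., 5.]])
-sample = q.dequeue()  # Dequeues one element, chosen at random.
-
-with tf.Session() as sess:
-  sess.run(enqueue)
-  print(sess.run(sample))  # e.g. 3.0
-```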
-
-- - -
-
-#### `tf.RandomShuffleQueue.close(cancel_pending_enqueues=False, name=None)` {#RandomShuffleQueue.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.dequeue(name=None)` {#RandomShuffleQueue.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.dequeue_many(n, name=None)` {#RandomShuffleQueue.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.dequeue_up_to(n, name=None)` {#RandomShuffleQueue.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.dtypes` {#RandomShuffleQueue.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.enqueue(vals, name=None)` {#RandomShuffleQueue.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.enqueue_many(vals, name=None)` {#RandomShuffleQueue.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.from_list(index, queues)` {#RandomShuffleQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.name` {#RandomShuffleQueue.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.names` {#RandomShuffleQueue.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.queue_ref` {#RandomShuffleQueue.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.shapes` {#RandomShuffleQueue.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.size(name=None)` {#RandomShuffleQueue.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.add.md
deleted file mode 100644
index da82da6076..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.add.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.add(x, y, name=None)` {#add}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
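-
-For example, a broadcasting sketch:
-
-```python
-x = tf.constant([[1, 2], [3, 4]])
-y = tf.constant([10, 20])
-tf.add(x, y)  # ==> [[11, 22], [13, 24]]; `y` is broadcast across rows.
-```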
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.asin.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.asin.md
deleted file mode 100644
index 64ec024b4c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.asin.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.asin(x, name=None)` {#asin}
-
-Computes asin of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
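-
-For example:
-
-```python
-x = tf.constant([0.0, 0.5, 1.0])
-tf.asin(x)  # ==> [0., 0.5236, 1.5708] approximately; asin(1) = pi/2.
-```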
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_greater.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_greater.md
deleted file mode 100644
index 7020081952..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_greater.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.assert_greater(x, y, data=None, summarize=None, message=None, name=None)` {#assert_greater}
-
-Assert the condition `x > y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_greater(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] > y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_greater".
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x > y` is False.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_integer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_integer.md
deleted file mode 100644
index b0cb7d2dbb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.assert_integer.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.assert_integer(x, message=None, name=None)` {#assert_integer}
-
-Assert that `x` is of integer dtype.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_integer(x)]):
- output = tf.reduce_sum(x)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` whose basetype is integer and is not quantized.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_integer".
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x.dtype` is not a non-quantized integer type.
-
-##### Returns:
-
- A `no_op` that does nothing. Type can be determined statically.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.boolean_mask.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.boolean_mask.md
deleted file mode 100644
index 2f6c39a700..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.boolean_mask.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.boolean_mask(tensor, mask, name='boolean_mask')` {#boolean_mask}
-
-Apply boolean mask to tensor. Numpy equivalent is `tensor[mask]`.
-
-```python
-# 1-D example
-tensor = [0, 1, 2, 3]
-mask = np.array([True, False, True, False])
-boolean_mask(tensor, mask) ==> [0, 2]
-```
-
-In general, `0 < dim(mask) = K <= dim(tensor)`, and `mask`'s shape must match
-the first K dimensions of `tensor`'s shape. We then have:
- `boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd]`
-where `(i1,...,iK)` is the ith `True` entry of `mask` (row-major order).
-
-##### Args:
-
-
-* <b>`tensor`</b>: N-D tensor.
-* <b>`mask`</b>: K-D boolean tensor, K <= N and K must be known statically.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- (N-K+1)-dimensional tensor populated by entries in `tensor` corresponding
- to `True` values in `mask`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If shapes do not conform.
-
-
-##### Examples:
-
-```python
-# 2-D example
-tensor = [[1, 2], [3, 4], [5, 6]]
-mask = np.array([True, False, True])
-boolean_mask(tensor, mask) ==> [[1, 2], [5, 6]]
-```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.broadcast_dynamic_shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.broadcast_dynamic_shape.md
deleted file mode 100644
index 5dc534473a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.broadcast_dynamic_shape.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.broadcast_dynamic_shape(shape_x, shape_y)` {#broadcast_dynamic_shape}
-
-Returns the broadcasted dynamic shape between `shape_x` and `shape_y`.
-
-##### Args:
-
-
-* <b>`shape_x`</b>: A rank 1 integer `Tensor`, representing the shape of x.
-* <b>`shape_y`</b>: A rank 1 integer `Tensor`, representing the shape of y.
-
-##### Returns:
-
- A rank 1 integer `Tensor` representing the broadcasted shape.
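-
-For example, a minimal sketch:
-
-```python
-shape_x = tf.constant([2, 1, 3])
-shape_y = tf.constant([1, 4, 3])
-tf.broadcast_dynamic_shape(shape_x, shape_y)  # ==> [2, 4, 3]
-```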
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.cast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.cast.md
deleted file mode 100644
index 9571f87afe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.cast.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.cast(x, dtype, name=None)` {#cast}
-
-Casts a tensor to a new type.
-
-The operation casts `x` (in case of `Tensor`) or `x.values`
-(in case of `SparseTensor`) to `dtype`.
-
-For example:
-
-```python
-# tensor `a` is [1.8, 2.2], dtype=tf.float32
-tf.cast(a, tf.int32) ==> [1, 2] # dtype=tf.int32
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`dtype`</b>: The destination type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to the `dtype`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_global_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_global_norm.md
deleted file mode 100644
index a40f621bf4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_global_norm.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None)` {#clip_by_global_norm}
-
-Clips values of multiple tensors by the ratio of the sum of their norms.
-
-Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`,
-this operation returns a list of clipped tensors `list_clipped`
-and the global norm (`global_norm`) of all tensors in `t_list`. Optionally,
-if you've already computed the global norm for `t_list`, you can specify
-the global norm with `use_norm`.
-
-To perform the clipping, the values `t_list[i]` are set to:
-
- t_list[i] * clip_norm / max(global_norm, clip_norm)
-
-where:
-
- global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))
-
-If `clip_norm > global_norm` then the entries in `t_list` remain as they are,
-otherwise they're all shrunk by the global ratio.
-
-Any entries of `t_list` that are `None` are ignored.
-
-This is the correct way to perform gradient clipping (for example, see
-[Pascanu et al., 2012](http://arxiv.org/abs/1211.5063)
-([pdf](http://arxiv.org/pdf/1211.5063.pdf))).
-
-However, it is slower than `clip_by_norm()` because all the parameters must be
-ready before the clipping operation can be performed.
-
-##### Args:
-
-
-* <b>`t_list`</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
-* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. The clipping ratio.
-* <b>`use_norm`</b>: A 0-D (scalar) `Tensor` of type `float` (optional). The global
- norm to use. If not provided, `global_norm()` is used to compute the norm.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`list_clipped`</b>: A list of `Tensors` of the same type as `t_list`.
-* <b>`global_norm`</b>: A 0-D (scalar) `Tensor` representing the global norm.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `t_list` is not a sequence.
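-
-A typical gradient-clipping sketch (`loss` and `optimizer` are assumed to
-already exist):
-
-```python
-params = tf.trainable_variables()
-grads = tf.gradients(loss, params)
-clipped_grads, global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)
-train_op = optimizer.apply_gradients(zip(clipped_grads, params))
-```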
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_norm.md
deleted file mode 100644
index 22a642aed9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.clip_by_norm.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.clip_by_norm(t, clip_norm, axes=None, name=None)` {#clip_by_norm}
-
-Clips tensor values to a maximum L2-norm.
-
-Given a tensor `t`, and a maximum clip value `clip_norm`, this operation
-normalizes `t` so that its L2-norm is less than or equal to `clip_norm`,
-along the dimensions given in `axes`. Specifically, in the default case
-where all dimensions are used for calculation, if the L2-norm of `t` is
-already less than or equal to `clip_norm`, then `t` is not modified. If
-the L2-norm is greater than `clip_norm`, then this operation returns a
-tensor of the same type and shape as `t` with its values set to:
-
-`t * clip_norm / l2norm(t)`
-
-In this case, the L2-norm of the output tensor is `clip_norm`.
-
-As another example, if `t` is a matrix and `axes == [1]`, then each row
-of the output will have L2-norm equal to `clip_norm`. If `axes == [0]`
-instead, each column of the output will be clipped.
-
-This operation is typically used to clip gradients before applying them with
-an optimizer.
-
-##### Args:
-
-
-* <b>`t`</b>: A `Tensor`.
-* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
-* <b>`axes`</b>: A 1-D (vector) `Tensor` of type int32 containing the dimensions
- to use for computing the L2-norm. If `None` (the default), uses all
- dimensions.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A clipped `Tensor`.
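-
-For example:
-
-```python
-t = tf.constant([[3., 4.]])  # L2-norm is 5.
-tf.clip_by_norm(t, 1.0)  # ==> [[0.6, 0.8]]; the result has L2-norm 1.
-```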
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.container.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.container.md
deleted file mode 100644
index 44221cd098..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.container.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.container(container_name)` {#container}
-
-Wrapper for `Graph.container()` using the default graph.
-
-##### Args:
-
-
-* <b>`container_name`</b>: The container string to use in the context.
-
-##### Returns:
-
- A context manager that specifies the default container to use for newly
- created stateful ops.
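-
-For example:
-
-```python
-with tf.container("experiment0"):
-  v = tf.Variable(0, name="v")  # Stateful ops created here use "experiment0".
-```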
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.bayesflow.stochastic_graph.surrogate_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.bayesflow.stochastic_graph.surrogate_loss.md
deleted file mode 100644
index 0928e5d001..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.bayesflow.stochastic_graph.surrogate_loss.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.contrib.bayesflow.stochastic_graph.surrogate_loss(sample_losses, stochastic_tensors=None, name='SurrogateLoss')` {#surrogate_loss}
-
-Surrogate loss for stochastic graphs.
-
-This function will call `loss_fn` on each `StochasticTensor`
-upstream of `sample_losses`, passing the losses that each tensor influenced.
-
-Note that currently `surrogate_loss` does not work with `StochasticTensor`s
-instantiated in `while_loop`s or other control structures.
-
-##### Args:
-
-
-* <b>`sample_losses`</b>: a list or tuple of final losses. Each loss should be per
- example in the batch (and possibly per sample); that is, it should have
- dimensionality of 1 or greater. All losses should have the same shape.
-* <b>`stochastic_tensors`</b>: a list of `StochasticTensor`s to add loss terms for.
- If None, defaults to all `StochasticTensor`s in the graph upstream of
- the `Tensor`s in `sample_losses`.
-* <b>`name`</b>: the name to prepend to created ops.
-
-##### Returns:
-
- `Tensor` loss, which is the sum of `sample_losses` and the
- `loss_fn`s returned by the `StochasticTensor`s.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `sample_losses` is not a list or tuple, or if its elements
- are not `Tensor`s.
-* <b>`ValueError`</b>: if any loss in `sample_losses` does not have dimensionality 1
- or greater.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.crf.crf_log_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.crf.crf_log_norm.md
deleted file mode 100644
index 830a38940f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.crf.crf_log_norm.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.contrib.crf.crf_log_norm(inputs, sequence_lengths, transition_params)` {#crf_log_norm}
-
-Computes the normalization for a CRF.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A [batch_size, max_seq_len, num_tags] tensor of unary potentials
- to use as input to the CRF layer.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`transition_params`</b>: A [num_tags, num_tags] transition matrix.
-
-##### Returns:
-
-
-* <b>`log_norm`</b>: A [batch_size] vector of normalizers for a CRF.
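-
-A shape-only sketch (all sizes below are hypothetical):
-
-```python
-# Batch of 8 sequences, max length 10, 5 tags.
-unary_potentials = tf.random_normal([8, 10, 5])
-seq_lens = tf.fill([8], 10)
-transitions = tf.random_normal([5, 5])
-log_norm = tf.contrib.crf.crf_log_norm(unary_potentials, seq_lens, transitions)
-# log_norm has shape [8].
-```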
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Affine.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Affine.md
deleted file mode 100644
index 09cbf17110..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Affine.md
+++ /dev/null
@@ -1,399 +0,0 @@
-Compute `Y = g(X; shift, scale) = scale @ X + shift`.
-
-Here `scale = c * I + diag(D1) + tril(L) + V @ diag(D2) @ V.T`.
-
-In TF parlance, the `scale` term is logically equivalent to:
-
-```python
-scale = (
- scale_identity_multiplier * tf.diag(tf.ones(d)) +
- tf.diag(scale_diag) +
- scale_tril +
- scale_perturb_factor @ diag(scale_perturb_diag) @
-    tf.transpose(scale_perturb_factor)
-)
-```
-
-The `scale` term is applied without necessarily materializing constituent
-matrices, i.e., the matmul is [matrix-free](
-https://en.wikipedia.org/wiki/Matrix-free_methods) when possible.
-
-Examples:
-
-```python
-# Y = X
-b = Affine()
-
-# Y = X + shift
-b = Affine(shift=[1., 2, 3])
-
-# Y = 2 * I @ X.T + shift
-b = Affine(shift=[1., 2, 3],
- scale_identity_multiplier=2.)
-
-# Y = tf.diag(d1) @ X.T + shift
-b = Affine(shift=[1., 2, 3],
- scale_diag=[-1., 2, 1]) # Implicitly 3x3.
-
-# Y = (I + v * v.T) @ X.T + shift
-b = Affine(shift=[1., 2, 3],
- scale_perturb_factor=[[1., 0],
- [0, 1],
- [1, 1]])
-
-# Y = (diag(d1) + v * diag(d2) * v.T) @ X.T + shift
-b = Affine(shift=[1., 2, 3],
- scale_diag=[1., 3, 3], # Implicitly 3x3.
- scale_perturb_diag=[2., 1], # Implicitly 2x2.
- scale_perturb_factor=[[1., 0],
- [0, 1],
- [1, 1]])
-
-```
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.__init__(shift=None, scale_identity_multiplier=None, scale_diag=None, scale_tril=None, scale_perturb_factor=None, scale_perturb_diag=None, event_ndims=1, validate_args=False, name='affine')` {#Affine.__init__}
-
-Instantiates the `Affine` bijector.
-
-This `Bijector` is initialized with `shift` `Tensor` and `scale` arguments,
-giving the forward operation:
-
-```none
-Y = g(X) = scale @ X + shift
-```
-
-where the `scale` term is logically equivalent to:
-
-```python
-scale = (
- scale_identity_multiplier * tf.diag(tf.ones(d)) +
- tf.diag(scale_diag) +
- scale_tril +
- scale_perturb_factor @ diag(scale_perturb_diag) @
-    tf.transpose(scale_perturb_factor)
-)
-```
-
-If none of `scale_identity_multiplier`, `scale_diag`, or `scale_tril` are
-specified then `scale += IdentityMatrix`. Otherwise specifying a
-`scale` argument has the semantics of `scale += Expand(arg)`, i.e.,
-`scale_diag != None` means `scale += tf.diag(scale_diag)`.
-
-##### Args:
-
-
-* <b>`shift`</b>: Floating-point `Tensor`. If this is set to `None`, no shift is
- applied.
-* <b>`scale_identity_multiplier`</b>: Floating-point rank-0 `Tensor` representing
-  a scaling of the identity matrix.
-  When `scale_identity_multiplier = scale_diag = scale_tril = None` then
-  `scale += IdentityMatrix`; otherwise, if this argument is `None`, no scaled
-  identity matrix is added to `scale`.
-* <b>`scale_diag`</b>: Floating-point `Tensor` representing the diagonal matrix.
- `scale_diag` has shape [N1, N2, ... k], which represents a k x k
- diagonal matrix.
- When `None` no diagonal term is added to `scale`.
-* <b>`scale_tril`</b>: Floating-point `Tensor` representing the lower triangular
-  matrix. `scale_tril` has shape [N1, N2, ... k, k], which represents a k x k
-  lower triangular matrix.
- When `None` no `scale_tril` term is added to `scale`.
- The upper triangular elements above the diagonal are ignored.
-* <b>`scale_perturb_factor`</b>: Floating-point `Tensor` representing factor matrix
- with last two dimensions of shape `(k, r)`. When `None`, no rank-r
- update is added to `scale`.
-* <b>`scale_perturb_diag`</b>: Floating-point `Tensor` representing the diagonal
- matrix. `scale_perturb_diag` has shape [N1, N2, ... r], which
- represents an `r x r` diagonal matrix. When `None` low rank updates will
- take the form `scale_perturb_factor * scale_perturb_factor.T`.
-* <b>`event_ndims`</b>: Scalar `int32` `Tensor` indicating the number of dimensions
- associated with a particular draw from the distribution. Must be 0 or 1.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `perturb_diag` is specified but not `perturb_factor`.
-* <b>`TypeError`</b>: if `shift` has different `dtype` from `scale` arguments.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.dtype` {#Affine.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.event_ndims` {#Affine.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.forward(x, name='forward')` {#Affine.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.forward_event_shape(input_shape)` {#Affine.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Affine.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Affine.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.graph_parents` {#Affine.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse(y, name='inverse')` {#Affine.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Affine.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse_event_shape(output_shape)` {#Affine.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Affine.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Affine.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.is_constant_jacobian` {#Affine.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.name` {#Affine.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.scale` {#Affine.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + shift`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.shift` {#Affine.shift}
-
-The `shift` `Tensor` in `Y = scale @ X + shift`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Affine.validate_args` {#Affine.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Chain.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Chain.md
deleted file mode 100644
index 98a4130981..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Chain.md
+++ /dev/null
@@ -1,324 +0,0 @@
-Bijector which applies a sequence of bijectors.
-
-Example Use:
-
-```python
-chain = Chain([Exp(), Softplus()], name="one_plus_exp")
-```
-
-Results in:
-
-* Forward:
-
- ```python
- exp = Exp()
- softplus = Softplus()
- Chain([exp, softplus]).forward(x)
- = exp.forward(softplus.forward(x))
- = tf.exp(tf.log(1. + tf.exp(x)))
- = 1. + tf.exp(x)
- ```
-
-* Inverse:
-
- ```python
- exp = Exp()
- softplus = Softplus()
- Chain([exp, softplus]).inverse(y)
- = softplus.inverse(exp.inverse(y))
- = tf.log(tf.exp(tf.log(y)) - 1.)
- = tf.log(y - 1.)
- ```
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.__init__(bijectors=(), validate_args=False, name=None)` {#Chain.__init__}
-
-Instantiates `Chain` bijector.
-
-##### Args:
-
-
-* <b>`bijectors`</b>: Python list of bijector instances. An empty list makes this
- bijector equivalent to the `Identity` bijector.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str`, name given to ops managed by this object.
-  Defaults to a name composed from the chained bijectors' names, e.g.,
-  `Chain([Exp(), Softplus()]).name == "chain_of_exp_of_softplus"`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if bijectors have different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.bijectors` {#Chain.bijectors}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.dtype` {#Chain.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.event_ndims` {#Chain.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.forward(x, name='forward')` {#Chain.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.forward_event_shape(input_shape)` {#Chain.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Chain.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Chain.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.graph_parents` {#Chain.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse(y, name='inverse')` {#Chain.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Chain.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse_event_shape(output_shape)` {#Chain.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Chain.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Chain.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.is_constant_jacobian` {#Chain.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.name` {#Chain.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Chain.validate_args` {#Chain.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Exp.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Exp.md
deleted file mode 100644
index 9fde10ec22..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.distributions.bijector.Exp.md
+++ /dev/null
@@ -1,305 +0,0 @@
-Compute `Y = g(X) = exp(X)`.
-
-Example Use:
-
-```python
-# Create the Y=g(X)=exp(X) transform which works only on Tensors with 1
-# batch ndim and 2 event ndims (i.e., vector of matrices).
-exp = Exp(event_ndims=2)
-x = [[[1., 2],
- [3, 4]],
- [[5, 6],
- [7, 8]]]
-exp(x) == exp.forward(x)
-log(x) == exp.inverse(x)
-```
-
-Note: the exp(.) is applied element-wise but the Jacobian is a reduction
-over the event space.
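-
-For example, a minimal sketch (assuming the TF 1.x contrib API documented
-here): with `event_ndims=2` the log-det Jacobian reduces each `2 x 2` event
-to a single scalar per batch member.
-
-```python
-import tensorflow as tf
-
-exp = tf.contrib.distributions.bijector.Exp(event_ndims=2)
-x = tf.constant([[[1., 2.], [3., 4.]],
-                 [[5., 6.], [7., 8.]]])
-# For element-wise exp, log|det J| is the sum of x over the event dims:
-fldj = exp.forward_log_det_jacobian(x)  # == tf.reduce_sum(x, axis=[-2, -1])
-with tf.Session() as sess:
-    print(sess.run(fldj))  # [10., 26.]
-```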
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.__init__(event_ndims=0, validate_args=False, name='exp')` {#Exp.__init__}
-
-Instantiates the `Exp` bijector.
-
-##### Args:
-
-
-* <b>`event_ndims`</b>: Scalar `int32` `Tensor` indicating the number of dimensions
- associated with a particular draw from the distribution.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.dtype` {#Exp.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.event_ndims` {#Exp.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.forward(x, name='forward')` {#Exp.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.forward_event_shape(input_shape)` {#Exp.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Exp.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Exp.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.graph_parents` {#Exp.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse(y, name='inverse')` {#Exp.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Exp.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse_event_shape(output_shape)` {#Exp.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Exp.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Exp.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
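-As a hedged illustration (a scalar `Exp` with the default `event_ndims=0` is
-assumed):
-
-```python
-exp = tf.contrib.distributions.bijector.Exp()
-y = tf.constant([1., 2., 4.])
-ildj = exp.inverse_log_det_jacobian(y)               # -log(y)
-fldj = exp.forward_log_det_jacobian(exp.inverse(y))  # +log(y)
-# ildj == -fldj, matching the note above.
-```
-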
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.is_constant_jacobian` {#Exp.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.name` {#Exp.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.power` {#Exp.power}
-
-The `c` in: `Y = g(X) = (1 + X * c)**(1 / c)`. `Exp` corresponds to the
-`c -> 0` limit of this power transform, which recovers `Y = exp(X)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Exp.validate_args` {#Exp.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.arg_scoped_arguments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.arg_scoped_arguments.md
deleted file mode 100644
index 507c28206d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.arg_scoped_arguments.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.contrib.framework.arg_scoped_arguments(func)` {#arg_scoped_arguments}
-
-Returns the list of kwargs that `arg_scope` can set for `func`.
-
-##### Args:
-
-
-* <b>`func`</b>: function which has been decorated with @add_arg_scope.
-
-##### Returns:
-
- a list of kwargs names.
-
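-A hedged usage sketch (the layer below is hypothetical):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.framework import add_arg_scope, arg_scoped_arguments
-
-@add_arg_scope
-def my_layer(inputs, num_outputs, activation_fn=tf.nn.relu):
-    pass  # hypothetical layer body
-
-# Argument names that arg_scope may override for this function,
-# e.g. ['inputs', 'num_outputs', 'activation_fn']:
-print(arg_scoped_arguments(my_layer))
-```
-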
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.get_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.get_variables.md
deleted file mode 100644
index e74c25d4a4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.get_variables.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.contrib.framework.get_variables(scope=None, suffix=None, collection='variables')` {#get_variables}
-
-Gets the list of variables, filtered by scope and/or suffix.
-
-##### Args:
-
-
-* <b>`scope`</b>: an optional scope for filtering the variables to return. Can be a
- variable scope or a string.
-* <b>`suffix`</b>: an optional suffix for filtering the variables to return.
-* <b>`collection`</b>: the collection to search in. Defaults to
- `GraphKeys.GLOBAL_VARIABLES`.
-
-##### Returns:
-
- a list of variables in collection with scope and suffix.
-
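-A short sketch of typical filtering (scope and suffix names are arbitrary):
-
-```python
-import tensorflow as tf
-
-with tf.variable_scope("tower_0"):
-    w = tf.get_variable("weights", shape=[3, 3])
-
-# All variables created under the "tower_0" scope:
-tower_vars = tf.contrib.framework.get_variables("tower_0")
-# All variables whose name ends with "weights", in any scope:
-weight_vars = tf.contrib.framework.get_variables(suffix="weights")
-```
-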
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.list_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.list_variables.md
deleted file mode 100644
index fc8cceb6b1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.list_variables.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.contrib.framework.list_variables(checkpoint_dir)` {#list_variables}
-
-Returns the list of all variables in the latest checkpoint.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory with checkpoints file or path to checkpoint.
-
-##### Returns:
-
- List of tuples `(name, shape)`.
-
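-For example, to inspect a checkpoint without building a graph (the path is
-hypothetical):
-
-```python
-for name, shape in tf.contrib.framework.list_variables("/tmp/my_model"):
-    print(name, shape)
-```
-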
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.variable.md
deleted file mode 100644
index 79081d4e9f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.framework.variable.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.contrib.framework.variable(*args, **kwargs)` {#variable}
-
-Gets an existing variable with these parameters or creates a new one.
-
-##### Args:
-
-
-* <b>`name`</b>: the name of the new or existing variable.
-* <b>`shape`</b>: shape of the new or existing variable.
-* <b>`dtype`</b>: type of the new or existing variable (defaults to `DT_FLOAT`).
-* <b>`initializer`</b>: initializer for the variable if one is created.
-* <b>`regularizer`</b>: a (Tensor -> Tensor or None) function; the result of
- applying it on a newly created variable will be added to the collection
- GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
-* <b>`trainable`</b>: If `True` also add the variable to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`collections`</b>: A list of collection names to which the Variable will be added.
-  If None, it defaults to `tf.GraphKeys.GLOBAL_VARIABLES`.
-* <b>`caching_device`</b>: Optional device string or function describing where the
- Variable should be cached for reading. Defaults to the Variable's
- device.
-* <b>`device`</b>: Optional device to place the variable. It can be a string or a
- function that is called to get the device for the variable.
-* <b>`partitioner`</b>: Optional callable that accepts a fully defined `TensorShape`
- and dtype of the `Variable` to be created, and returns a list of
- partitions for each axis (currently only one axis can be partitioned).
-* <b>`custom_getter`</b>: Callable that allows overwriting the internal
- get_variable method and has to have the same signature.
-
-##### Returns:
-
- The created or existing variable.
-
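-A hedged sketch combining several of the arguments above:
-
-```python
-weights = tf.contrib.framework.variable(
-    "weights", shape=[784, 10],
-    initializer=tf.truncated_normal_initializer(stddev=0.1),
-    regularizer=tf.contrib.layers.l2_regularizer(1e-4),  # adds to REGULARIZATION_LOSSES
-    device="/cpu:0")
-```
-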
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.bypass.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.bypass.md
deleted file mode 100644
index 987a242d90..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.bypass.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.graph_editor.bypass(sgv)` {#bypass}
-
-Bypass the given subgraph by connecting its inputs to its outputs.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be bypassed. This argument is converted to a
-  subgraph using the same rules as the function subgraph.make_view.
- Note that sgv is modified in place.
-
-##### Returns:
-
- A tuple `(sgv, detached_inputs)` where:
- `sgv` is a new subgraph view of the bypassed subgraph;
- `detached_inputs` is a list of the created input placeholders.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
-  the same rules as the function subgraph.make_view.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.can_be_regex.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.can_be_regex.md
deleted file mode 100644
index 212faafdc4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.can_be_regex.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.graph_editor.can_be_regex(obj)` {#can_be_regex}
-
-Return True if obj can be turned into a regular expression.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.detach_control_outputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.detach_control_outputs.md
deleted file mode 100644
index 4488755c9b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.detach_control_outputs.md
+++ /dev/null
@@ -1,11 +0,0 @@
-### `tf.contrib.graph_editor.detach_control_outputs(sgv, control_outputs)` {#detach_control_outputs}
-
-Detach all the external control outputs of the subgraph sgv.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
-* <b>`control_outputs`</b>: a util.ControlOutputs instance.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.get_walks_intersection_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.get_walks_intersection_ops.md
deleted file mode 100644
index 355b6301f8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.get_walks_intersection_ops.md
+++ /dev/null
@@ -1,39 +0,0 @@
-### `tf.contrib.graph_editor.get_walks_intersection_ops(forward_seed_ops, backward_seed_ops, forward_inclusive=True, backward_inclusive=True, within_ops=None, control_inputs=False, control_outputs=None, control_ios=None)` {#get_walks_intersection_ops}
-
-Return the intersection of a forward and a backward walk.
-
-##### Args:
-
-
-* <b>`forward_seed_ops`</b>: an iterable of operations from which the forward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the consumers of those tensors.
-* <b>`backward_seed_ops`</b>: an iterable of operations from which the backward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the generators of those tensors.
-* <b>`forward_inclusive`</b>: if True the given forward_seed_ops are also part of the
- resulting set.
-* <b>`backward_inclusive`</b>: if True the given backward_seed_ops are also part of the
- resulting set.
-* <b>`within_ops`</b>: an iterable of tf.Operation within which the search is
- restricted. If within_ops is None, the search is performed within
- the whole graph.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of util.ControlOutputs or None. If not None,
- control outputs are enabled.
-* <b>`control_ios`</b>: An instance of util.ControlOutputs or None. If not None, both
-  control inputs and control outputs are enabled. This is equivalent to setting
- control_inputs to True and control_outputs to the util.ControlOutputs
- instance.
-
-##### Returns:
-
- A Python set of all the tf.Operation in the intersection of a forward and a
- backward walk.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `forward_seed_ops` or `backward_seed_ops` or `within_ops`
- cannot be converted to a list of `tf.Operation`.
-
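-A minimal sketch: the intersection keeps exactly the ops lying on a path
-between the two seed sets.
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-a = tf.constant(1.0, name="a")
-b = tf.square(a, name="b")
-c = tf.sqrt(b, name="c")
-d = tf.constant(2.0, name="d")  # disconnected from the a -> c path
-
-# Forward walk from `a` intersected with backward walk from `c`:
-ops = ge.get_walks_intersection_ops([a.op], [c.op])
-# `ops` contains the a, b and c ops; `d` is excluded.
-```
-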
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.make_regex.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.make_regex.md
deleted file mode 100644
index e0aaae10b7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.make_regex.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.graph_editor.make_regex(obj)` {#make_regex}
-
-Return a compiled regular expression.
-
-##### Args:
-
-
-* <b>`obj`</b>: a string or a regular expression.
-
-##### Returns:
-
- A compiled regular expression.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if obj could not be converted to a regular expression.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.remove_control_inputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.remove_control_inputs.md
deleted file mode 100644
index 59b3630485..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.graph_editor.remove_control_inputs.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.graph_editor.remove_control_inputs(op, cops)` {#remove_control_inputs}
-
-Remove the control inputs `cops` from `op`.
-
-Warning: this function is directly manipulating the internals of the
-`tf.Graph`.
-
-##### Args:
-
-
-* <b>`op`</b>: a `tf.Operation` from which to remove the control inputs.
-* <b>`cops`</b>: an object convertible to a list of `tf.Operation`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if op is not a `tf.Operation`.
-* <b>`ValueError`</b>: if any cop in cops is not a control input of op.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.convolution2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.convolution2d.md
deleted file mode 100644
index 40141a83f6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.convolution2d.md
+++ /dev/null
@@ -1,75 +0,0 @@
-### `tf.contrib.layers.convolution2d(*args, **kwargs)` {#convolution2d}
-
-Adds an N-D convolution followed by an optional batch_norm layer.
-
-It is required that 1 <= N <= 3.
-
-`convolution` creates a variable called `weights`, representing the
-convolutional kernel, that is convolved (actually cross-correlated) with the
-`inputs` to produce a `Tensor` of activations. If a `normalizer_fn` is
-provided (such as `batch_norm`), it is then applied. Otherwise, if
-`normalizer_fn` is None and a `biases_initializer` is provided then a `biases`
-variable would be created and added to the activations. Finally, if
-`activation_fn` is not `None`, it is applied to the activations as well.
-
-Performs atrous convolution with input stride/dilation rate equal to `rate`
-if a value > 1 for any dimension of `rate` is specified. In this case
-`stride` values != 1 are not supported.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A Tensor of rank N+2 of shape
- `[batch_size] + input_spatial_shape + [in_channels]` if data_format does
- not start with "NC" (default), or
- `[batch_size, in_channels] + input_spatial_shape` if data_format starts
- with "NC".
-* <b>`num_outputs`</b>: Integer, the number of output filters.
-* <b>`kernel_size`</b>: A sequence of N positive integers specifying the spatial
-  dimensions of the filters. Can be a single integer to specify the same
- value for all spatial dimensions.
-* <b>`stride`</b>: A sequence of N positive integers specifying the stride at which to
- compute output. Can be a single integer to specify the same value for all
- spatial dimensions. Specifying any `stride` value != 1 is incompatible
- with specifying any `rate` value != 1.
-* <b>`padding`</b>: One of `"VALID"` or `"SAME"`.
-* <b>`data_format`</b>: A string or None. Specifies whether the channel dimension of
- the `input` and output is the last dimension (default, or if `data_format`
- does not start with "NC"), or the second dimension (if `data_format`
- starts with "NC"). For N=1, the valid values are "NWC" (default) and
- "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For
- N=3, currently the only valid value is "NDHWC".
-* <b>`rate`</b>: A sequence of N positive integers specifying the dilation rate to use
-  for atrous convolution. Can be a single integer to specify the same
- value for all spatial dimensions. Specifying any `rate` value != 1 is
- incompatible with specifying any `stride` value != 1.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None, meaning no normalizer function is used.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
-  able to reuse the layer, `scope` must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collection per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A tensor representing the output of the operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `data_format` is invalid.
-* <b>`ValueError`</b>: If both `rate` and `stride` are not uniformly 1.
-
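-A hedged usage sketch (`images` is assumed to be an `NHWC` image batch):
-
-```python
-net = tf.contrib.layers.convolution2d(
-    images, num_outputs=64, kernel_size=[3, 3], stride=1, padding='SAME',
-    activation_fn=tf.nn.relu,
-    normalizer_fn=tf.contrib.layers.batch_norm,  # replaces the biases term
-    scope='conv1')
-```
-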
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.fully_connected.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.fully_connected.md
deleted file mode 100644
index 846a09e3bb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.fully_connected.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.contrib.layers.fully_connected(*args, **kwargs)` {#fully_connected}
-
-Adds a fully connected layer.
-
-`fully_connected` creates a variable called `weights`, representing a fully
-connected weight matrix, which is multiplied by the `inputs` to produce a
-`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
-`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is
-None and a `biases_initializer` is provided then a `biases` variable would be
-created and added to the hidden units. Finally, if `activation_fn` is not `None`,
-it is applied to the hidden units as well.
-
-Note that if `inputs` has a rank greater than 2, then `inputs` is flattened
-prior to the initial matrix multiply by `weights`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor of at least rank 2 and static value for the last dimension;
- i.e. `[batch_size, depth]`, `[None, None, None, channels]`.
-* <b>`num_outputs`</b>: Integer or long, the number of output units in the layer.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None, meaning no normalizer function is used.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
-  able to reuse the layer, `scope` must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collections per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- The tensor variable representing the result of the series of operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `inputs` has rank less than 2 or if its last dimension is not set.
-
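-A minimal sketch of stacking two layers (shapes are illustrative):
-
-```python
-x = tf.placeholder(tf.float32, [None, 784])
-h = tf.contrib.layers.fully_connected(x, 256, scope='fc1')  # ReLU by default
-logits = tf.contrib.layers.fully_connected(
-    h, 10, activation_fn=None, scope='logits')  # linear output layer
-```
-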
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.legacy_fully_connected.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.legacy_fully_connected.md
deleted file mode 100644
index f10993af30..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.legacy_fully_connected.md
+++ /dev/null
@@ -1,73 +0,0 @@
-### `tf.contrib.layers.legacy_fully_connected(x, num_output_units, activation_fn=None, weight_init=_initializer, bias_init=Zeros(), name=None, weight_collections=('weights',), bias_collections=('biases',), output_collections=('activations',), trainable=True, weight_regularizer=None, bias_regularizer=None)` {#legacy_fully_connected}
-
-Adds the parameters for a fully connected layer and returns the output.
-
-A fully connected layer is generally defined as a matrix multiply:
-`y = f(w * x + b)` where `f` is given by `activation_fn`. If
-`activation_fn` is `None`, the result of `y = w * x + b` is
-returned.
-
-If `x` has shape [\\\(\\text{dim}_0, \\text{dim}_1, ..., \\text{dim}_n\\\)]
-with more than 2 dimensions (\\\(n > 1\\\)), then we repeat the matrix
-multiply along the first dimensions. The result r is a tensor of shape
-[\\\(\\text{dim}_0, ..., \\text{dim}_{n-1},\\\) `num_output_units`],
-where \\\( r_{i_0, ..., i_{n-1}, k} =
-\\sum_{0 \\leq j < \\text{dim}_n} x_{i_0, ..., i_{n-1}, j} \\cdot w_{j, k}\\\).
-This is accomplished by reshaping `x` to 2-D
-[\\\(\\text{dim}_0 \\cdot ... \\cdot \\text{dim}_{n-1}, \\text{dim}_n\\\)]
-before the matrix multiply and afterwards reshaping it to
-[\\\(\\text{dim}_0, ..., \\text{dim}_{n-1},\\\) `num_output_units`].
-
-This op creates `w` and optionally `b`. Bias (`b`) can be disabled by setting
-`bias_init` to `None`.
-
-The variable creation is compatible with `tf.variable_scope` and so can be
-reused with `tf.variable_scope` or `tf.make_template`.
-
-Most of the details of variable creation can be controlled by specifying the
-initializers (`weight_init` and `bias_init`) and in which collections to place
-the created variables (`weight_collections` and `bias_collections`; note that
-the variables are always added to the `VARIABLES` collection). The output of
-the layer can be placed in custom collections using `output_collections`.
-The collections arguments default to `WEIGHTS`, `BIASES` and `ACTIVATIONS`,
-respectively.
-
-A per layer regularization can be specified by setting `weight_regularizer`
-and `bias_regularizer`, which are applied to the weights and biases
-respectively, and whose output is added to the `REGULARIZATION_LOSSES`
-collection.
-
-##### Args:
-
-
-* <b>`x`</b>: The input `Tensor`.
-* <b>`num_output_units`</b>: The size of the output.
-* <b>`activation_fn`</b>: Activation function. Default is None, which skips it
-  and maintains a linear activation.
-* <b>`weight_init`</b>: An optional weight initialization, defaults to
- `xavier_initializer`.
-* <b>`bias_init`</b>: An initializer for the bias, defaults to 0. Set to `None` in
- order to disable bias.
-* <b>`name`</b>: The name for this operation is used to name operations and to find
- variables. If specified it must be unique for this scope, otherwise a
- unique name starting with "fully_connected" will be created. See
- `tf.variable_scope` for details.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`bias_collections`</b>: List of graph collections to which biases are added.
-* <b>`output_collections`</b>: List of graph collections to which outputs are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`weight_regularizer`</b>: A regularizer like the result of
- `l1_regularizer` or `l2_regularizer`. Used for weights.
-* <b>`bias_regularizer`</b>: A regularizer like the result of
- `l1_regularizer` or `l2_regularizer`. Used for biases.
-
-##### Returns:
-
- The output of the fully connected layer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If x has rank less than 2 or if its last dimension is not set.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.multi_class_target.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.multi_class_target.md
deleted file mode 100644
index a1ef504e4e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.multi_class_target.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.contrib.layers.multi_class_target(*args, **kwargs)` {#multi_class_target}
-
-Creates a _TargetColumn for multi class single label classification. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-11-12.
-Instructions for updating:
-This file will be removed after the deprecation date. Please switch to third_party/tensorflow/contrib/learn/python/learn/estimators/head.py
-
-The target column uses softmax cross entropy loss.
-
-##### Args:
-
-
-* <b>`n_classes`</b>: Integer, number of classes, must be >= 2
-* <b>`label_name`</b>: String, name of the key in label dict. Can be `None` if
-  the label is a tensor (single headed models).
-* <b>`weight_column_name`</b>: A string defining feature column name representing
-  weights. It is used to down-weight or boost examples during training. It
- will be multiplied by the loss of the example.
-
-##### Returns:
-
- An instance of _MultiClassTargetColumn.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `n_classes` is < 2.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.one_hot_encoding.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.one_hot_encoding.md
deleted file mode 100644
index 7cc66041ea..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.one_hot_encoding.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.layers.one_hot_encoding(*args, **kwargs)` {#one_hot_encoding}
-
-Transform numeric labels into onehot_labels using `tf.one_hot`.
-
-##### Args:
-
-
-* <b>`labels`</b>: [batch_size] target labels.
-* <b>`num_classes`</b>: Total number of classes.
-* <b>`on_value`</b>: A scalar defining the on-value.
-* <b>`off_value`</b>: A scalar defining the off-value.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`scope`</b>: Optional scope for name_scope.
-
-##### Returns:
-
- One-hot encoding of the labels.
-
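-For example:
-
-```python
-labels = tf.constant([0, 2, 1])
-onehot = tf.contrib.layers.one_hot_encoding(labels, num_classes=3)
-# [[1., 0., 0.],
-#  [0., 0., 1.],
-#  [0., 1., 0.]]
-```
-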
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.scattered_embedding_column.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.scattered_embedding_column.md
deleted file mode 100644
index b905120ec4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.scattered_embedding_column.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.contrib.layers.scattered_embedding_column(column_name, size, dimension, hash_key, combiner='mean', initializer=None)` {#scattered_embedding_column}
-
-Creates an embedding column of a sparse feature using parameter hashing.
-
-The i-th embedding component of a value v is found by retrieving an
-embedding weight whose index is a fingerprint of the pair (v,i).
-
-An embedding column with sparse_column_with_hash_bucket such as
-
-```python
-embedding_column(
-    sparse_column_with_hash_bucket(column_name, bucket_size),
-    dimension)
-```
-
-could be replaced by
-
-```python
-scattered_embedding_column(
-    column_name, size=bucket_size * dimension, dimension=dimension,
-    hash_key=tf.contrib.layers.SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY)
-```
-
-for the same number of embedding parameters, with a hopefully reduced impact
-of collisions at the cost of slower training.
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining sparse column name.
-* <b>`size`</b>: An integer specifying the number of parameters in the embedding layer.
-* <b>`dimension`</b>: An integer specifying dimension of the embedding.
-* <b>`hash_key`</b>: Specify the hash_key that will be used by the `FingerprintCat64`
- function to combine the crosses fingerprints on SparseFeatureCrossOp.
-* <b>`combiner`</b>: A string specifying how to reduce if there are multiple entries
- in a single row. Currently "mean", "sqrtn" and "sum" are supported, with
- "mean" the default. "sqrtn" often achieves good accuracy, in particular
-  with bag-of-words columns. Each of these can be thought of as an
-  example-level normalization on the column:
- * "sum": do not normalize features in the column
- * "mean": do l1 normalization on features in the column
- * "sqrtn": do l2 normalization on features in the column
-  For more information, see `tf.embedding_lookup_sparse`.
-* <b>`initializer`</b>: A variable initializer function to be used in embedding
- variable initialization. If not specified, defaults to
- `tf.truncated_normal_initializer` with mean 0 and standard deviation 0.1.
-
-##### Returns:
-
- A _ScatteredEmbeddingColumn.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if dimension or size is not a positive integer; or if combiner
- is not supported.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.sum_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.sum_regularizer.md
deleted file mode 100644
index 4ea32c2135..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.sum_regularizer.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.contrib.layers.sum_regularizer(regularizer_list, scope=None)` {#sum_regularizer}
-
-Returns a function that applies the sum of multiple regularizers.
-
-##### Args:
-
-
-* <b>`regularizer_list`</b>: A list of regularizers to apply.
-* <b>`scope`</b>: An optional scope name.
-
-##### Returns:
-
- A function with signature `sum_reg(weights)` that applies the
- sum of all the input regularizers.
-
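-A short sketch combining an L1 and an L2 penalty:
-
-```python
-reg = tf.contrib.layers.sum_regularizer(
-    [tf.contrib.layers.l1_regularizer(scale=1e-5),
-     tf.contrib.layers.l2_regularizer(scale=1e-4)])
-w = tf.get_variable("w", shape=[10, 10])
-penalty = reg(w)  # scalar: l1 penalty + l2 penalty
-```
-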
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.summarize_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.summarize_collection.md
deleted file mode 100644
index b1b5f56056..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.layers.summarize_collection.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.layers.summarize_collection(collection, name_filter=None, summarizer=summarize_tensor)` {#summarize_collection}
-
-Summarize a graph collection of tensors, possibly filtered by name.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.BaseMonitor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.BaseMonitor.md
deleted file mode 100644
index bea2cc6516..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.BaseMonitor.md
+++ /dev/null
@@ -1,187 +0,0 @@
-Base class for Monitors.
-
-Defines basic interfaces of Monitors.
-Monitors can either be run on all workers or, more commonly, restricted
-to run exclusively on the elected chief worker.
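-
-A hedged sketch of a custom subclass (monitors are deprecated in favor of
-`tf.train.SessionRunHook`, as noted below):
-
-```python
-class StepPrinter(tf.contrib.learn.monitors.BaseMonitor):
-    """Hypothetical monitor that prints the global step every 100 steps."""
-
-    def step_end(self, step, output):
-        super(StepPrinter, self).step_end(step, output)
-        if step % 100 == 0:
-            print("finished step", step)
-        return False  # returning True would request early stopping
-```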
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.__init__(*args, **kwargs)` {#BaseMonitor.__init__}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-05.
-Instructions for updating:
-Monitors are deprecated. Please use tf.train.SessionRunHook.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.begin(max_steps=None)` {#BaseMonitor.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.end(session=None)` {#BaseMonitor.end}
-
-Callback at the end of training/evaluation.
-
-##### Args:
-
-
-* <b>`session`</b>: A `tf.Session` object that can be used to run ops.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.epoch_begin(epoch)` {#BaseMonitor.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.epoch_end(epoch)` {#BaseMonitor.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.post_step(step, session)` {#BaseMonitor.post_step}
-
-Callback after the step is finished.
-
-Called after step_end and receives a session to perform extra session.run
-calls. It is called even if a failure occurred during the step.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, global step of the model.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.run_on_all_workers` {#BaseMonitor.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.set_estimator(estimator)` {#BaseMonitor.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.step_begin(step)` {#BaseMonitor.step_begin}
-
-Callback before training step begins.
-
-You may use this callback to request evaluation of additional tensors
-in the graph.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- List of `Tensor` objects or string tensor names to be run.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a step, or `step` < 0, or
- `step` > `max_steps`.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.BaseMonitor.step_end(step, output)` {#BaseMonitor.step_end}
-
-Callback after training step finished.
-
-This callback provides access to the tensors/ops evaluated at this step,
-including the additional tensors for which evaluation was requested in
-`step_begin`.
-
-In addition, the callback has the opportunity to stop training by returning
-`True`. This is useful for early stopping, for example.
-
-Note that this method is not called if the call to `Session.run()` that
-followed the last call to `step_begin()` failed.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-  the values resulting from running those tensors. Values may be either
-  scalars, for scalar tensors, or NumPy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`. True if training should stop.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a step, or `step` number does not match.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.CheckpointSaver.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.CheckpointSaver.md
deleted file mode 100644
index 310b927376..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.CheckpointSaver.md
+++ /dev/null
@@ -1,146 +0,0 @@
-Saves checkpoints every N steps or N seconds.
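-
-A hedged sketch (`estimator` and `train_input_fn` are assumed to exist;
-exactly one of `save_secs`/`save_steps` must be set):
-
-```python
-saver_monitor = tf.contrib.learn.monitors.CheckpointSaver(
-    checkpoint_dir="/tmp/train_logs", save_secs=600)
-estimator.fit(input_fn=train_input_fn, steps=10000,
-              monitors=[saver_monitor])
-```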
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.__init__(checkpoint_dir, save_secs=None, save_steps=None, saver=None, checkpoint_basename='model.ckpt', scaffold=None)` {#CheckpointSaver.__init__}
-
-Initialize CheckpointSaver monitor.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: `str`, base directory for the checkpoint files.
-* <b>`save_secs`</b>: `int`, save every N secs.
-* <b>`save_steps`</b>: `int`, save every N steps.
-* <b>`saver`</b>: `Saver` object, used for saving.
-* <b>`checkpoint_basename`</b>: `str`, base name for the checkpoint files.
-* <b>`scaffold`</b>: `Scaffold`, use to get saver object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `save_steps` and `save_secs` are not `None`.
-* <b>`ValueError`</b>: If both `save_steps` and `save_secs` are `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.begin(max_steps=None)` {#CheckpointSaver.begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.end(session=None)` {#CheckpointSaver.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.epoch_begin(epoch)` {#CheckpointSaver.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.epoch_end(epoch)` {#CheckpointSaver.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.post_step(step, session)` {#CheckpointSaver.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.run_on_all_workers` {#CheckpointSaver.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.set_estimator(estimator)` {#CheckpointSaver.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.step_begin(step)` {#CheckpointSaver.step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.CheckpointSaver.step_end(step, output)` {#CheckpointSaver.step_end}
-
-Callback after training step finished.
-
-This callback provides access to the tensors/ops evaluated at this step,
-including the additional tensors for which evaluation was requested in
-`step_begin`.
-
-In addition, the callback has the opportunity to stop training by returning
-`True`. This is useful for early stopping, for example.
-
-Note that this method is not called if the call to `Session.run()` that
-followed the last call to `step_begin()` failed.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-  the values resulting from running those tensors. Values may be either
-  scalars, for scalar tensors, or NumPy `array`s, for non-scalar tensors.
-
-##### Returns:
-
- `bool`. True if training should stop.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun a step, or `step` number does not match.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.RunHookAdapterForMonitors.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.RunHookAdapterForMonitors.md
deleted file mode 100644
index 4f1e3dcc94..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.monitors.RunHookAdapterForMonitors.md
+++ /dev/null
@@ -1,57 +0,0 @@
-Wraps monitors into a SessionRunHook.
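-
-A hedged sketch (`my_monitor` and `train_op` are assumed to exist):
-
-```python
-hook = tf.contrib.learn.monitors.RunHookAdapterForMonitors([my_monitor])
-with tf.train.MonitoredTrainingSession(hooks=[hook]) as sess:
-    while not sess.should_stop():
-        sess.run(train_op)
-```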
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.__init__(monitors)` {#RunHookAdapterForMonitors.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.after_create_session(session, coord)` {#RunHookAdapterForMonitors.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.after_run(run_context, run_values)` {#RunHookAdapterForMonitors.after_run}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.before_run(run_context)` {#RunHookAdapterForMonitors.before_run}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.begin()` {#RunHookAdapterForMonitors.begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.RunHookAdapterForMonitors.end(session)` {#RunHookAdapterForMonitors.end}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.run_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.run_n.md
deleted file mode 100644
index 69abb2628d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.learn.run_n.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.learn.run_n(*args, **kwargs)` {#run_n}
-
-Run `output_dict` tensors `n` times, with the same `feed_dict` each run. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-##### Args:
-
-
-* <b>`output_dict`</b>: A `dict` mapping string names to tensors to run. Must all be
- from the same graph.
-* <b>`feed_dict`</b>: `dict` of input values to feed each run.
-* <b>`restore_checkpoint_path`</b>: A string containing the path to a checkpoint to
- restore.
-* <b>`n`</b>: Number of times to repeat.
-
-##### Returns:
-
- A list of `n` `dict` objects, each containing values read from `output_dict`
- tensors.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.legacy_seq2seq.sequence_loss_by_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.legacy_seq2seq.sequence_loss_by_example.md
deleted file mode 100644
index a7b6c99c9a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.legacy_seq2seq.sequence_loss_by_example.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.contrib.legacy_seq2seq.sequence_loss_by_example(logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None)` {#sequence_loss_by_example}
-
-Weighted cross-entropy loss for a sequence of logits (per example).
-
-##### Args:
-
-
-* <b>`logits`</b>: List of 2D Tensors of shape [batch_size x num_decoder_symbols].
-* <b>`targets`</b>: List of 1D batch-sized int32 Tensors of the same length as logits.
-* <b>`weights`</b>: List of 1D batch-sized float-Tensors of the same length as logits.
-* <b>`average_across_timesteps`</b>: If set, divide the returned cost by the total
- label weight.
-* <b>`softmax_loss_function`</b>: Function (labels-batch, inputs-batch) -> loss-batch
- to be used instead of the standard softmax (the default if this is None).
-* <b>`name`</b>: Optional name for this operation, default: "sequence_loss_by_example".
-
-##### Returns:
-
- 1D batch-sized float Tensor: The log-perplexity for each sequence.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If len(logits) is different from len(targets) or len(weights).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.losses.sigmoid_cross_entropy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.losses.sigmoid_cross_entropy.md
deleted file mode 100644
index 9917476e28..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.losses.sigmoid_cross_entropy.md
+++ /dev/null
@@ -1,39 +0,0 @@
-### `tf.contrib.losses.sigmoid_cross_entropy(*args, **kwargs)` {#sigmoid_cross_entropy}
-
-Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.sigmoid_cross_entropy instead. Note that the order of the predictions and labels arguments was changed.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided,
-then the loss is simply scaled by the given value. If `weights` is a
-tensor of size [`batch_size`], then the loss weights apply to each
-corresponding sample.
-
-If `label_smoothing` is nonzero, smooth the labels towards 1/2:
-
- new_multiclass_labels = multiclass_labels * (1 - label_smoothing)
- + 0.5 * label_smoothing
-
-##### Args:
-
-
-* <b>`logits`</b>: [batch_size, num_classes] logits outputs of the network.
-* <b>`multi_class_labels`</b>: [batch_size, num_classes] labels in (0, 1).
-* <b>`weights`</b>: Coefficients for the loss. The tensor must be a scalar, a tensor of
- shape [batch_size] or shape [batch_size, num_classes].
-* <b>`label_smoothing`</b>: If greater than 0 then smooth the labels.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `logits` doesn't match that of
- `multi_class_labels` or if the shape of `weights` is invalid, or if
- `weights` is None.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.accuracy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.accuracy.md
deleted file mode 100644
index 71a82cb248..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.accuracy.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.contrib.metrics.accuracy(predictions, labels, weights=None)` {#accuracy}
-
-Computes the percentage of times that `predictions` matches `labels`.
-
-##### Args:
-
-
-* <b>`predictions`</b>: the predicted values, a `Tensor` whose dtype and shape
-  match `labels`.
-* <b>`labels`</b>: the ground truth values, a `Tensor` of any shape and
- bool, integer, or string dtype.
-* <b>`weights`</b>: None or `Tensor` of float values to reweight the accuracy.
-
-##### Returns:
-
- Accuracy `Tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if dtypes don't match or
- if dtype is not bool, integer, or string.
-
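-For example:
-
-```python
-predictions = tf.constant([1, 0, 1, 1])
-labels = tf.constant([1, 0, 0, 1])
-acc = tf.contrib.metrics.accuracy(predictions, labels)
-with tf.Session() as sess:
-    print(sess.run(acc))  # 0.75
-```
-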
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_false_positives.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_false_positives.md
deleted file mode 100644
index d3f748fec7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_false_positives.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.metrics.streaming_false_positives(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_false_positives}
-
-Sum the weights of false positives.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of arbitrary dimensions. Will
- be cast to `bool`.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose dimensions must match
- `predictions`. Will be cast to `bool`.
-* <b>`weights`</b>: Optional `Tensor` whose rank is either 0, or the same rank as
- `labels`, and must be broadcastable to `labels` (i.e., all dimensions
- must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value_tensor`</b>: A `Tensor` representing the current value of the metric.
-* <b>`update_op`</b>: An operation that accumulates the error from a batch of data.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_percentage_less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_percentage_less.md
deleted file mode 100644
index 829c15ee81..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_percentage_less.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.contrib.metrics.streaming_percentage_less(values, threshold, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_percentage_less}
-
-Computes the percentage of values less than the given threshold.
-
-The `streaming_percentage_less` function creates two local variables,
-`total` and `count` that are used to compute the percentage of `values` that
-fall below `threshold`. This rate is weighted by `weights`, and it is
-ultimately returned as `percentage` which is an idempotent operation that
-simply divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`percentage`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`values`</b>: A numeric `Tensor` of arbitrary size.
-* <b>`threshold`</b>: A scalar threshold.
-* <b>`weights`</b>: An optional `Tensor` whose shape is broadcastable to `values`.
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`percentage`</b>: A `Tensor` representing the current mean, the value of `total`
- divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match `values`,
- or if either `metrics_collections` or `updates_collections` are not a list
- or tuple.
-
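-A minimal sketch of the value/update pattern shared by the streaming metrics
-(the data values are illustrative):
-
-```python
-values = tf.placeholder(tf.float32, [None])
-percentage, update_op = tf.contrib.metrics.streaming_percentage_less(
-    values, threshold=0.5)
-with tf.Session() as sess:
-    sess.run(tf.local_variables_initializer())  # metric variables are local
-    for batch in [[0.1, 0.9], [0.3, 0.4]]:
-        sess.run(update_op, feed_dict={values: batch})
-    print(sess.run(percentage))  # 0.75
-```
-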
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_recall_at_thresholds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_recall_at_thresholds.md
deleted file mode 100644
index 10c3c2a29c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.metrics.streaming_recall_at_thresholds.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.contrib.metrics.streaming_recall_at_thresholds(predictions, labels, thresholds, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall_at_thresholds}
-
-Computes various recall values for different `thresholds` on `predictions`.
-
-The `streaming_recall_at_thresholds` function creates four local variables,
-`true_positives`, `true_negatives`, `false_positives` and `false_negatives`
-for various values of thresholds. `recall[i]` is defined as the total weight
-of values in `predictions` above `thresholds[i]` whose corresponding entry in
-`labels` is `True`, divided by the total weight of `True` values in `labels`
-(`true_positives[i] / (true_positives[i] + false_negatives[i])`).
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `recall`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
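-
-A minimal usage sketch (assuming `predictions`, `labels`, `sess`, and
-`num_batches` are defined elsewhere; these names are illustrative):
-
-```python
-recall, update_op = tf.contrib.metrics.streaming_recall_at_thresholds(
-    predictions, labels, thresholds=[0.3, 0.5, 0.7])
-
-sess.run(tf.local_variables_initializer())
-for _ in range(num_batches):
-  sess.run(update_op)       # updates the four confusion-matrix variables
-print(sess.run(recall))     # one recall value per threshold
-```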
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`thresholds`</b>: A python list or tuple of float thresholds in `[0, 1]`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `recall` should be
- added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`recall`</b>: A float `Tensor` of shape `[len(thresholds)]`.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables that
- are used in the computation of `recall`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.rnn.EmbeddingWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.rnn.EmbeddingWrapper.md
deleted file mode 100644
index 4e1b78cd8b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.rnn.EmbeddingWrapper.md
+++ /dev/null
@@ -1,71 +0,0 @@
-Operator adding input embedding to the given cell.
-
-Note: in many cases it may be more efficient not to use this wrapper, but
-instead to concatenate the whole sequence of your inputs in time, do the
-embedding on this batch-concatenated sequence, then split it and feed it
-into your RNN, as sketched below.
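-
-A sketch of that often-faster alternative (`vocab_size`, `embedding_size`,
-`input_ids`, and `cell` are illustrative names, not part of this API):
-
-```python
-# Embed the whole [batch, time] matrix of ids at once, then run the RNN
-# on the embedded sequence instead of wrapping the cell.
-embedding = tf.get_variable('embedding', [vocab_size, embedding_size])
-embedded = tf.nn.embedding_lookup(embedding, input_ids)  # [batch, time, embedding_size]
-outputs, state = tf.nn.dynamic_rnn(cell, embedded, dtype=tf.float32)
-```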
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.__call__(inputs, state, scope=None)` {#EmbeddingWrapper.__call__}
-
-Run the cell on embedded inputs.
-
-
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.__init__(cell, embedding_classes, embedding_size, initializer=None)` {#EmbeddingWrapper.__init__}
-
-Create a cell with an added input embedding.
-
-##### Args:
-
-
-* <b>`cell`</b>: an RNNCell, an embedding will be put before its inputs.
-* <b>`embedding_classes`</b>: integer, how many symbols will be embedded.
-* <b>`embedding_size`</b>: integer, the size of the vectors we embed into.
-* <b>`initializer`</b>: an initializer to use when creating the embedding;
- if None, the initializer from variable scope or a default one is used.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if cell is not an RNNCell.
-* <b>`ValueError`</b>: if embedding_classes is not positive.
-
-
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.output_size` {#EmbeddingWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.state_size` {#EmbeddingWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.EmbeddingWrapper.zero_state(batch_size, dtype)` {#EmbeddingWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.rnn.RNNCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.rnn.RNNCell.md
deleted file mode 100644
index fa2d4f17d0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.contrib.rnn.RNNCell.md
+++ /dev/null
@@ -1,85 +0,0 @@
-Abstract object representing an RNN cell.
-
-The definition of cell in this package differs from the definition used in the
-literature. In the literature, cell refers to an object with a single scalar
-output. The definition in this package refers to a horizontal array of such
-units.
-
-An RNN cell, in the most abstract setting, is anything that has
-a state and performs some operation that takes a matrix of inputs.
-This operation results in an output matrix with `self.output_size` columns.
-If `self.state_size` is an integer, this operation also results in a new
-state matrix with `self.state_size` columns. If `self.state_size` is a
-tuple of integers, then it results in a tuple of `len(state_size)` state
-matrices, each with a column size corresponding to values in `state_size`.
-
-This module provides a number of basic commonly used RNN cells, such as
-LSTM (Long Short Term Memory) or GRU (Gated Recurrent Unit), and a number
-of operators that allow adding dropouts, projections, or embeddings to inputs.
-Constructing multi-layer cells is supported by the class `MultiRNNCell`,
-or by calling the `rnn` ops several times. Every `RNNCell` must have the
-properties below and implement `__call__` with the following signature.
-- - -
-
-#### `tf.contrib.rnn.RNNCell.__call__(inputs, state, scope=None)` {#RNNCell.__call__}
-
-Run this RNN cell on inputs, starting from the given state.
-
-##### Args:
-
-
-* <b>`inputs`</b>: `2-D` tensor with shape `[batch_size x input_size]`.
-* <b>`state`</b>: if `self.state_size` is an integer, this should be a `2-D Tensor`
- with shape `[batch_size x self.state_size]`. Otherwise, if
- `self.state_size` is a tuple of integers, this should be a tuple
-    with shapes `[batch_size x s]` for `s` in `self.state_size`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to class name.
-
-##### Returns:
-
- A pair containing:
-
- - Output: A `2-D` tensor with shape `[batch_size x self.output_size]`.
- - New state: Either a single `2-D` tensor, or a tuple of tensors matching
- the arity and shapes of `state`.
-
-
-- - -
-
-#### `tf.contrib.rnn.RNNCell.output_size` {#RNNCell.output_size}
-
-Integer or TensorShape: size of outputs produced by this cell.
-
-
-- - -
-
-#### `tf.contrib.rnn.RNNCell.state_size` {#RNNCell.state_size}
-
-size(s) of state(s) used by this cell.
-
-It can be represented by an integer, a `TensorShape`, or a tuple of
-integers or `TensorShape`s.
-
-
-- - -
-
-#### `tf.contrib.rnn.RNNCell.zero_state(batch_size, dtype)` {#RNNCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each `s` in `state_size`.
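-
-For instance, a minimal sketch (assuming a `tf.contrib.rnn.BasicRNNCell`
-with an integer `state_size`; the batch size of 32 is illustrative):
-
-```python
-cell = tf.contrib.rnn.BasicRNNCell(num_units=10)
-initial_state = cell.zero_state(batch_size=32, dtype=tf.float32)
-# initial_state is a [32, 10] float32 tensor of zeros.
-```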
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.cos.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.cos.md
deleted file mode 100644
index faf84ea9d3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.cos.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.cos(x, name=None)` {#cos}
-
-Computes cos of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.count_nonzero.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.count_nonzero.md
deleted file mode 100644
index e464a8f8d3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.count_nonzero.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.count_nonzero(input_tensor, axis=None, keep_dims=False, dtype=tf.int64, name=None, reduction_indices=None)` {#count_nonzero}
-
-Computes the number of nonzero elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-**NOTE** Floating point comparison to zero is done by exact floating point
-equality check. Small values are **not** rounded to zero for purposes of
-the nonzero check.
-
-For example:
-
-```python
-# 'x' is [[0, 1, 0]
-# [1, 1, 0]]
-tf.count_nonzero(x) ==> 3
-tf.count_nonzero(x, 0) ==> [1, 2, 0]
-tf.count_nonzero(x, 1) ==> [1, 2]
-tf.count_nonzero(x, 1, keep_dims=True) ==> [[1], [2]]
-tf.count_nonzero(x, [0, 1]) ==> 3
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should be of numeric type, or `bool`.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`dtype`</b>: The output dtype; defaults to `tf.int64`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor (number of nonzero values).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.diag_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.diag_part.md
deleted file mode 100644
index 845a45669b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.diag_part.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.diag_part(input, name=None)` {#diag_part}
-
-Returns the diagonal part of the tensor.
-
-This operation returns a tensor with the `diagonal` part
-of the `input`. The `diagonal` part is computed as follows:
-
-Assuming `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, the output is a
-tensor of rank `k` with dimensions `[D1,..., Dk]` where:
-
-`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.
-
-For example:
-
-```prettyprint
-# 'input' is [[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]]
-
-tf.diag_part(input) ==> [1, 2, 3, 4]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
- Rank k tensor where k is 2, 4, or 6.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. The extracted diagonal.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.div.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.div.md
deleted file mode 100644
index 8c25e24373..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.div.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.div(x, y, name=None)` {#div}
-
-Divides x / y elementwise (using Python 2 division operator semantics).
-
-NOTE: Prefer using the Tensor division operator or `tf.divide`, which obey
-Python 3 division operator semantics.
-
-This function divides `x` and `y`, forcing Python 2.7 semantics. That is,
-if one of `x` or `y` is a float, then the result will be a float.
-Otherwise, the output will be an integer type. Flooring semantics are used
-for integer division.
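-
-For example (illustrative values; `==>` shows the evaluated result):
-
-```python
-tf.div(7, 2)       # ==> 3    (both integers: integer division)
-tf.div(7.0, 2.0)   # ==> 3.5  (a float operand: float division)
-tf.divide(7, 2)    # ==> 3.5  (Python 3 semantics: always true division)
-```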
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.PermissionDeniedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.PermissionDeniedError.md
deleted file mode 100644
index a8a81494c8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.PermissionDeniedError.md
+++ /dev/null
@@ -1,14 +0,0 @@
-Raised when the caller does not have permission to run an operation.
-
-For example, running the
-[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader)
-operation could raise `PermissionDeniedError` if it receives the name of a
-file for which the user does not have the read file permission.
-
-- - -
-
-#### `tf.errors.PermissionDeniedError.__init__(node_def, op, message)` {#PermissionDeniedError.__init__}
-
-Creates a `PermissionDeniedError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.UnavailableError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.UnavailableError.md
deleted file mode 100644
index e212ae94ec..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.UnavailableError.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Raised when the runtime is currently unavailable.
-
-This exception is not currently used.
-
-- - -
-
-#### `tf.errors.UnavailableError.__init__(node_def, op, message)` {#UnavailableError.__init__}
-
-Creates an `UnavailableError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.raise_exception_on_not_ok_status.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.raise_exception_on_not_ok_status.md
deleted file mode 100644
index a8d96ff97b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.errors.raise_exception_on_not_ok_status.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.errors.raise_exception_on_not_ok_status()` {#raise_exception_on_not_ok_status}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.floordiv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.floordiv.md
deleted file mode 100644
index cf389be85b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.floordiv.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.floordiv(x, y, name=None)` {#floordiv}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
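-
-For example (illustrative values):
-
-```python
-tf.floordiv(7, 2)      # ==> 3
-tf.floordiv(7.0, 2.0)  # ==> 3.0  (an integer value, stored as a float)
-```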
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.decode_image.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.decode_image.md
deleted file mode 100644
index 46395cad62..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.decode_image.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.image.decode_image(contents, channels=None, name=None)` {#decode_image}
-
-Convenience function for `decode_gif`, `decode_jpeg`, and `decode_png`.
-Detects whether an image is a GIF, JPEG, or PNG, and performs the appropriate
-operation to convert the input bytes `string` into a `Tensor` of type `uint8`.
-
-Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as
-opposed to `decode_jpeg` and `decode_png`, which return 3-D arrays
-`[height, width, num_channels]`. Make sure to take this into account when
-constructing your graph if you are intermixing GIF files with JPEG and/or PNG
-files.
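-
-A minimal sketch ('photo.jpg' is a hypothetical path):
-
-```python
-contents = tf.read_file('photo.jpg')
-image = tf.image.decode_image(contents, channels=3)
-# `image` is a uint8 Tensor: [height, width, 3] for JPEG/PNG inputs,
-# [num_frames, height, width, 3] for GIF inputs.
-```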
-
-##### Args:
-
-
-* <b>`contents`</b>: 0-D `string`. The encoded image bytes.
-* <b>`channels`</b>: An optional `int`. Defaults to `0`. Number of color channels for
- the decoded image.
-* <b>`name`</b>: A name for the operation (optional)
-
-##### Returns:
-
- `Tensor` with type `uint8` with shape `[height, width, num_channels]` for
- JPEG and PNG images and shape `[num_frames, height, width, 3]` for GIF
- images.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.flip_left_right.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.flip_left_right.md
deleted file mode 100644
index ac8c99806e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.flip_left_right.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.image.flip_left_right(image)` {#flip_left_right}
-
-Flip an image horizontally (left to right).
-
-Outputs the contents of `image` flipped along the second dimension, which is
-`width`.
-
-See also `reverse()`.
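-
-For example, with an illustrative 1x2 single-channel image:
-
-```python
-image = tf.constant([[[1.0], [2.0]]])      # shape [1, 2, 1]
-flipped = tf.image.flip_left_right(image)  # ==> [[[2.0], [1.0]]]
-```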
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-
-##### Returns:
-
- A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.flip_up_down.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.flip_up_down.md
deleted file mode 100644
index ed92277f8a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.flip_up_down.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.image.flip_up_down(image)` {#flip_up_down}
-
-Flip an image vertically (upside down).
-
-Outputs the contents of `image` flipped along the first dimension, which is
-`height`.
-
-See also `reverse()`.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-
-##### Returns:
-
- A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.pad_to_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.pad_to_bounding_box.md
deleted file mode 100644
index c731fb2d2a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.pad_to_bounding_box.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.image.pad_to_bounding_box(image, offset_height, offset_width, target_height, target_width)` {#pad_to_bounding_box}
-
-Pad `image` with zeros to the specified `height` and `width`.
-
-Adds `offset_height` rows of zeros on top, `offset_width` columns of
-zeros on the left, and then pads the image on the bottom and right
-with zeros until it has dimensions `target_height`, `target_width`.
-
-This op does nothing if `offset_*` is zero and the image already has size
-`target_height` by `target_width`.
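-
-For example, with illustrative sizes:
-
-```python
-image = tf.ones([2, 2, 1])                                # 2x2, one channel
-padded = tf.image.pad_to_bounding_box(image, 1, 1, 4, 4)  # shape [4, 4, 1]
-# The ones now occupy rows 1-2 and columns 1-2; the border is zeros.
-```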
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor with shape `[height, width, channels]`
-* <b>`offset_height`</b>: Number of rows of zeros to add on top.
-* <b>`offset_width`</b>: Number of columns of zeros to add on the left.
-* <b>`target_height`</b>: Height of output image.
-* <b>`target_width`</b>: Width of output image.
-
-##### Returns:
-
- 3-D tensor of shape `[target_height, target_width, channels]`
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `image` is incompatible with the `offset_*` or
- `target_*` arguments, or either `offset_height` or `offset_width` is
- negative.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_image_with_crop_or_pad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_image_with_crop_or_pad.md
deleted file mode 100644
index 24104b647c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_image_with_crop_or_pad.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.image.resize_image_with_crop_or_pad(image, target_height, target_width)` {#resize_image_with_crop_or_pad}
-
-Crops and/or pads an image to a target width and height.
-
-Resizes an image to a target width and height by either centrally
-cropping the image or padding it evenly with zeros.
-
-If `width` or `height` is greater than the specified `target_width` or
-`target_height` respectively, this op centrally crops along that dimension.
-If `width` or `height` is smaller than the specified `target_width` or
-`target_height` respectively, this op centrally pads with 0 along that
-dimension.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor of shape `[height, width, channels]`
-* <b>`target_height`</b>: Target height.
-* <b>`target_width`</b>: Target width.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `target_height` or `target_width` are zero or negative.
-
-##### Returns:
-
- Cropped and/or padded image of shape
- `[target_height, target_width, channels]`
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_nearest_neighbor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_nearest_neighbor.md
deleted file mode 100644
index ba72e73ebd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.resize_nearest_neighbor.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.image.resize_nearest_neighbor(images, size, align_corners=None, name=None)` {#resize_nearest_neighbor}
-
-Resize `images` to `size` using nearest neighbor interpolation.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
-    If true, rescale input by `(new_height - 1) / (height - 1)`, which exactly
-    aligns the 4 corners of the input and resized images. If false, rescale by
-    `new_height / height`. The width dimension is treated similarly.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`. 4-D with shape
- `[batch, new_height, new_width, channels]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.rot90.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.rot90.md
deleted file mode 100644
index 3923d715ab..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.rot90.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.image.rot90(image, k=1, name=None)` {#rot90}
-
-Rotate an image counter-clockwise by 90 degrees.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-* <b>`k`</b>: A scalar integer. The number of times the image is rotated by 90 degrees.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A rotated 3-D tensor of the same type and shape as `image`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.sample_distorted_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.sample_distorted_bounding_box.md
deleted file mode 100644
index aeef14c3b6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.sample_distorted_bounding_box.md
+++ /dev/null
@@ -1,89 +0,0 @@
-### `tf.image.sample_distorted_bounding_box(image_size, bounding_boxes, seed=None, seed2=None, min_object_covered=None, aspect_ratio_range=None, area_range=None, max_attempts=None, use_image_if_no_bounding_boxes=None, name=None)` {#sample_distorted_bounding_box}
-
-Generate a single randomly distorted bounding box for an image.
-
-Bounding box annotations are often supplied in addition to ground-truth labels
-in image recognition or object localization tasks. A common technique for
-training such a system is to randomly distort an image while preserving
-its content, i.e. *data augmentation*. This Op outputs a randomly distorted
-localization of an object, i.e. bounding box, given an `image_size`,
-`bounding_boxes` and a series of constraints.
-
-The output of this Op is a single bounding box that may be used to crop the
-original image. The output is returned as 3 tensors: `begin`, `size` and
-`bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the
-image. The latter may be supplied to `tf.image.draw_bounding_boxes` to visualize
-what the bounding box looks like.
-
-Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The
-bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and
-height of the underlying image.
-
-For example,
-
-```python
- # Generate a single distorted bounding box.
- begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
- tf.shape(image),
- bounding_boxes=bounding_boxes)
-
- # Draw the bounding box in an image summary.
- image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
- bbox_for_draw)
-  tf.summary.image('images_with_box', image_with_box)
-
- # Employ the bounding box to distort the image.
- distorted_image = tf.slice(image, begin, size)
-```
-
-Note that if no bounding box information is available, setting
-`use_image_if_no_bounding_boxes = True` will assume there is a single implicit
-bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is
-`False` and no bounding boxes are supplied, an error is raised.
-
-##### Args:
-
-
-* <b>`image_size`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`.
- 1-D, containing `[height, width, channels]`.
-* <b>`bounding_boxes`</b>: A `Tensor` of type `float32`.
- 3-D with shape `[batch, N, 4]` describing the N bounding boxes
- associated with the image.
-* <b>`seed`</b>: An optional `int`. Defaults to `0`.
- If either `seed` or `seed2` are set to non-zero, the random number
- generator is seeded by the given `seed`. Otherwise, it is seeded by a random
- seed.
-* <b>`seed2`</b>: An optional `int`. Defaults to `0`.
- A second seed to avoid seed collision.
-* <b>`min_object_covered`</b>: An optional `float`. Defaults to `0.1`.
- The cropped area of the image must contain at least this
- fraction of any bounding box supplied. The value of this parameter should be
- non-negative. In the case of 0, the cropped area does not need to overlap
- any of the bounding boxes supplied.
-* <b>`aspect_ratio_range`</b>: An optional list of `floats`. Defaults to `[0.75, 1.33]`.
- The cropped area of the image must have an aspect ratio =
- width / height within this range.
-* <b>`area_range`</b>: An optional list of `floats`. Defaults to `[0.05, 1]`.
- The cropped area of the image must contain a fraction of the
-    supplied image within this range.
-* <b>`max_attempts`</b>: An optional `int`. Defaults to `100`.
- Number of attempts at generating a cropped region of the image
- of the specified constraints. After `max_attempts` failures, return the entire
- image.
-* <b>`use_image_if_no_bounding_boxes`</b>: An optional `bool`. Defaults to `False`.
- Controls behavior if no bounding boxes supplied.
- If true, assume an implicit bounding box covering the whole input. If false,
- raise an error.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (begin, size, bboxes).
-
-* <b>`begin`</b>: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to
- `tf.slice`.
-* <b>`size`</b>: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[target_height, target_width, -1]`. Provide as input to
- `tf.slice`.
-* <b>`bboxes`</b>: A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing the distorted bounding box.
- Provide as input to `tf.image.draw_bounding_boxes`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.total_variation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.total_variation.md
deleted file mode 100644
index 03fec86c85..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.image.total_variation.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.image.total_variation(images, name=None)` {#total_variation}
-
-Calculate and return the total variation for one or more images.
-
-The total variation is the sum of the absolute differences for neighboring
-pixel-values in the input images. This measures how much noise is in the
-images.
-
-This can be used as a loss-function during optimization so as to suppress
-noise in images. If you have a batch of images, then you should calculate
-the scalar loss-value as the sum:
-`loss = tf.reduce_sum(tf.image.total_variation(images))`
-
-This implements the anisotropic 2-D version of the formula described here:
-
-https://en.wikipedia.org/wiki/Total_variation_denoising
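-
-For a single 3-D image, the anisotropic formula reduces to the following
-sketch (equivalent in spirit to what this op computes; not necessarily the
-implementation itself):
-
-```python
-pixel_dif1 = images[1:, :, :] - images[:-1, :, :]  # vertical neighbor diffs
-pixel_dif2 = images[:, 1:, :] - images[:, :-1, :]  # horizontal neighbor diffs
-tot_var = (tf.reduce_sum(tf.abs(pixel_dif1)) +
-           tf.reduce_sum(tf.abs(pixel_dif2)))
-```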
-
-##### Args:
-
-
-* <b>`images`</b>: 4-D Tensor of shape `[batch, height, width, channels]` or
- 3-D Tensor of shape `[height, width, channels]`.
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `images` is not 3-D or 4-D.
-
-##### Returns:
-
- The total variation of `images`.
-
- If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the
- total variation for each image in the batch.
- If `images` was 3-D, return a scalar float with the total variation for
- that image.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.load_file_system_library.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.load_file_system_library.md
deleted file mode 100644
index 60d768a624..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.load_file_system_library.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.load_file_system_library(library_filename)` {#load_file_system_library}
-
-Loads a TensorFlow plugin containing a file system implementation.
-
-Pass `library_filename` to a platform-specific mechanism for dynamically
-loading a library. The rules for determining the exact location of the
-library are platform-specific and are not documented here.
-
-##### Args:
-
-
-* <b>`library_filename`</b>: Path to the plugin.
- Relative or absolute filesystem path to a dynamic library file.
-
-##### Returns:
-
- None.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: when unable to load the library.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_and.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_and.md
deleted file mode 100644
index 2b5f011ccd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_and.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.logical_and(x, y, name=None)` {#logical_and}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_not.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_not.md
deleted file mode 100644
index 40a0bb2e43..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.logical_not.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.logical_not(x, name=None)` {#logical_not}
-
-Returns the truth value of NOT x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.make_template.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.make_template.md
deleted file mode 100644
index 99814cacc5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.make_template.md
+++ /dev/null
@@ -1,111 +0,0 @@
-### `tf.make_template(name_, func_, create_scope_now_=False, unique_name_=None, custom_getter_=None, **kwargs)` {#make_template}
-
-Given an arbitrary function, wrap it so that it does variable sharing.
-
-This wraps `func_` in a Template and partially evaluates it. Templates are
-functions that create variables the first time they are called and reuse them
-thereafter. In order for `func_` to be compatible with a `Template` it must
-have the following properties:
-
-* The function should create all trainable variables and any variables that
- should be reused by calling `tf.get_variable`. If a trainable variable is
- created using `tf.Variable`, then a ValueError will be thrown. Variables
- that are intended to be locals can be created by specifying
-  `tf.Variable(..., trainable=False)`.
-* The function may use variable scopes and other templates internally to
- create and reuse variables, but it shouldn't use `tf.global_variables` to
- capture variables that are defined outside of the scope of the function.
-* Internal scopes and variable names should not depend on any arguments that
-  are not supplied to `make_template`. In general, if you make a mistake you
-  will get a `ValueError` telling you that you are trying to reuse a variable
-  that doesn't exist.
-
-In the following example, both `z` and `w` will be scaled by the same `y`. It
-is important to note that if we had not assigned `scalar_name` and had used
-different names for `z` and `w`, a `ValueError` would be thrown because the
-variable could not be reused.
-
-```python
-def my_op(x, scalar_name):
- var1 = tf.get_variable(scalar_name,
- shape=[],
- initializer=tf.constant_initializer(1))
- return x * var1
-
-scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
-
-z = scale_by_y(input1)
-w = scale_by_y(input2)
-```
-
-As a safeguard, the returned function will raise a `ValueError` after the
-first call if trainable variables are created by calling `tf.Variable`.
-
-If all of these are true, then 2 properties are enforced by the template:
-
-1. Calling the same template multiple times will share all non-local
- variables.
-2. Two different templates are guaranteed to be unique, unless you reenter the
- same variable scope as the initial definition of a template and redefine
-  it. An example of this exception:
-
-```python
-def my_op(x, scalar_name):
- var1 = tf.get_variable(scalar_name,
- shape=[],
- initializer=tf.constant_initializer(1))
- return x * var1
-
-with tf.variable_scope('scope') as vs:
- scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
- z = scale_by_y(input1)
- w = scale_by_y(input2)
-
-# Creates a template that reuses the variables above.
-with tf.variable_scope(vs, reuse=True):
- scale_by_y2 = tf.make_template('scale_by_y', my_op, scalar_name='y')
- z2 = scale_by_y2(input1)
- w2 = scale_by_y2(input2)
-```
-
-Depending on the value of `create_scope_now_`, the full variable scope may be
-captured either at the time of first call or at the time of construction. If
-this option is set to `True`, then all Tensors created by repeated calls to
-the template will have an extra trailing `_N+1` appended to their names, as
-the first time the scope is entered in the Template constructor no Tensors
-are created.
-
-Note: `name_`, `func_` and `create_scope_now_` have a trailing underscore to
-reduce the likelihood of collisions with kwargs.
-
-##### Args:
-
-
-* <b>`name_`</b>: A name for the scope created by this template. If necessary, the name
- will be made unique by appending `_N` to the name.
-* <b>`func_`</b>: The function to wrap.
-* <b>`create_scope_now_`</b>: Boolean controlling whether the scope should be created
- when the template is constructed or when the template is called. Default
- is False, meaning the scope is created when the template is called.
-* <b>`unique_name_`</b>: When used, it overrides name_ and is not made unique. If a
- template of the same scope/unique_name already exists and reuse is false,
- an error is raised. Defaults to None.
-* <b>`custom_getter_`</b>: Optional custom getter for variables used in `func_`. See
- the [`get_variable`](#get_variable) `custom_getter` documentation for
- more information.
-* <b>`**kwargs`</b>: Keyword arguments to apply to `func_`.
-
-##### Returns:
-
- A function to encapsulate a set of variables which should be created once
-  and reused. An enclosing scope will be created, either where `make_template`
- is called, or wherever the result is called, depending on the value of
- `create_scope_now_`. Regardless of the value, the first time the template
- is called it will enter the scope with no reuse, and call `func_` to create
- variables, which are guaranteed to be unique. All subsequent calls will
- re-enter the scope and reuse those variables.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the name is None.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.model_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.model_variables.md
deleted file mode 100644
index f0bba3c637..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.model_variables.md
+++ /dev/null
@@ -1,8 +0,0 @@
-### `tf.model_variables()` {#model_variables}
-
-Returns all variables in the MODEL_VARIABLES collection.
-
-##### Returns:
-
- A list of local Variable objects.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.atrous_conv2d_transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.atrous_conv2d_transpose.md
deleted file mode 100644
index a4caa46258..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.atrous_conv2d_transpose.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.nn.atrous_conv2d_transpose(value, filters, output_shape, rate, padding, name=None)` {#atrous_conv2d_transpose}
-
-The transpose of `atrous_conv2d`.
-
-This operation is sometimes called "deconvolution" after [Deconvolutional
-Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is
-actually the transpose (gradient) of `atrous_conv2d` rather than an actual
-deconvolution.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC`
- format. Its shape is `[batch, in_height, in_width, in_channels]`.
-* <b>`filters`</b>: A 4-D `Tensor` with the same type as `value` and shape
- `[filter_height, filter_width, out_channels, in_channels]`. `filters`'
- `in_channels` dimension must match that of `value`. Atrous convolution is
- equivalent to standard convolution with upsampled filters with effective
- height `filter_height + (filter_height - 1) * (rate - 1)` and effective
- width `filter_width + (filter_width - 1) * (rate - 1)`, produced by
- inserting `rate - 1` zeros along consecutive elements across the
- `filters`' spatial dimensions.
-* <b>`output_shape`</b>: A 1-D `Tensor` of shape representing the output shape of the
- deconvolution op.
-* <b>`rate`</b>: A positive int32. The stride with which we sample input values across
- the `height` and `width` dimensions. Equivalently, the rate by which we
- upsample the filter values by inserting zeros across the `height` and
- `width` dimensions. In the literature, the same parameter is sometimes
- called `input stride` or `dilation`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filters`' shape, or if
- padding is other than `'VALID'` or `'SAME'`, or if the `rate` is less
- than one, or if the output_shape is not a tensor with 4 elements.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv3d.md
deleted file mode 100644
index cbac47eb58..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.conv3d.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.nn.conv3d(input, filter, strides, padding, name=None)` {#conv3d}
-
-Computes a 3-D convolution given 5-D `input` and `filter` tensors.
-
-In signal processing, cross-correlation is a measure of similarity of
-two waveforms as a function of a time-lag applied to one of them. This
-is also known as a sliding dot product or sliding inner-product.
-
-Our Conv3D implements a form of cross-correlation.
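-
-A shape-only sketch with illustrative sizes:
-
-```python
-volume = tf.random_normal([1, 8, 8, 8, 1])    # one 8x8x8 volume, 1 channel
-filters = tf.random_normal([3, 3, 3, 1, 16])  # 16 filters of size 3x3x3
-out = tf.nn.conv3d(volume, filters, strides=[1, 1, 1, 1, 1], padding='SAME')
-# `out` has shape [1, 8, 8, 8, 16].
-```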
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Shape `[batch, in_depth, in_height, in_width, in_channels]`.
-* <b>`filter`</b>: A `Tensor`. Must have the same type as `input`.
- Shape `[filter_depth, filter_height, filter_width, in_channels,
- out_channels]`. `in_channels` must match between `input` and `filter`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The stride of the sliding window for each
- dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.sigmoid_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.sigmoid_cross_entropy_with_logits.md
deleted file mode 100644
index 55e1b178ea..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.sigmoid_cross_entropy_with_logits.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.nn.sigmoid_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, name=None)` {#sigmoid_cross_entropy_with_logits}
-
-Computes sigmoid cross entropy given `logits`.
-
-Measures the probability error in discrete classification tasks in which each
-class is independent and not mutually exclusive. For instance, one could
-perform multilabel classification where a picture can contain both an elephant
-and a dog at the same time.
-
-For brevity, let `x = logits`, `z = labels`. The logistic loss is
-
- z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
- = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
- = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
-    = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
- = (1 - z) * x + log(1 + exp(-x))
- = x - x * z + log(1 + exp(-x))
-
-For x < 0, to avoid overflow in exp(-x), we reformulate the above
-
- x - x * z + log(1 + exp(-x))
- = log(exp(x)) - x * z + log(1 + exp(-x))
- = - x * z + log(1 + exp(x))
-
-Hence, to ensure stability and avoid overflow, the implementation uses this
-equivalent formulation
-
- max(x, 0) - x * z + log(1 + exp(-abs(x)))
-
-`logits` and `labels` must have the same type and shape.
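-
-The stable formulation above can be checked directly (illustrative values):
-
-```python
-x = tf.constant([-100.0, 0.0, 100.0])  # logits
-z = tf.constant([0.0, 0.5, 1.0])       # labels
-loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=z, logits=x)
-# Matches: tf.maximum(x, 0.0) - x * z + tf.log(1.0 + tf.exp(-tf.abs(x)))
-```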
-
-##### Args:
-
- _sentinel: Used to prevent positional parameters. Internal, do not use.
-
-* <b>`labels`</b>: A `Tensor` of the same type and shape as `logits`.
-* <b>`logits`</b>: A `Tensor` of type `float32` or `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same shape as `logits` with the componentwise
- logistic losses.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `logits` and `labels` do not have the same shape.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.top_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.top_k.md
deleted file mode 100644
index 819c0ad068..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.top_k.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.nn.top_k(input, k=1, sorted=True, name=None)` {#top_k}
-
-Finds values and indices of the `k` largest entries for the last dimension.
-
-If the input is a vector (rank-1), finds the `k` largest entries in the vector
-and outputs their values and indices as vectors. Thus `values[j]` is the
-`j`-th largest entry in `input`, and its index is `indices[j]`.
-
-For matrices (resp. higher rank input), computes the top `k` entries in each
-row (resp. vector along the last dimension). Thus,
-
- values.shape = indices.shape = input.shape[:-1] + [k]
-
-If two elements are equal, the lower-index element appears first.
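-
-For example:
-
-```python
-values, indices = tf.nn.top_k([1.0, 3.0, 2.0], k=2)
-# values  ==> [3.0, 2.0]
-# indices ==> [1, 2]
-```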
-
-##### Args:
-
-
-* <b>`input`</b>: 1-D or higher `Tensor` with last dimension at least `k`.
-* <b>`k`</b>: 0-D `int32` `Tensor`. Number of top elements to look for along the last
- dimension (along each row for matrices).
-* <b>`sorted`</b>: If true the resulting `k` elements will be sorted by the values in
- descending order.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
-
-* <b>`values`</b>: The `k` largest elements along each last dimensional slice.
-* <b>`indices`</b>: The indices of `values` within the last dimension of `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.no_op.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.no_op.md
deleted file mode 100644
index c1b5c0824b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.no_op.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.no_op(name=None)` {#no_op}
-
-Does nothing. Only useful as a placeholder for control edges.
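-
-A common pattern (a sketch; `update_a` and `update_b` are illustrative ops):
-
-```python
-with tf.control_dependencies([update_a, update_b]):
-  train_step = tf.no_op(name='train_step')
-# Running `train_step` runs both updates, yet produces no output itself.
-```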
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.range.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.range.md
deleted file mode 100644
index 90f8e7aa50..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.range.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.range(start, limit=None, delta=1, dtype=None, name='range')` {#range}
-
-Creates a sequence of numbers.
-
-Creates a sequence of numbers that begins at `start` and extends by
-increments of `delta` up to but not including `limit`.
-
-The dtype of the resulting tensor is inferred from the inputs unless
-it is provided explicitly.
-
-Like the Python builtin `range`, `start` defaults to 0, so that
-`range(n) = range(0, n)`.
-
-For example:
-
-```python
-# 'start' is 3
-# 'limit' is 18
-# 'delta' is 3
-tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
-
-# 'start' is 3
-# 'limit' is 1
-# 'delta' is -0.5
-tf.range(start, limit, delta) ==> [3, 2.5, 2, 1.5]
-
-# 'limit' is 5
-tf.range(limit) ==> [0, 1, 2, 3, 4]
-```
-
-##### Args:
-
-
-* <b>`start`</b>: A 0-D `Tensor` (scalar). Acts as first entry in the range if
- `limit` is not None; otherwise, acts as range limit and first entry
- defaults to 0.
-* <b>`limit`</b>: A 0-D `Tensor` (scalar). Upper limit of sequence,
- exclusive. If None, defaults to the value of `start` while the first
- entry of the range defaults to 0.
-* <b>`delta`</b>: A 0-D `Tensor` (scalar). Number that increments
- `start`. Defaults to 1.
-* <b>`dtype`</b>: The type of the elements of the resulting tensor.
-* <b>`name`</b>: A name for the operation. Defaults to "range".
-
-##### Returns:
-
-  A 1-D `Tensor` of type `dtype`.
-
-@compatibility(numpy)
-Equivalent to np.arange
-@end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reverse_v2.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reverse_v2.md
deleted file mode 100644
index 073f0bda7b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.reverse_v2.md
+++ /dev/null
@@ -1,64 +0,0 @@
-### `tf.reverse_v2(tensor, axis, name=None)` {#reverse_v2}
-
-Reverses specific dimensions of a tensor.
-
-NOTE: `tf.reverse` has now changed behavior in preparation for 1.0.
-`tf.reverse_v2` is currently an alias that will be deprecated before TF 1.0.
-
-Given a `tensor` and an `int32` tensor `axis` representing the set of
-dimensions of `tensor` to reverse, this operation reverses each dimension
-`i` for which there exists `j` such that `axis[j] == i`.
-
-`tensor` can have up to 8 dimensions. `axis` may contain 0 or more
-entries. If an index is specified more than once, an `InvalidArgument`
-error is raised.
-
-For example:
-
-```prettyprint
-# tensor 't' is [[[[ 0, 1, 2, 3],
-# [ 4, 5, 6, 7],
-# [ 8, 9, 10, 11]],
-# [[12, 13, 14, 15],
-# [16, 17, 18, 19],
-# [20, 21, 22, 23]]]]
-# tensor 't' shape is [1, 2, 3, 4]
-
-# 'dims' is [3] or 'dims' is -1
-reverse(t, dims) ==> [[[[ 3, 2, 1, 0],
- [ 7, 6, 5, 4],
- [ 11, 10, 9, 8]],
- [[15, 14, 13, 12],
- [19, 18, 17, 16],
- [23, 22, 21, 20]]]]
-
-# 'dims' is '[1]' (or 'dims' is '[-3]')
-reverse(t, dims) ==> [[[[12, 13, 14, 15],
- [16, 17, 18, 19],
- [20, 21, 22, 23]
- [[ 0, 1, 2, 3],
- [ 4, 5, 6, 7],
- [ 8, 9, 10, 11]]]]
-
-# 'dims' is '[2]' (or 'dims' is '[-2]')
-reverse(t, dims) ==> [[[[8, 9, 10, 11],
- [4, 5, 6, 7],
- [0, 1, 2, 3]]
- [[20, 21, 22, 23],
- [16, 17, 18, 19],
- [12, 13, 14, 15]]]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `int64`, `bool`, `half`, `float32`, `float64`, `complex64`, `complex128`.
- Up to 8-D.
-* <b>`axis`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D. The indices of the dimensions to reverse.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.scatter_nd.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.scatter_nd.md
deleted file mode 100644
index c4d448d9d8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.scatter_nd.md
+++ /dev/null
@@ -1,94 +0,0 @@
-### `tf.scatter_nd(indices, updates, shape, name=None)` {#scatter_nd}
-
-Creates a new tensor by applying sparse `updates` to individual values or
-slices within a zero tensor of the given `shape`, according to `indices`.
-This operator is the inverse of the [tf.gather_nd](#gather_nd) operator,
-which extracts values or slices from a given tensor.
-
-TODO(simister): Add a link to Variable.__getitem__ documentation on slice
-syntax.
-
-`shape` is a `TensorShape` with rank `P` and `indices` is a `Tensor` of rank
-`Q`.
-
-`indices` must be an integer tensor, containing indices into `shape`.
-It must be shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
-dimension of `shape`.
-
-`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
-
-```
-[d_0, ..., d_{Q-2}, shape[K], ..., shape[P-1]].
-```
-
-The simplest form of scatter is to insert individual elements in a tensor by
-index. For example, say we want to insert 4 scattered elements in a rank-1
-tensor with 8 elements.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterNd1.png" alt>
-</div>
-
-In Python, this scatter operation would look like this:
-
- indices = tf.constant([[4], [3], [1], [7]])
- updates = tf.constant([9, 10, 11, 12])
- shape = tf.constant([8])
- scatter = tf.scatter_nd(indices, updates, shape)
- with tf.Session() as sess:
-      print(sess.run(scatter))
-
-The resulting tensor would look like this:
-
- [0, 11, 0, 10, 9, 0, 0, 12]
-
-We can also insert entire slices of a higher rank tensor all at once. For
-example, we can insert two slices into the first dimension of a rank-3
-tensor using two matrices of new values.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterNd2.png" alt>
-</div>
-
-In Python, this scatter operation would look like this:
-
- indices = tf.constant([[0], [2]])
- updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
- [7, 7, 7, 7], [8, 8, 8, 8]],
- [[5, 5, 5, 5], [6, 6, 6, 6],
- [7, 7, 7, 7], [8, 8, 8, 8]]])
- shape = tf.constant([4, 4, 4])
- scatter = tf.scatter_nd(indices, updates, shape)
- with tf.Session() as sess:
-      print(sess.run(scatter))
-
-The resulting tensor would look like this:
-
- [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
- [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
- [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
- [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]
-
-##### Args:
-
-
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-    A tensor of indices into the output tensor.
-* <b>`updates`</b>: A `Tensor`. A tensor of updated values to scatter into
-    the output tensor.
-* <b>`shape`</b>: A `Tensor`. Must have the same type as `indices`.
-    A vector. The shape of the resulting tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `updates`.
- A new tensor with the given shape and updates applied according
- to the indices.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sign.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sign.md
deleted file mode 100644
index e7fa339847..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sign.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.sign(x, name=None)` {#sign}
-
-Returns an element-wise indication of the sign of a number.
-
-`y = sign(x) = -1` if `x < 0`; 0 if `x == 0`; 1 if `x > 0`.
-
-For complex numbers, `y = sign(x) = x / |x|` if `x != 0`, otherwise `y = 0`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_maximum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_maximum.md
deleted file mode 100644
index 2f2759f2c6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.sparse_maximum.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.sparse_maximum(sp_a, sp_b, name=None)` {#sparse_maximum}
-
-Returns the element-wise max of two SparseTensors.
-
-Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
-Example:
-
-```python
-sp_zero = tf.SparseTensor([[0]], [0], [7])
-sp_one = tf.SparseTensor([[1]], [1], [7])
-res = tf.sparse_maximum(sp_zero, sp_one).eval()
-# "res" should be equal to SparseTensor([[0], [1]], [0, 1], [7]).
-```
-
-##### Args:
-
-
-* <b>`sp_a`</b>: a `SparseTensor` operand whose dtype is real, and indices
- lexicographically ordered.
-* <b>`sp_b`</b>: the other `SparseTensor` operand with the same requirements (and the
- same shape).
-* <b>`name`</b>: optional name of the operation.
-
-##### Returns:
-
-
-* <b>`output`</b>: the output SparseTensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.FileWriterCache.get.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.FileWriterCache.get.md
deleted file mode 100644
index 0f416a5909..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.FileWriterCache.get.md
+++ /dev/null
@@ -1,13 +0,0 @@
-#### `tf.summary.FileWriterCache.get(logdir)` {#FileWriterCache.get}
-
-Returns the FileWriter for the specified directory.
-
-##### Args:
-
-
-* <b>`logdir`</b>: str, name of the directory.
-
-##### Returns:
-
- A `FileWriter`.
-
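-For example (a sketch; the logdir path is hypothetical):
-
-```python
-writer = tf.summary.FileWriterCache.get('/tmp/logs')
-# Repeated calls with the same logdir return the same cached writer.
-assert writer is tf.summary.FileWriterCache.get('/tmp/logs')
-```
-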
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.SummaryDescription.FromString.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.SummaryDescription.FromString.md
deleted file mode 100644
index 24a3b3f10c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.SummaryDescription.FromString.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.image.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.image.md
deleted file mode 100644
index 64d16619f0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.summary.image.md
+++ /dev/null
@@ -1,47 +0,0 @@
-### `tf.summary.image(name, tensor, max_outputs=3, collections=None)` {#image}
-
-Outputs a `Summary` protocol buffer with images.
-
-The summary has up to `max_outputs` summary values containing images. The
-images are built from `tensor` which must be 4-D with shape `[batch_size,
-height, width, channels]` and where `channels` can be:
-
-* 1: `tensor` is interpreted as Grayscale.
-* 3: `tensor` is interpreted as RGB.
-* 4: `tensor` is interpreted as RGBA.
-
-The images have the same number of channels as the input tensor. For float
-input, the values are normalized one image at a time to fit in the range
-`[0, 255]`. `uint8` values are unchanged. The op uses two different
-normalization algorithms:
-
-* If the input values are all positive, they are rescaled so the largest one
- is 255.
-
-* If any input value is negative, the values are shifted so input value 0.0
- is at 127. They are then rescaled so that either the smallest value is 0,
- or the largest one is 255.
-
-The `tag` in the output `Summary.Value` protobufs is generated from the
-name, with a suffix depending on the `max_outputs` setting:
-
-* If `max_outputs` is 1, the summary value tag is '*name*/image'.
-* If `max_outputs` is greater than 1, the summary value tags are
- generated sequentially as '*name*/image/0', '*name*/image/1', etc.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as a series name in
- TensorBoard.
-* <b>`tensor`</b>: A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height,
- width, channels]` where `channels` is 1, 3, or 4.
-* <b>`max_outputs`</b>: Max number of batch elements to generate images for.
-* <b>`collections`</b>: Optional list of `ops.GraphKeys`. The collections to add the
-  summary to. Defaults to `[ops.GraphKeys.SUMMARIES]`.
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer.
-
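-For example, a minimal sketch (the summary name and image shape are made up):
-
-```python
-images = tf.random_uniform([8, 28, 28, 1])  # hypothetical grayscale batch
-image_summary = tf.summary.image('examples', images, max_outputs=3)
-# Running `image_summary` yields a serialized `Summary` proto that a
-# `tf.summary.FileWriter` can write out for TensorBoard.
-```
-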
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.tan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.tan.md
deleted file mode 100644
index cb05f1427b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.tan.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.tan(x, name=None)` {#tan}
-
-Computes tan of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.test.main.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.test.main.md
deleted file mode 100644
index 4a9fbf12bf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.test.main.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.test.main(argv=None)` {#main}
-
-Runs all unit tests.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.to_int64.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.to_int64.md
deleted file mode 100644
index 0762822b3d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.to_int64.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.to_int64(x, name='ToInt64')` {#to_int64}
-
-Casts a tensor to type `int64`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `int64`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to the `int64`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.ChiefSessionCreator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.ChiefSessionCreator.md
deleted file mode 100644
index e5c7a3953a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.ChiefSessionCreator.md
+++ /dev/null
@@ -1,26 +0,0 @@
-Creates a tf.Session for a chief.
-- - -
-
-#### `tf.train.ChiefSessionCreator.__init__(scaffold=None, master='', config=None, checkpoint_dir=None, checkpoint_filename_with_path=None)` {#ChiefSessionCreator.__init__}
-
-Initializes a chief session creator.
-
-##### Args:
-
-
-* <b>`scaffold`</b>: A `Scaffold` used for gathering or building supportive ops. If
-  not specified, a default one is created. It's used to finalize the graph.
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: `ConfigProto` proto used to configure the session.
-* <b>`checkpoint_dir`</b>: A string. Optional path to a directory where to restore
- variables.
-* <b>`checkpoint_filename_with_path`</b>: Full file name path to the checkpoint file.
-
-
-- - -
-
-#### `tf.train.ChiefSessionCreator.create_session()` {#ChiefSessionCreator.create_session}
-
-
-
-
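-For example, a minimal sketch (the checkpoint directory and `train_op` are
-hypothetical; assumes the TF 1.x `tf.train.MonitoredSession` API):
-
-```python
-creator = tf.train.ChiefSessionCreator(checkpoint_dir='/tmp/ckpts')
-with tf.train.MonitoredSession(session_creator=creator) as sess:
-    # The chief restores from '/tmp/ckpts' if a checkpoint exists;
-    # otherwise it initializes variables via the scaffold.
-    sess.run(train_op)
-```
-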
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md
deleted file mode 100644
index c1b1755ed8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.FtrlOptimizer.md
+++ /dev/null
@@ -1,185 +0,0 @@
-Optimizer that implements the FTRL algorithm.
-
-See this [paper](
-https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
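-
-For example, a minimal sketch (the `loss` tensor is hypothetical):
-
-```python
-opt = tf.train.FtrlOptimizer(learning_rate=0.01,
-                             l1_regularization_strength=0.001)
-train_op = opt.minimize(loss)  # loss: a scalar Tensor defined elsewhere
-```
-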
-- - -
-
-#### `tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl')` {#FtrlOptimizer.__init__}
-
-Construct a new FTRL optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A float value or a constant float `Tensor`.
-* <b>`learning_rate_power`</b>: A float value, must be less than or equal to zero.
-* <b>`initial_accumulator_value`</b>: The starting value for accumulators.
- Only positive values are allowed.
-* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "Ftrl".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#FtrlOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Default to the
- name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#FtrlOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
-  under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.get_name()` {#FtrlOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.get_slot(var, name)` {#FtrlOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.get_slot_names()` {#FtrlOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#FtrlOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.LooperThread.loop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.LooperThread.loop.md
deleted file mode 100644
index 6665ca7369..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.LooperThread.loop.md
+++ /dev/null
@@ -1,22 +0,0 @@
-#### `tf.train.LooperThread.loop(coord, timer_interval_secs, target, args=None, kwargs=None)` {#LooperThread.loop}
-
-Start a LooperThread that calls a function periodically.
-
-If `timer_interval_secs` is None, the thread calls `target(args)`
-repeatedly. Otherwise `target(args)` is called every `timer_interval_secs`
-seconds. The thread terminates when a stop of the coordinator is
-requested.
-
-##### Args:
-
-
-* <b>`coord`</b>: A Coordinator.
-* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
-* <b>`target`</b>: A callable object.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Returns:
-
- The started thread.
-
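-For example (a sketch; `fetch_stats` is a hypothetical callable):
-
-```python
-coord = tf.train.Coordinator()
-thread = tf.train.LooperThread.loop(coord, timer_interval_secs=5.0,
-                                    target=fetch_stats)
-# ... run training ...
-coord.request_stop()
-coord.join([thread])
-```
-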
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.NewCheckpointReader.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.NewCheckpointReader.md
deleted file mode 100644
index 324dcf80c5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.NewCheckpointReader.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.train.NewCheckpointReader(filepattern)` {#NewCheckpointReader}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.Optimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.Optimizer.md
deleted file mode 100644
index 626a0a87ab..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.Optimizer.md
+++ /dev/null
@@ -1,265 +0,0 @@
-Base class for optimizers.
-
-This class defines the API to add Ops to train a model. You never use this
-class directly, but instead instantiate one of its subclasses such as
-`GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`.
-
-### Usage
-
-```python
-# Create an optimizer with the desired parameters.
-opt = GradientDescentOptimizer(learning_rate=0.1)
-# Add Ops to the graph to minimize a cost by updating a list of variables.
-# "cost" is a Tensor, and the list of variables contains tf.Variable
-# objects.
-opt_op = opt.minimize(cost, var_list=<list of variables>)
-```
-
-In the training program you will just have to run the returned Op.
-
-```python
-# Execute opt_op to do one step of training:
-opt_op.run()
-```
-
-### Processing gradients before applying them.
-
-Calling `minimize()` takes care of both computing the gradients and
-applying them to the variables. If you want to process the gradients
-before applying them you can instead use the optimizer in three steps:
-
-1. Compute the gradients with `compute_gradients()`.
-2. Process the gradients as you wish.
-3. Apply the processed gradients with `apply_gradients()`.
-
-Example:
-
-```python
-# Create an optimizer.
-opt = GradientDescentOptimizer(learning_rate=0.1)
-
-# Compute the gradients for a list of variables.
-grads_and_vars = opt.compute_gradients(loss, <list of variables>)
-
-# grads_and_vars is a list of tuples (gradient, variable). Do whatever you
-# need to the 'gradient' part, for example cap them, etc.
-capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]
-
-# Ask the optimizer to apply the capped gradients.
-opt.apply_gradients(capped_grads_and_vars)
-```
-
-- - -
-
-#### `tf.train.Optimizer.__init__(use_locking, name)` {#Optimizer.__init__}
-
-Create a new Optimizer.
-
-This must be called by the constructors of subclasses.
-
-##### Args:
-
-
-* <b>`use_locking`</b>: Bool. If True apply use locks to prevent concurrent updates
- to variables.
-* <b>`name`</b>: A non-empty string. The name to use for accumulators created
- for the optimizer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If name is malformed.
-
-
-
-- - -
-
-#### `tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#Optimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-- - -
-
-#### `tf.train.Optimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#Optimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
-  under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#Optimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Default to the
- name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-
-### Gating Gradients
-
-Both `minimize()` and `compute_gradients()` accept a `gate_gradients`
-argument that controls the degree of parallelism during the application of
-the gradients.
-
-The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`.
-
-<b>`GATE_NONE`</b>: Compute and apply gradients in parallel. This provides
-the maximum parallelism in execution, at the cost of some non-reproducibility
-in the results. For example, the two gradients of `matmul` depend on the input
-values: with `GATE_NONE` one of the gradients could be applied to one of the
-inputs _before_ the other gradient is computed, resulting in non-reproducible
-results.
-
-<b>`GATE_OP`</b>: For each Op, make sure all gradients are computed before
-they are used. This prevents race conditions for Ops that generate gradients
-for multiple inputs where the gradients depend on the inputs.
-
-<b>`GATE_GRAPH`</b>: Make sure all gradients for all variables are computed
-before any one of them is used. This provides the least parallelism but can
-be useful if you want to process all gradients before applying any of them.
-
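-For example, to trade parallelism for reproducibility (a sketch; `loss` is
-hypothetical):
-
-```python
-opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
-train_op = opt.minimize(loss, gate_gradients=tf.train.Optimizer.GATE_GRAPH)
-```
-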
-### Slots
-
-Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`,
-allocate and manage additional variables associated with the variables to
-train. These are called <i>Slots</i>. Slots have names and you can ask the
-optimizer for the names of the slots that it uses. Once you have a slot name
-you can ask the optimizer for the variable it created to hold the slot value.
-
-This can be useful if you want to debug a training algorithm, report stats
-about the slots, etc.
-
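-For example (a sketch; `loss` and `var` stand for a loss tensor and a
-trainable variable in your graph):
-
-```python
-opt = tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9)
-train_op = opt.minimize(loss)
-print(opt.get_slot_names())            # e.g. ['momentum']
-momentum_var = opt.get_slot(var, 'momentum')
-```
-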
-- - -
-
-#### `tf.train.Optimizer.get_slot_names()` {#Optimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.Optimizer.get_slot(var, name)` {#Optimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.Optimizer.get_name()` {#Optimizer.get_name}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.Saver.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.Saver.md
deleted file mode 100644
index d44000649a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.Saver.md
+++ /dev/null
@@ -1,372 +0,0 @@
-Saves and restores variables.
-
-See [Variables](../../how_tos/variables/index.md)
-for an overview of variables, saving and restoring.
-
-The `Saver` class adds ops to save and restore variables to and from
-*checkpoints*. It also provides convenience methods to run these ops.
-
-Checkpoints are binary files in a proprietary format which map variable names
-to tensor values. The best way to examine the contents of a checkpoint is to
-load it using a `Saver`.
-
-Savers can automatically number checkpoint filenames with a provided counter.
-This lets you keep multiple checkpoints at different steps while training a
-model. For example you can number the checkpoint filenames with the training
-step number. To avoid filling up disks, savers manage checkpoint files
-automatically. For example, they can keep only the N most recent files, or
-one checkpoint for every N hours of training.
-
-You number checkpoint filenames by passing a value to the optional
-`global_step` argument to `save()`:
-
-```python
-saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'
-...
-saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'
-```
-
-Additionally, optional arguments to the `Saver()` constructor let you control
-the proliferation of checkpoint files on disk:
-
-* `max_to_keep` indicates the maximum number of recent checkpoint files to
- keep. As new files are created, older files are deleted. If None or 0,
- all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent
- checkpoint files are kept.)
-
-* `keep_checkpoint_every_n_hours`: In addition to keeping the most recent
- `max_to_keep` checkpoint files, you might want to keep one checkpoint file
- for every N hours of training. This can be useful if you want to later
- analyze how a model progressed during a long training session. For
- example, passing `keep_checkpoint_every_n_hours=2` ensures that you keep
- one checkpoint file for every 2 hours of training. The default value of
- 10,000 hours effectively disables the feature.
-
-Note that you still have to call the `save()` method to save the model.
-Passing these arguments to the constructor will not save variables
-automatically for you.
-
-A training program that saves regularly looks like:
-
-```python
-...
-# Create a saver.
-saver = tf.train.Saver(...variables...)
-# Launch the graph and train, saving the model every 1,000 steps.
-sess = tf.Session()
-for step in range(1000000):
- sess.run(..training_op..)
- if step % 1000 == 0:
- # Append the step number to the checkpoint name:
- saver.save(sess, 'my-model', global_step=step)
-```
-
-In addition to checkpoint files, savers keep a protocol buffer on disk with
-the list of recent checkpoints. This is used to manage numbered checkpoint
-files and by `latest_checkpoint()`, which makes it easy to discover the path
-to the most recent checkpoint. That protocol buffer is stored in a file named
-'checkpoint' next to the checkpoint files.
-
-If you create several savers, you can specify a different filename for the
-protocol buffer file in the call to `save()`.
-
-- - -
-
-#### `tf.train.Saver.__init__(var_list=None, reshape=False, sharded=False, max_to_keep=5, keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False, saver_def=None, builder=None, defer_build=False, allow_empty=False, write_version=2, pad_step_number=False)` {#Saver.__init__}
-
-Creates a `Saver`.
-
-The constructor adds ops to save and restore variables.
-
-`var_list` specifies the variables that will be saved and restored. It can
-be passed as a `dict` or a list:
-
-* A `dict` of names to variables: The keys are the names that will be
- used to save or restore the variables in the checkpoint files.
-* A list of variables: The variables will be keyed with their op name in
- the checkpoint files.
-
-For example:
-
-```python
-v1 = tf.Variable(..., name='v1')
-v2 = tf.Variable(..., name='v2')
-
-# Pass the variables as a dict:
-saver = tf.train.Saver({'v1': v1, 'v2': v2})
-
-# Or pass them as a list.
-saver = tf.train.Saver([v1, v2])
-# Passing a list is equivalent to passing a dict with the variable op names
-# as keys:
-saver = tf.train.Saver({v.op.name: v for v in [v1, v2]})
-```
-
-The optional `reshape` argument, if `True`, allows restoring a variable from
-a save file where the variable had a different shape, but the same number
-of elements and type. This is useful if you have reshaped a variable and
-want to reload it from an older checkpoint.
-
-The optional `sharded` argument, if `True`, instructs the saver to shard
-checkpoints per device.
-
-##### Args:
-
-
-* <b>`var_list`</b>: A list of `Variable`/`SaveableObject`, or a dictionary mapping
- names to `SaveableObject`s. If `None`, defaults to the list of all
- saveable objects.
-* <b>`reshape`</b>: If `True`, allows restoring parameters from a checkpoint
- where the variables have a different shape.
-* <b>`sharded`</b>: If `True`, shard the checkpoints, one per device.
-* <b>`max_to_keep`</b>: Maximum number of recent checkpoints to keep.
- Defaults to 5.
-* <b>`keep_checkpoint_every_n_hours`</b>: How often to keep checkpoints.
- Defaults to 10,000 hours.
-* <b>`name`</b>: String. Optional name to use as a prefix when adding operations.
-* <b>`restore_sequentially`</b>: A `Bool`, which if true, causes restore of different
- variables to happen sequentially within each device. This can lower
- memory usage when restoring very large models.
-* <b>`saver_def`</b>: Optional `SaverDef` proto to use instead of running the
- builder. This is only useful for specialty code that wants to recreate
- a `Saver` object for a previously built `Graph` that had a `Saver`.
- The `saver_def` proto should be the one returned by the
- `as_saver_def()` call of the `Saver` that was created for that `Graph`.
-* <b>`builder`</b>: Optional `SaverBuilder` to use if a `saver_def` was not provided.
- Defaults to `BaseSaverBuilder()`.
-* <b>`defer_build`</b>: If `True`, defer adding the save and restore ops to the
- `build()` call. In that case `build()` should be called before
- finalizing the graph or using the saver.
-* <b>`allow_empty`</b>: If `False` (default) raise an error if there are no
- variables in the graph. Otherwise, construct the saver anyway and make
- it a no-op.
-* <b>`write_version`</b>: controls what format to use when saving checkpoints. It
- also affects certain filepath matching logic. The V2 format is the
- recommended choice: it is much more optimized than V1 in terms of
- memory required and latency incurred during restore. Regardless of
- this flag, the Saver is able to restore from both V2 and V1 checkpoints.
-* <b>`pad_step_number`</b>: if True, pads the global step number in the checkpoint
- filepaths to some fixed width (8 by default). This is turned off by
- default.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` is invalid.
-* <b>`ValueError`</b>: If any of the keys or values in `var_list` are not unique.
-
-
-- - -
-
-#### `tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None, meta_graph_suffix='meta', write_meta_graph=True, write_state=True)` {#Saver.save}
-
-Saves variables.
-
-This method runs the ops added by the constructor for saving variables.
-It requires a session in which the graph was launched. The variables to
-save must also have been initialized.
-
-The method returns the path of the newly created checkpoint file. This
-path can be passed directly to a call to `restore()`.
-
-##### Args:
-
-
-* <b>`sess`</b>: A Session to use to save the variables.
-* <b>`save_path`</b>: String. Path to the checkpoint filename. If the saver is
- `sharded`, this is the prefix of the sharded checkpoint filename.
-* <b>`global_step`</b>: If provided the global step number is appended to
- `save_path` to create the checkpoint filename. The optional argument
- can be a `Tensor`, a `Tensor` name or an integer.
-* <b>`latest_filename`</b>: Optional name for the protocol buffer file that will
-  contain the list of most recent checkpoint filenames. That file,
- kept in the same directory as the checkpoint files, is automatically
- managed by the saver to keep track of recent checkpoints. Defaults to
- 'checkpoint'.
-* <b>`meta_graph_suffix`</b>: Suffix for `MetaGraphDef` file. Defaults to 'meta'.
-* <b>`write_meta_graph`</b>: `Boolean` indicating whether or not to write the meta
- graph file.
-* <b>`write_state`</b>: `Boolean` indicating whether or not to write the
- `CheckpointStateProto`.
-
-##### Returns:
-
- A string: path at which the variables were saved. If the saver is
- sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn'
- is the number of shards created.
- If the saver is empty, returns None.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sess` is not a `Session`.
-* <b>`ValueError`</b>: If `latest_filename` contains path components, or if it
- collides with `save_path`.
-* <b>`RuntimeError`</b>: If save and restore ops weren't built.
-
-
-- - -
-
-#### `tf.train.Saver.restore(sess, save_path)` {#Saver.restore}
-
-Restores previously saved variables.
-
-This method runs the ops added by the constructor for restoring variables.
-It requires a session in which the graph was launched. The variables to
-restore do not have to have been initialized, as restoring is itself a way
-to initialize variables.
-
-The `save_path` argument is typically a value previously returned from a
-`save()` call, or a call to `latest_checkpoint()`.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session` to use to restore the parameters.
-* <b>`save_path`</b>: Path where parameters were previously saved.
-
-
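-For example (a sketch; the checkpoint directory is hypothetical):
-
-```python
-saver = tf.train.Saver()
-with tf.Session() as sess:
-  saver.restore(sess, tf.train.latest_checkpoint('/tmp/model_dir'))
-  # Variables now hold the values from the most recent checkpoint.
-```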
-
-Other utility methods.
-
-- - -
-
-#### `tf.train.Saver.last_checkpoints` {#Saver.last_checkpoints}
-
-List of not-yet-deleted checkpoint filenames.
-
-You can pass any of the returned values to `restore()`.
-
-##### Returns:
-
- A list of checkpoint filenames, sorted from oldest to newest.
-
-
-- - -
-
-#### `tf.train.Saver.set_last_checkpoints_with_time(last_checkpoints_with_time)` {#Saver.set_last_checkpoints_with_time}
-
-Sets the list of old checkpoint filenames and timestamps.
-
-##### Args:
-
-
-* <b>`last_checkpoints_with_time`</b>: A list of tuples of checkpoint filenames and
- timestamps.
-
-##### Raises:
-
-
-* <b>`AssertionError`</b>: If last_checkpoints_with_time is not a list.
-
-
-- - -
-
-#### `tf.train.Saver.recover_last_checkpoints(checkpoint_paths)` {#Saver.recover_last_checkpoints}
-
-Recovers the internal saver state after a crash.
-
-This method is useful for recovering the "self._last_checkpoints" state.
-
-Globs for the checkpoints pointed to by `checkpoint_paths`. If the files
-exist, their mtime is used as the checkpoint timestamp.
-
-##### Args:
-
-
-* <b>`checkpoint_paths`</b>: a list of checkpoint paths.
-
-
-- - -
-
-#### `tf.train.Saver.as_saver_def()` {#Saver.as_saver_def}
-
-Generates a `SaverDef` representation of this saver.
-
-##### Returns:
-
- A `SaverDef` proto.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.Saver.build()` {#Saver.build}
-
-Builds saver_def.
-
-
-- - -
-
-#### `tf.train.Saver.export_meta_graph(filename=None, collection_list=None, as_text=False, export_scope=None, clear_devices=False)` {#Saver.export_meta_graph}
-
-Writes `MetaGraphDef` to save_path/filename.
-
-##### Args:
-
-
-* <b>`filename`</b>: Optional meta_graph filename including the path.
-* <b>`collection_list`</b>: List of string keys to collect.
-* <b>`as_text`</b>: If `True`, writes the meta_graph as an ASCII proto.
-* <b>`export_scope`</b>: Optional `string`. Name scope to remove.
-* <b>`clear_devices`</b>: Whether or not to clear the device field for an `Operation`
- or `Tensor` during export.
-
-##### Returns:
-
- A `MetaGraphDef` proto.
-
-
-- - -
-
-#### `tf.train.Saver.from_proto(saver_def, import_scope=None)` {#Saver.from_proto}
-
-Returns a `Saver` object created from `saver_def`.
-
-##### Args:
-
-
-* <b>`saver_def`</b>: a `SaveDef` protocol buffer.
-* <b>`import_scope`</b>: Optional `string`. Name scope to use.
-
-##### Returns:
-
- A `Saver` built from saver_def.
-
-
-- - -
-
-#### `tf.train.Saver.set_last_checkpoints(last_checkpoints)` {#Saver.set_last_checkpoints}
-
-DEPRECATED: Use set_last_checkpoints_with_time.
-
-Sets the list of old checkpoint filenames.
-
-##### Args:
-
-
-* <b>`last_checkpoints`</b>: A list of checkpoint filenames.
-
-##### Raises:
-
-
-* <b>`AssertionError`</b>: If last_checkpoints is not a list.
-
-
-- - -
-
-#### `tf.train.Saver.to_proto(export_scope=None)` {#Saver.to_proto}
-
-Converts this `Saver` to a `SaverDef` protocol buffer.
-
-##### Args:
-
-
-* <b>`export_scope`</b>: Optional `string`. Name scope to remove.
-
-##### Returns:
-
- A `SaverDef` protocol buffer.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.SessionManager.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.SessionManager.md
deleted file mode 100644
index c142b2aca8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.SessionManager.md
+++ /dev/null
@@ -1,209 +0,0 @@
-Training helper that restores from checkpoint and creates session.
-
-This class is a small wrapper that takes care of session creation and
-checkpoint recovery. It also provides functions that to facilitate
-coordination among multiple training threads or processes.
-
-* Checkpointing trained variables as the training progresses.
-* Initializing variables on startup, restoring them from the most recent
-  checkpoint after a crash, or waiting for checkpoints to become available.
-
-### Usage:
-
-```python
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a SessionManager that will checkpoint the model in '/tmp/mydir'.
- sm = SessionManager()
- sess = sm.prepare_session(master, init_op, saver, checkpoint_dir)
- # Use the session to train the graph.
- while True:
- sess.run(<my_train_op>)
-```
-
-`prepare_session()` initializes or restores a model. It requires `init_op`
-and `saver` as arguments.
-
-A second process could wait for the model to be ready by doing the following:
-
-```python
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a SessionManager that will wait for the model to become ready.
- sm = SessionManager()
- sess = sm.wait_for_session(master)
- # Use the session to train the graph.
- while True:
- sess.run(<my_train_op>)
-```
-
-`wait_for_session()` waits for a model to be initialized by other processes.
-- - -
-
-#### `tf.train.SessionManager.__init__(local_init_op=None, ready_op=None, ready_for_local_init_op=None, graph=None, recovery_wait_secs=30)` {#SessionManager.__init__}
-
-Creates a SessionManager.
-
-The `local_init_op` is an `Operation` that is always run after a new session
-is created. If `None`, this step is skipped.
-
-The `ready_op` is an `Operation` used to check if the model is ready. The
-model is considered ready if that operation returns an empty 1D string
-tensor. If the operation returns a non-empty 1D string tensor, the elements
-are concatenated and used to indicate to the user why the model is not
-ready.
-
-The `ready_for_local_init_op` is an `Operation` used to check if the model
-is ready to run local_init_op. The model is considered ready if that
-operation returns an empty 1D string tensor. If the operation returns a
-non-empty 1D string tensor, the elements are concatenated and used to indicate
-to the user why the model is not ready.
-
-If `ready_op` is `None`, the model is not checked for readiness.
-
-`recovery_wait_secs` is the number of seconds between checks that
-the model is ready. It is used by processes to wait for a model to
-be initialized or restored. Defaults to 30 seconds.
-
-##### Args:
-
-
-* <b>`local_init_op`</b>: An `Operation` run immediately after session creation.
- Usually used to initialize tables and local variables.
-* <b>`ready_op`</b>: An `Operation` to check if the model is initialized.
-* <b>`ready_for_local_init_op`</b>: An `Operation` to check if the model is ready
- to run local_init_op.
-* <b>`graph`</b>: The `Graph` that the model will use.
-* <b>`recovery_wait_secs`</b>: Seconds between checks for the model to be ready.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If ready_for_local_init_op is not None but local_init_op is
- None
-
-
-- - -
-
-#### `tf.train.SessionManager.prepare_session(master, init_op=None, saver=None, checkpoint_dir=None, checkpoint_filename_with_path=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None, init_feed_dict=None, init_fn=None)` {#SessionManager.prepare_session}
-
-Creates a `Session`. Makes sure the model is ready to be used.
-
-Creates a `Session` on 'master'. If a `saver` object is passed in, and
-`checkpoint_dir` points to a directory containing valid checkpoint
-files, then it will try to recover the model from checkpoint. If
-no checkpoint files are available, and `wait_for_checkpoint` is
-`True`, then the process would check every `recovery_wait_secs`,
-up to `max_wait_secs`, for recovery to succeed.
-
-If the model cannot be recovered successfully then it is initialized by
-either running the provided `init_op`, or calling the provided `init_fn`.
-The local_init_op is also run after init_op and init_fn, regardless of
-whether the model was recovered successfully, but only if
-ready_for_local_init_op passes.
-
-It is an error if the model cannot be recovered and no `init_op`
-or `init_fn` or `local_init_op` is passed.
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`init_op`</b>: Optional `Operation` used to initialize the model.
-* <b>`saver`</b>: A `Saver` object used to restore a model.
-* <b>`checkpoint_dir`</b>: Path to the checkpoint files. The latest checkpoint in the
- dir will be used to restore.
-* <b>`checkpoint_filename_with_path`</b>: Full file name path to the checkpoint file.
-* <b>`wait_for_checkpoint`</b>: Whether to wait for checkpoint to become available.
-* <b>`max_wait_secs`</b>: Maximum time to wait for checkpoints to become available.
-* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
-* <b>`init_feed_dict`</b>: Optional dictionary that maps `Tensor` objects to feed
- values. This feed dictionary is passed to the session `run()` call when
- running the init op.
-* <b>`init_fn`</b>: Optional callable used to initialize the model. Called after the
- optional `init_op` is called. The callable must accept one argument,
- the session being initialized.
-
-##### Returns:
-
- A `Session` object that can be used to drive the model.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If the model cannot be initialized or recovered.
-* <b>`ValueError`</b>: If both checkpoint_dir and checkpoint_filename_with_path are
-  set.
-
-
-- - -
-
-#### `tf.train.SessionManager.recover_session(master, saver=None, checkpoint_dir=None, checkpoint_filename_with_path=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None)` {#SessionManager.recover_session}
-
-Creates a `Session`, recovering if possible.
-
-Creates a new session on 'master'. If the session is not initialized
-and can be recovered from a checkpoint, recover it.
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`saver`</b>: A `Saver` object used to restore a model.
-* <b>`checkpoint_dir`</b>: Path to the checkpoint files. The latest checkpoint in the
- dir will be used to restore.
-* <b>`checkpoint_filename_with_path`</b>: Full file name path to the checkpoint file.
-* <b>`wait_for_checkpoint`</b>: Whether to wait for checkpoint to become available.
-* <b>`max_wait_secs`</b>: Maximum time to wait for checkpoints to become available.
-* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
-
-##### Returns:
-
- A pair (sess, initialized) where 'initialized' is `True` if
- the session could be recovered and initialized, `False` otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both checkpoint_dir and checkpoint_filename_with_path are
- set.
-
-
-- - -
-
-#### `tf.train.SessionManager.wait_for_session(master, config=None, max_wait_secs=inf)` {#SessionManager.wait_for_session}
-
-Creates a new `Session` and waits for model to be ready.
-
-Creates a new `Session` on 'master'. Waits for the model to be
-initialized or recovered from a checkpoint. It's expected that
-another thread or process will make the model ready. This method is
-intended to be used by threads/processes that participate in a
-distributed training configuration where a different thread/process
-is responsible for initializing or recovering the model being trained.
-
-NB: The amount of time this method waits for the session is bounded
-by max_wait_secs. By default, this function will wait indefinitely.
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: Optional ConfigProto proto used to configure the session.
-* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
-
-##### Returns:
-
- A `Session`. May be None if the operation exceeds the timeout
- specified by config.operation_timeout_in_ms.
-
-##### Raises:
-
- tf.DeadlineExceededError: if the session is not available after
- max_wait_secs.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.checkpoint_exists.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.checkpoint_exists.md
deleted file mode 100644
index f28e994e52..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.checkpoint_exists.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.train.checkpoint_exists(checkpoint_prefix)` {#checkpoint_exists}
-
-Checks whether a V1 or V2 checkpoint exists with the specified prefix.
-
-This is the recommended way to check if a checkpoint exists, since it takes
-into account the naming difference between V1 and V2 formats.
-
-##### Args:
-
-
-* <b>`checkpoint_prefix`</b>: the prefix of a V1 or V2 checkpoint, with V2 taking
- priority. Typically the result of `Saver.save()` or that of
- `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or
- V1/V2.
-
-##### Returns:
-
- A bool, true iff a checkpoint referred to by `checkpoint_prefix` exists.
-
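-For example (a sketch; the save call and path mirror the `Saver.save()` docs):
-
-```python
-prefix = saver.save(sess, '/tmp/model', global_step=100)  # e.g. '/tmp/model-100'
-if tf.train.checkpoint_exists(prefix):
-  saver.restore(sess, prefix)
-```
-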
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md
deleted file mode 100644
index 2c90ddfafe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.maybe_shuffle_batch.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.train.maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch}
-
-Creates batches by randomly shuffling conditionally-enqueued tensors.
-
-See docstring in `shuffle_batch` for more details.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number elements in the queue after a
- dequeue, used to ensure a level of mixing of elements.
-* <b>`keep_input`</b>: A `bool` Tensor. This tensor controls whether the input is
- added to the queue or not. If it is a scalar and evaluates `True`, then
- `tensors` are all added to the queue. If it is a vector and `enqueue_many`
- is `True`, then each example is added to the queue only if the
-  corresponding value in `keep_input` is `True`. This tensor essentially acts
- as a filtering mechanism.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensor_list`.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensor_list`.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the types as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
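-For example, a minimal sketch (the `image` and `label` tensors and the keep
-condition are hypothetical):
-
-```python
-keep = tf.greater(label, 0)  # scalar bool: whether to enqueue this example
-images, labels = tf.train.maybe_shuffle_batch(
-    [image, label], batch_size=32, capacity=1000,
-    min_after_dequeue=500, keep_input=keep)
-```
-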
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.piecewise_constant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.piecewise_constant.md
deleted file mode 100644
index b41f38eb49..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.piecewise_constant.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.train.piecewise_constant(x, boundaries, values, name=None)` {#piecewise_constant}
-
-Piecewise constant from boundaries and interval values.
-
-Example: use a learning rate that's 1.0 for the first 100000 steps, 0.5
- for steps 100001 to 110000, and 0.1 for any additional steps.
-
-```python
-global_step = tf.Variable(0, trainable=False)
-boundaries = [100000, 110000]
-values = [1.0, 0.5, 0.1]
-learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)
-
-# Later, whenever we perform an optimization step, we increment global_step.
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A 0-D scalar `Tensor`. Must be one of the following types: `float32`,
- `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`.
-* <b>`boundaries`</b>: A list of `Tensor`s or `int`s or `float`s with strictly
- increasing entries, and with all elements having the same type as `x`.
-* <b>`values`</b>: A list of `Tensor`s or `float`s or `int`s that specifies the values
- for the intervals defined by `boundaries`. It should have one more element
- than `boundaries`, and all elements should have the same type.
-* <b>`name`</b>: A string. Optional name of the operation. Defaults to
- 'PiecewiseConstant'.
-
-##### Returns:
-
- A 0-D Tensor. Its value is `values[0]` when `x <= boundaries[0]`,
- `values[1]` when `x > boundaries[0]` and `x <= boundaries[1]`, ...,
- and values[-1] when `x > boundaries[-1]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if types of `x` and `boundaries` do not match, or types of all
- `values` do not match.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.polynomial_decay.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.polynomial_decay.md
deleted file mode 100644
index 64a365fb08..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.polynomial_decay.md
+++ /dev/null
@@ -1,78 +0,0 @@
-### `tf.train.polynomial_decay(learning_rate, global_step, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False, name=None)` {#polynomial_decay}
-
-Applies a polynomial decay to the learning rate.
-
-It is commonly observed that a monotonically decreasing learning rate, whose
-degree of change is carefully chosen, results in a better performing model.
-This function applies a polynomial decay function to a provided initial
-`learning_rate` to reach an `end_learning_rate` in the given `decay_steps`.
-
-It requires a `global_step` value to compute the decayed learning rate. You
-can just pass a TensorFlow variable that you increment at each training step.
-
-The function returns the decayed learning rate. It is computed as:
-
-```python
-global_step = min(global_step, decay_steps)
-decayed_learning_rate = (learning_rate - end_learning_rate) *
- (1 - global_step / decay_steps) ^ (power) +
- end_learning_rate
-
-```
-
-If `cycle` is True then a multiple of `decay_steps` is used, the first one
-that is bigger than `global_step`.
-
-```python
-decay_steps = decay_steps * ceil(global_step / decay_steps)
-decayed_learning_rate = (learning_rate - end_learning_rate) *
- (1 - global_step / decay_steps) ^ (power) +
- end_learning_rate
-
-```
-
-Example: decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5):
-
-```python
-...
-global_step = tf.Variable(0, trainable=False)
-starter_learning_rate = 0.1
-end_learning_rate = 0.01
-decay_steps = 10000
-learning_rate = tf.train.polynomial_decay(starter_learning_rate, global_step,
- decay_steps, end_learning_rate,
- power=0.5)
-# Passing global_step to minimize() will increment it at each step.
-learning_step = (
- tf.train.GradientDescentOptimizer(learning_rate)
- .minimize(...my loss..., global_step=global_step)
-)
-```
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The initial learning rate.
-* <b>`global_step`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
- Global step to use for the decay computation. Must not be negative.
-* <b>`decay_steps`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
- Must be positive. See the decay computation above.
-* <b>`end_learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The minimal end learning rate.
-* <b>`power`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The power of the polynomial. Defaults to sqrt, i.e. 0.5.
-* <b>`cycle`</b>: A boolean, whether or not it should cycle beyond decay_steps.
-* <b>`name`</b>: String. Optional name of the operation. Defaults to
- 'PolynomialDecay'.
-
-##### Returns:
-
- A scalar `Tensor` of the same type as `learning_rate`. The decayed
- learning rate.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `global_step` is not supplied.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.replica_device_setter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.replica_device_setter.md
deleted file mode 100644
index 4009cc9b30..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.train.replica_device_setter.md
+++ /dev/null
@@ -1,63 +0,0 @@
-### `tf.train.replica_device_setter(ps_tasks=0, ps_device='/job:ps', worker_device='/job:worker', merge_devices=True, cluster=None, ps_ops=None, ps_strategy=None)` {#replica_device_setter}
-
-Return a `device function` to use when building a Graph for replicas.
-
-Device functions are used in a `with tf.device(device_function):` statement to
-automatically assign devices to `Operation` objects as they are constructed.
-Device constraints are added from the inner-most context first, working
-outwards. The merging behavior adds constraints to fields that are yet unset
-by a more inner context. Currently the fields are (job, task, cpu/gpu).
-
-If `cluster` is `None`, and `ps_tasks` is 0, the returned function is a no-op.
-Otherwise, the value of `ps_tasks` is derived from `cluster`.
-
-By default, only Variable ops are placed on ps tasks, and the placement
-strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
-to do more intelligent placement, such as
-`tf.contrib.training.GreedyLoadBalancingStrategy`.
-
-For example,
-
-```python
-# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
-# jobs on hosts worker0, worker1 and worker2.
-cluster_spec = {
- "ps": ["ps0:2222", "ps1:2222"],
- "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
-with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)):
- # Build your graph
- v1 = tf.Variable(...) # assigned to /job:ps/task:0
- v2 = tf.Variable(...) # assigned to /job:ps/task:1
- v3 = tf.Variable(...) # assigned to /job:ps/task:0
-# Run compute
-```
-
-##### Args:
-
-
-* <b>`ps_tasks`</b>: Number of tasks in the `ps` job. Ignored if `cluster` is
- provided.
-* <b>`ps_device`</b>: String. Device of the `ps` job. If empty no `ps` job is used.
- Defaults to `ps`.
-* <b>`worker_device`</b>: String. Device of the `worker` job. If empty no `worker`
- job is used.
-* <b>`merge_devices`</b>: `Boolean`. If `True`, merges device specifications rather
-  than overriding them, only setting a device field if the corresponding
-  constraint is completely unset.
-* <b>`cluster`</b>: `ClusterDef` proto or `ClusterSpec`.
-* <b>`ps_ops`</b>: List of strings representing `Operation` types that need to be
- placed on `ps` devices. If `None`, defaults to `["Variable"]`.
-* <b>`ps_strategy`</b>: A callable invoked for every ps `Operation` (i.e. matched by
- `ps_ops`), that takes the `Operation` and returns the ps task index to
- use. If `None`, defaults to a round-robin strategy across all `ps`
- devices.
-
-##### Returns:
-
- A function to pass to `tf.device()`.
-
-##### Raises:
-
-* <b>`TypeError`</b>: if `cluster` is not a dictionary or `ClusterDef` protocol
- buffer, or if `ps_strategy` is provided but not a callable.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.transpose.md
deleted file mode 100644
index c6b76c7824..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.transpose.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.transpose(a, perm=None, name='transpose')` {#transpose}
-
-Transposes `a`. Permutes the dimensions according to `perm`.
-
-The returned tensor's dimension i will correspond to the input dimension
-`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is
-the rank of the input tensor. Hence by default, this operation performs a
-regular matrix transpose on 2-D input Tensors.
-
-For example:
-
-```python
-# 'x' is [[1 2 3]
-# [4 5 6]]
-tf.transpose(x) ==> [[1 4]
- [2 5]
- [3 6]]
-
-# Equivalently
-tf.transpose(x, perm=[1, 0]) ==> [[1 4]
- [2 5]
- [3 6]]
-
-# 'perm' is more useful for n-dimensional tensors, for n > 2
-# 'x' is [[[1 2 3]
-# [4 5 6]]
-# [[7 8 9]
-# [10 11 12]]]
-# Take the transpose of the matrices in dimension-0
-tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4]
- [2 5]
- [3 6]]
-
- [[7 10]
- [8 11]
- [9 12]]]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`.
-* <b>`perm`</b>: A permutation of the dimensions of `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A transposed `Tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truncated_normal_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truncated_normal_initializer.md
deleted file mode 100644
index 7ccec1074a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.truncated_normal_initializer.md
+++ /dev/null
@@ -1,30 +0,0 @@
-Initializer that generates a truncated normal distribution.
-
-These values are similar to values from a `random_normal_initializer`
-except that values more than two standard deviations from the mean
-are discarded and re-drawn. This is the recommended initializer for
-neural network weights and filters.
-
-Args:
- mean: a python scalar or a scalar tensor. Mean of the random values
- to generate.
- stddev: a python scalar or a scalar tensor. Standard deviation of the
- random values to generate.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
- dtype: The data type. Only floating point types are supported.
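-
-A minimal usage sketch (the variable name and shape are illustrative):
-
-```python
-init = tf.truncated_normal_initializer(mean=0.0, stddev=0.1)
-weights = tf.get_variable("weights", shape=[784, 256], initializer=init)
-```
-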
-- - -
-
-#### `tf.truncated_normal_initializer.__call__(shape, dtype=None, partition_info=None)` {#truncated_normal_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.truncated_normal_initializer.__init__(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)` {#truncated_normal_initializer.__init__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_op_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_op_scope.md
deleted file mode 100644
index 266ac318e4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.variable_op_scope.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.variable_op_scope(values, name_or_scope, default_name=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, reuse=None, dtype=None, use_resource=None)` {#variable_op_scope}
-
-Deprecated: context manager for defining an op that creates variables.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.verify_tensor_all_finite.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.verify_tensor_all_finite.md
deleted file mode 100644
index 37fa105df5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.verify_tensor_all_finite.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.verify_tensor_all_finite(t, msg, name=None)` {#verify_tensor_all_finite}
-
-Assert that the tensor does not contain any NaN's or Inf's.
-
-##### Args:
-
-
-* <b>`t`</b>: Tensor to check.
-* <b>`msg`</b>: Message to log on failure.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- Same tensor as `t`.
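-
-For example, a small sketch that validates activations during training
-(assuming `x` is a float `Tensor`):
-
-```python
-x = tf.verify_tensor_all_finite(x, msg="x contains NaN or Inf")
-```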
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Dimension.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Dimension.md
deleted file mode 100644
index 18d6d04fc0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.Dimension.md
+++ /dev/null
@@ -1,361 +0,0 @@
-Represents the value of one dimension in a TensorShape.
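-
-For example, a brief illustrative sketch of how unknown values propagate
-through `Dimension` arithmetic:
-
-```python
-d1 = tf.Dimension(8)
-d2 = tf.Dimension(None)              # unknown
-print((d1 + tf.Dimension(4)).value)  # 12
-print((d1 * d2).value)               # None; unknown propagates
-```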
-- - -
-
-#### `tf.Dimension.__add__(other)` {#Dimension.__add__}
-
-Returns the sum of `self` and `other`.
-
-Dimensions are summed as follows:
-
- Dimension(m) + Dimension(n) == Dimension(m + n)
- Dimension(m) + Dimension(None) == Dimension(None)
- Dimension(None) + Dimension(n) == Dimension(None)
- Dimension(None) + Dimension(None) == Dimension(None)
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension whose value is the sum of `self` and `other`.
-
-
-- - -
-
-#### `tf.Dimension.__div__(other)` {#Dimension.__div__}
-
-DEPRECATED: Use `__floordiv__` via `x // y` instead.
-
-This function exists only for backwards compatibility purposes; new code
-should use `__floordiv__` via the syntax `x // y`. Using `x // y`
-communicates clearly that the result rounds down, and is forward compatible
-to Python 3.
-
-##### Args:
-
-
-* <b>`other`</b>: Another `Dimension`.
-
-##### Returns:
-
- A `Dimension` whose value is the integer quotient of `self` and `other`.
-
-
-- - -
-
-#### `tf.Dimension.__eq__(other)` {#Dimension.__eq__}
-
-Returns true if `other` has the same known value as this Dimension.
-
-
-- - -
-
-#### `tf.Dimension.__floordiv__(other)` {#Dimension.__floordiv__}
-
-Returns the quotient of `self` and `other` rounded down.
-
-Dimensions are divided as follows:
-
- Dimension(m) // Dimension(n) == Dimension(m // n)
- Dimension(m) // Dimension(None) == Dimension(None)
- Dimension(None) // Dimension(n) == Dimension(None)
- Dimension(None) // Dimension(None) == Dimension(None)
-
-##### Args:
-
-
-* <b>`other`</b>: Another `Dimension`.
-
-##### Returns:
-
- A `Dimension` whose value is the integer quotient of `self` and `other`.
-
-
-- - -
-
-#### `tf.Dimension.__ge__(other)` {#Dimension.__ge__}
-
-Returns True if `self` is known to be greater than or equal to `other`.
-
-Dimensions are compared as follows:
-
- Dimension(m) >= Dimension(n) == m >= n
- Dimension(m) >= Dimension(None) == None
- Dimension(None) >= Dimension(n) == None
- Dimension(None) >= Dimension(None) == None
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- The value of `self.value >= other.value` if both are known, otherwise
- None.
-
-
-- - -
-
-#### `tf.Dimension.__gt__(other)` {#Dimension.__gt__}
-
-Returns True if `self` is known to be greater than `other`.
-
-Dimensions are compared as follows:
-
- Dimension(m) > Dimension(n) == m > n
- Dimension(m) > Dimension(None) == None
- Dimension(None) > Dimension(n) == None
- Dimension(None) > Dimension(None) == None
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- The value of `self.value > other.value` if both are known, otherwise
- None.
-
-
-- - -
-
-#### `tf.Dimension.__index__()` {#Dimension.__index__}
-
-
-
-
-- - -
-
-#### `tf.Dimension.__init__(value)` {#Dimension.__init__}
-
-Creates a new Dimension with the given value.
-
-
-- - -
-
-#### `tf.Dimension.__int__()` {#Dimension.__int__}
-
-
-
-
-- - -
-
-#### `tf.Dimension.__le__(other)` {#Dimension.__le__}
-
-Returns True if `self` is known to be less than or equal to `other`.
-
-Dimensions are compared as follows:
-
- Dimension(m) <= Dimension(n) == m <= n
- Dimension(m) <= Dimension(None) == None
- Dimension(None) <= Dimension(n) == None
- Dimension(None) <= Dimension(None) == None
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- The value of `self.value <= other.value` if both are known, otherwise
- None.
-
-
-- - -
-
-#### `tf.Dimension.__lt__(other)` {#Dimension.__lt__}
-
-Returns True if `self` is known to be less than `other`.
-
-Dimensions are compared as follows:
-
- Dimension(m) < Dimension(n) == m < n
- Dimension(m) < Dimension(None) == None
- Dimension(None) < Dimension(n) == None
- Dimension(None) < Dimension(None) == None
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- The value of `self.value < other.value` if both are known, otherwise
- None.
-
-
-- - -
-
-#### `tf.Dimension.__mod__(other)` {#Dimension.__mod__}
-
-Returns `self` modulo `other`.
-
-Dimension moduli are computed as follows:
-
- Dimension(m) % Dimension(n) == Dimension(m % n)
- Dimension(m) % Dimension(None) == Dimension(None)
- Dimension(None) % Dimension(n) == Dimension(None)
- Dimension(None) % Dimension(None) == Dimension(None)
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension whose value is `self` modulo `other`.
-
-
-- - -
-
-#### `tf.Dimension.__mul__(other)` {#Dimension.__mul__}
-
-Returns the product of `self` and `other`.
-
-Dimensions are multiplied as follows:
-
-```
- Dimension(m) * Dimension(n) == Dimension(m * n)
- Dimension(m) * Dimension(None) == Dimension(None)
- Dimension(None) * Dimension(n) == Dimension(None)
- Dimension(None) * Dimension(None) == Dimension(None)
-```
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension whose value is the product of `self` and `other`.
-
-
-- - -
-
-#### `tf.Dimension.__ne__(other)` {#Dimension.__ne__}
-
-Returns true if `other` has a different known value from `self`.
-
-
-- - -
-
-#### `tf.Dimension.__repr__()` {#Dimension.__repr__}
-
-
-
-
-- - -
-
-#### `tf.Dimension.__str__()` {#Dimension.__str__}
-
-
-
-
-- - -
-
-#### `tf.Dimension.__sub__(other)` {#Dimension.__sub__}
-
-Returns the subtraction of `other` from `self`.
-
-Dimensions are subtracted as follows:
-
- Dimension(m) - Dimension(n) == Dimension(m - n)
- Dimension(m) - Dimension(None) == Dimension(None)
- Dimension(None) - Dimension(n) == Dimension(None)
- Dimension(None) - Dimension(None) == Dimension(None)
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension whose value is the subtraction of `other` from `self`.
-
-
-- - -
-
-#### `tf.Dimension.assert_is_compatible_with(other)` {#Dimension.assert_is_compatible_with}
-
-Raises an exception if `other` is not compatible with this Dimension.
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` are not compatible (see
- is_compatible_with).
-
-
-- - -
-
-#### `tf.Dimension.is_compatible_with(other)` {#Dimension.is_compatible_with}
-
-Returns true if `other` is compatible with this Dimension.
-
-Two known Dimensions are compatible if they have the same value.
-An unknown Dimension is compatible with all other Dimensions.
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- True if this Dimension and `other` are compatible.
-
-
-- - -
-
-#### `tf.Dimension.merge_with(other)` {#Dimension.merge_with}
-
-Returns a Dimension that combines the information in `self` and `other`.
-
-Dimensions are combined as follows:
-
-```python
- Dimension(n) .merge_with(Dimension(n)) == Dimension(n)
- Dimension(n) .merge_with(Dimension(None)) == Dimension(n)
- Dimension(None).merge_with(Dimension(n)) == Dimension(n)
- Dimension(None).merge_with(Dimension(None)) == Dimension(None)
- Dimension(n) .merge_with(Dimension(m)) raises ValueError for n != m
-```
-
-##### Args:
-
-
-* <b>`other`</b>: Another Dimension.
-
-##### Returns:
-
- A Dimension containing the combined information of `self` and
- `other`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `self` and `other` are not compatible (see
- is_compatible_with).
-
-
-- - -
-
-#### `tf.Dimension.value` {#Dimension.value}
-
-The value of this dimension, or None if it is unknown.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.FixedLenSequenceFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.FixedLenSequenceFeature.md
deleted file mode 100644
index 49d7b07cb4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.FixedLenSequenceFeature.md
+++ /dev/null
@@ -1,59 +0,0 @@
-Configuration for a dense input feature in a sequence item.
-
-To treat a sparse input as dense, provide `allow_missing=True`; otherwise,
-the parse functions will fail on any examples missing this feature.
-
-Fields:
- shape: Shape of input data.
- dtype: Data type of input.
- allow_missing: Whether to allow this feature to be missing from a feature
- list item.
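-
-A minimal declaration sketch (the feature key is illustrative):
-
-```python
-sequence_features = {
-    "tokens": tf.FixedLenSequenceFeature(shape=[], dtype=tf.int64)
-}
-```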
-- - -
-
-#### `tf.FixedLenSequenceFeature.__getnewargs__()` {#FixedLenSequenceFeature.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.__getstate__()` {#FixedLenSequenceFeature.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.__new__(_cls, shape, dtype, allow_missing=False)` {#FixedLenSequenceFeature.__new__}
-
-Create new instance of FixedLenSequenceFeature(shape, dtype, allow_missing)
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.__repr__()` {#FixedLenSequenceFeature.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.allow_missing` {#FixedLenSequenceFeature.allow_missing}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.dtype` {#FixedLenSequenceFeature.dtype}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.shape` {#FixedLenSequenceFeature.shape}
-
-Alias for field number 0
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md
deleted file mode 100644
index eaf4408c9f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.PaddingFIFOQueue.md
+++ /dev/null
@@ -1,313 +0,0 @@
-A FIFOQueue that supports batching variable-sized tensors by padding.
-
-A `PaddingFIFOQueue` may contain components with dynamic shape, while also
-supporting `dequeue_many`. See the constructor for more details.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
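-
-For example, a small sketch (values are illustrative) of batching
-variable-length vectors, padded with zeros by `dequeue_many`:
-
-```python
-queue = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[None]])
-with tf.Session() as sess:
-  sess.run(queue.enqueue(([1, 2, 3],)))
-  sess.run(queue.enqueue(([4, 5],)))
-  print(sess.run(queue.dequeue_many(2)))  # [[1 2 3], [4 5 0]]
-```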
-- - -
-
-#### `tf.PaddingFIFOQueue.__init__(capacity, dtypes, shapes, names=None, shared_name=None, name='padding_fifo_queue')` {#PaddingFIFOQueue.__init__}
-
-Creates a queue that dequeues elements in a first-in first-out order.
-
-A `PaddingFIFOQueue` has bounded capacity; supports multiple concurrent
-producers and consumers; and provides exactly-once delivery.
-
-A `PaddingFIFOQueue` holds a list of up to `capacity` elements. Each
-element is a fixed-length tuple of tensors whose dtypes are
-described by `dtypes`, and whose shapes are described by the `shapes`
-argument.
-
-The `shapes` argument must be specified; each component of a queue
-element must have the respective shape. Shapes of fixed
-rank but variable size are allowed by setting any shape dimension to None.
-In this case, the inputs' shape may vary along the given dimension, and
-`dequeue_many` will pad the given dimension with zeros up to the maximum
-shape of all elements in the given batch.
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
- the number of tensors in each queue element.
-* <b>`shapes`</b>: A list of `TensorShape` objects, with the same length as
- `dtypes`. Any dimension in the `TensorShape` containing value
- `None` is dynamic and allows values to be enqueued with
- variable size in that dimension.
-* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
- with the same length as `dtypes`, or `None`. If specified the dequeue
- methods return a dictionary with the names as keys.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If shapes is not a list of shapes, or the lengths of dtypes
- and shapes do not match, or if names is specified and the lengths of
- dtypes and names do not match.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.close(cancel_pending_enqueues=False, name=None)` {#PaddingFIFOQueue.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.dequeue(name=None)` {#PaddingFIFOQueue.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.dequeue_many(n, name=None)` {#PaddingFIFOQueue.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.dequeue_up_to(n, name=None)` {#PaddingFIFOQueue.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note:** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.dtypes` {#PaddingFIFOQueue.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.enqueue(vals, name=None)` {#PaddingFIFOQueue.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.enqueue_many(vals, name=None)` {#PaddingFIFOQueue.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.from_list(index, queues)` {#PaddingFIFOQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.name` {#PaddingFIFOQueue.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.names` {#PaddingFIFOQueue.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.queue_ref` {#PaddingFIFOQueue.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.shapes` {#PaddingFIFOQueue.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.size(name=None)` {#PaddingFIFOQueue.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md
deleted file mode 100644
index 941f8f5dec..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.QueueBase.md
+++ /dev/null
@@ -1,305 +0,0 @@
-Base class for queue implementations.
-
-A queue is a TensorFlow data structure that stores tensors across
-multiple steps, and exposes operations that enqueue and dequeue
-tensors.
-
-Each queue element is a tuple of one or more tensors, where each
-tuple component has a static dtype, and may have a static shape. The
-queue implementations support versions of enqueue and dequeue that
-handle single elements, and versions that enqueue and dequeue a batch
-of elements at once.
-
-See [`tf.FIFOQueue`](#FIFOQueue) and
-[`tf.RandomShuffleQueue`](#RandomShuffleQueue) for concrete
-implementations of this class, and instructions on how to create
-them.
-- - -
-
-#### `tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)` {#QueueBase.__init__}
-
-Constructs a queue object from a queue reference.
-
-The two optional lists, `shapes` and `names`, must be of the same length
-as `dtypes` if provided. The values at a given index `i` indicate the
-shape and name to use for the corresponding queue component in `dtypes`.
-
-##### Args:
-
-
-* <b>`dtypes`</b>: A list of types. The length of dtypes must equal the number
- of tensors in each element.
-* <b>`shapes`</b>: Constraints on the shapes of tensors in an element:
- A list of shape tuples or None. This list is the same length
- as dtypes. If the shapes of any tensors in the element are constrained,
- all must be; shapes can be None if the shapes should not be constrained.
-* <b>`names`</b>: Optional list of names. If provided, the `enqueue()` and
- `dequeue()` methods will use dictionaries with these names as keys.
- Must be None or a list or tuple of the same length as `dtypes`.
-* <b>`queue_ref`</b>: The queue reference, i.e. the output of the queue op.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
-
-- - -
-
-#### `tf.QueueBase.close(cancel_pending_enqueues=False, name=None)` {#QueueBase.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.QueueBase.dequeue(name=None)` {#QueueBase.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.QueueBase.dequeue_many(n, name=None)` {#QueueBase.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.QueueBase.dequeue_up_to(n, name=None)` {#QueueBase.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note:** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.QueueBase.dtypes` {#QueueBase.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.QueueBase.enqueue(vals, name=None)` {#QueueBase.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.QueueBase.enqueue_many(vals, name=None)` {#QueueBase.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.QueueBase.name` {#QueueBase.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.QueueBase.names` {#QueueBase.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.QueueBase.queue_ref` {#QueueBase.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.QueueBase.shapes` {#QueueBase.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.QueueBase.size(name=None)` {#QueueBase.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.SparseConditionalAccumulator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.SparseConditionalAccumulator.md
deleted file mode 100644
index f0329e8cbb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.SparseConditionalAccumulator.md
+++ /dev/null
@@ -1,209 +0,0 @@
-A conditional accumulator for aggregating sparse gradients.
-
-Sparse gradients are represented by IndexedSlices.
-
-Up-to-date gradients (i.e., gradients whose time step equals the
-accumulator's time step) are added to the accumulator.
-
-Extraction of the average gradient is blocked until the required number of
-gradients has been accumulated.
-
-Args:
- dtype: Datatype of the accumulated gradients.
- shape: Shape of the accumulated gradients.
- shared_name: Optional. If non-empty, this accumulator will be shared under
- the given name across multiple sessions.
- name: Optional name for the accumulator.
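-
-A minimal usage sketch (the shape and values are illustrative):
-
-```python
-acc = tf.SparseConditionalAccumulator(tf.float32, shape=tf.TensorShape([3, 2]))
-apply_op = acc.apply_grad(grad_indices=[0, 2],
-                          grad_values=[[0.0, 1.0], [2.0, 3.0]],
-                          local_step=0)
-avg_grad = acc.take_indexed_slices_grad(num_required=1)
-```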
-- - -
-
-#### `tf.SparseConditionalAccumulator.__init__(dtype, shape=None, shared_name=None, name='sparse_conditional_accumulator')` {#SparseConditionalAccumulator.__init__}
-
-
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.accumulator_ref` {#SparseConditionalAccumulator.accumulator_ref}
-
-The underlying accumulator reference.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.apply_grad(grad_indices, grad_values, grad_shape=None, local_step=0, name=None)` {#SparseConditionalAccumulator.apply_grad}
-
-Attempts to apply a sparse gradient to the accumulator.
-
-The attempt is silently dropped if the gradient is stale, i.e., local_step
-is less than the accumulator's global time step.
-
-A sparse gradient is represented by its indices, values, and a possibly empty
-or `None` shape. The indices must be a vector representing the locations of
-non-zero entries in the tensor. The values are the non-zero slices of the
-gradient, and must have the same first dimension as the indices, i.e., the nnz
-represented by the indices and values must be consistent. The shape, if not
-empty or `None`, must be consistent with the accumulator's shape (if also
-provided).
-
-##### Example:
-
- A tensor `[[0, 0], [0, 1], [2, 3]]` can be represented by:
-
-* <b>`indices`</b>: [1,2]
-* <b>`values`</b>: [[0,1],[2,3]]
-* <b>`shape`</b>: [3, 2]
-
-##### Args:
-
-
-* <b>`grad_indices`</b>: Indices of the sparse gradient to be applied.
-* <b>`grad_values`</b>: Values of the sparse gradient to be applied.
-* <b>`grad_shape`</b>: Shape of the sparse gradient to be applied.
-* <b>`local_step`</b>: Time step at which the gradient was computed.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- The operation that (conditionally) applies a gradient to the accumulator.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If grad is of the wrong shape
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.apply_indexed_slices_grad(grad, local_step=0, name=None)` {#SparseConditionalAccumulator.apply_indexed_slices_grad}
-
-Attempts to apply a gradient to the accumulator.
-
-The attempt is silently dropped if the gradient is stale, i.e., local_step
-is less than the accumulator's global time step.
-
-##### Args:
-
-
-* <b>`grad`</b>: The gradient IndexedSlices to be applied.
-* <b>`local_step`</b>: Time step at which the gradient was computed.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- The operation that (conditionally) applies a gradient to the accumulator.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If grad is of the wrong shape
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.dtype` {#SparseConditionalAccumulator.dtype}
-
-The datatype of the gradients accumulated by this accumulator.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.name` {#SparseConditionalAccumulator.name}
-
-The name of the underlying accumulator.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.num_accumulated(name=None)` {#SparseConditionalAccumulator.num_accumulated}
-
-Number of gradients that have currently been aggregated in accumulator.
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Number of accumulated gradients currently in accumulator.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.set_global_step(new_global_step, name=None)` {#SparseConditionalAccumulator.set_global_step}
-
-Sets the global time step of the accumulator.
-
-The operation logs a warning if we attempt to set the global time step to a
-value lower than the accumulator's own time step.
-
-##### Args:
-
-
-* <b>`new_global_step`</b>: Value of new time step. Can be a variable or a constant
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Operation that sets the accumulator's time step.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.take_grad(num_required, name=None)` {#SparseConditionalAccumulator.take_grad}
-
-Attempts to extract the average gradient from the accumulator.
-
-The operation blocks until a sufficient number of gradients have been
-successfully applied to the accumulator.
-
-Once successful, the following actions are also triggered:
-- Counter of accumulated gradients is reset to 0.
-- Aggregated gradient is reset to 0 tensor.
-- Accumulator's internal time step is incremented by 1.
-
-##### Args:
-
-
-* <b>`num_required`</b>: Number of gradients that need to have been aggregated
-* <b>`name`</b>: Optional name for the operation
-
-##### Returns:
-
- A tuple of indices, values, and shape representing the average gradient.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If num_required < 1
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.take_indexed_slices_grad(num_required, name=None)` {#SparseConditionalAccumulator.take_indexed_slices_grad}
-
-Attempts to extract the average gradient from the accumulator.
-
-The operation blocks until a sufficient number of gradients have been
-successfully applied to the accumulator.
-
-Once successful, the following actions are also triggered:
-- Counter of accumulated gradients is reset to 0.
-- Aggregated gradient is reset to 0 tensor.
-- Accumulator's internal time step is incremented by 1.
-
-##### Args:
-
-
-* <b>`num_required`</b>: Number of gradients that need to have been aggregated
-* <b>`name`</b>: Optional name for the operation
-
-##### Returns:
-
- An IndexedSlices holding the value of the average gradient.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If num_required < 1
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.abs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.abs.md
deleted file mode 100644
index 8a5ae1b9ac..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.abs.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.abs(x, name=None)` {#abs}
-
-Computes the absolute value of a tensor.
-
-Given a tensor of real numbers `x`, this operation returns a tensor
-containing the absolute value of each element in `x`. For example, if x is
-an input element and y is an output element, this operation computes
-\\(y = |x|\\).
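-
-For example:
-
-```python
-x = tf.constant([-1.5, 0.0, 2.0])
-tf.abs(x)  # ==> [1.5, 0.0, 2.0]
-```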
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor` of type `float32`, `float64`, `int32`, or
- `int64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` the same size and type as `x` with absolute
- values.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.as_string.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.as_string.md
deleted file mode 100644
index 0217ad3113..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.as_string.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.as_string(input, precision=None, scientific=None, shortest=None, width=None, fill=None, name=None)` {#as_string}
-
-Converts each entry in the given tensor to strings. Supports many numeric
-types and boolean.
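-
-For example, a small sketch:
-
-```python
-x = tf.constant([1.5, 2.25])
-tf.as_string(x, precision=2)  # ==> ["1.50", "2.25"]
-```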
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `complex64`, `float32`, `float64`, `bool`, `int8`.
-* <b>`precision`</b>: An optional `int`. Defaults to `-1`.
- The post-decimal precision to use for floating point numbers.
- Only used if precision > -1.
-* <b>`scientific`</b>: An optional `bool`. Defaults to `False`.
- Use scientific notation for floating point numbers.
-* <b>`shortest`</b>: An optional `bool`. Defaults to `False`.
- Use shortest representation (either scientific or standard) for
- floating point numbers.
-* <b>`width`</b>: An optional `int`. Defaults to `-1`.
- Pad pre-decimal numbers to this width.
- Applies to both floating point and integer numbers.
- Only used if width > -1.
-* <b>`fill`</b>: An optional `string`. Defaults to `""`.
- The value to pad if width > -1. If empty, pads with spaces.
- Another typical value is '0'. String cannot be longer than 1 character.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_positive.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_positive.md
deleted file mode 100644
index ee73f2f9a5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.assert_positive.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.assert_positive(x, data=None, summarize=None, message=None, name=None)` {#assert_positive}
-
-Assert the condition `x > 0` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_positive(x)]):
- output = tf.reduce_sum(x)
-```
-
-Positive means, for every element `x[i]` of `x`, we have `x[i] > 0`.
-If `x` is empty this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_positive".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` is all positive.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.bitcast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.bitcast.md
deleted file mode 100644
index 9e60ab2144..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.bitcast.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.bitcast(input, type, name=None)` {#bitcast}
-
-Bitcasts a tensor from one type to another without copying data.
-
-Given a tensor `input`, this operation returns a tensor that has the same buffer
-data as `input` with datatype `type`.
-
-If the input datatype `T` is larger than the output datatype `type` then the
-shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].
-
-If `T` is smaller than `type`, the operator requires that the rightmost
-dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from
-[..., sizeof(`type`)/sizeof(`T`)] to [...].
-
-*NOTE*: Bitcast is implemented as a low-level cast, so machines with different
-endian orderings will give different results.
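-
-For example, a small sketch: `float32` and `int32` are both 4 bytes wide, so
-the shape is unchanged and only the bit pattern is reinterpreted:
-
-```python
-x = tf.constant([1.0], dtype=tf.float32)
-y = tf.bitcast(x, tf.int32)  # ==> [1065353216] (the bits of float 1.0)
-```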
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`type`</b>: A `tf.DType` from: `tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.int16, tf.int8, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint32, tf.half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `type`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.concat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.concat.md
deleted file mode 100644
index 321429967e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.concat.md
+++ /dev/null
@@ -1,58 +0,0 @@
-### `tf.concat(values, axis, name='concat')` {#concat}
-
-Concatenates tensors along one dimension.
-
-Concatenates the list of tensors `values` along dimension `axis`. If
-`values[i].shape = [D0, D1, ... Daxis(i), ...Dn]`, the concatenated
-result has shape
-
- [D0, D1, ... Raxis, ...Dn]
-
-where
-
- Raxis = sum(Daxis(i))
-
-That is, the data from the input tensors is joined along the `axis`
-dimension.
-
-The number of dimensions of the input tensors must match, and all dimensions
-except `axis` must be equal.
-
-For example:
-
-```python
-t1 = [[1, 2, 3], [4, 5, 6]]
-t2 = [[7, 8, 9], [10, 11, 12]]
-tf.concat([t1, t2], 0) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
-tf.concat([t1, t2], 1) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
-
-# tensor t3 with shape [2, 3]
-# tensor t4 with shape [2, 3]
-tf.shape(tf.concat([t3, t4], 0)) ==> [4, 3]
-tf.shape(tf.concat([t3, t4], 1)) ==> [2, 6]
-```
-
-Note: If you are concatenating along a new axis consider using stack.
-E.g.
-
-```python
-tf.concat([tf.expand_dims(t, axis) for t in tensors], axis)
-```
-
-can be rewritten as
-
-```python
-tf.stack(tensors, axis=axis)
-```
-
-##### Args:
-
-
-* <b>`values`</b>: A list of `Tensor` objects or a single `Tensor`.
-* <b>`axis`</b>: 0-D `int32` `Tensor`. Dimension along which to concatenate.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` resulting from concatenation of the input tensors.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.conj.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.conj.md
deleted file mode 100644
index e7491301cb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.conj.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.conj(x, name=None)` {#conj}
-
-Returns the complex conjugate of a complex number.
-
-Given a tensor `input` of complex numbers, this operation returns a tensor of
-complex numbers that are the complex conjugate of each element in `input`. The
-complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the
-real part and *b* is the imaginary part.
-
-The complex conjugate returned by this operation is of the form \\(a - bj\\).
-
-For example:
-
- # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
- tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
-
-If `x` is real, it is returned unchanged.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` to conjugate. Must have numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` that is the conjugate of `x` (with the same type).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` is not a numeric tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.entropy.elbo_ratio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.entropy.elbo_ratio.md
deleted file mode 100644
index 0419408ce4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.entropy.elbo_ratio.md
+++ /dev/null
@@ -1,68 +0,0 @@
-### `tf.contrib.bayesflow.entropy.elbo_ratio(log_p, q, z=None, n=None, seed=None, form=None, name='elbo_ratio')` {#elbo_ratio}
-
-Estimate of the ratio appearing in the `ELBO` and `KL` divergence.
-
-With `p(z) := exp{log_p(z)}`, this `Op` returns an approximation of
-
-```
-E_q[ Log[p(Z) / q(Z)] ]
-```
-
-The term `E_q[ Log[p(Z)] ]` is always computed as a sample mean.
-The term `E_q[ Log[q(Z)] ]` can be computed with samples, or an exact formula
-if `q.entropy()` is defined. This is controlled with the kwarg `form`.
-
-This log-ratio appears in different contexts:
-
-#### `KL[q || p]`
-
-If `log_p(z) = Log[p(z)]` for distribution `p`, this `Op` approximates
-the negative Kullback-Leibler divergence.
-
-```
-elbo_ratio(log_p, q, n=100) = -1 * KL[q || p],
-KL[q || p] = E[ Log[q(Z)] - Log[p(Z)] ]
-```
-
-Note that if `p` is a `Distribution`, then `distributions.kl(q, p)` may be
-defined and available as an exact result.
-
-#### ELBO
-
-If `log_p(z) = Log[p(z, x)]` is the log joint of a distribution `p`, this is
-the Evidence Lower BOund (ELBO):
-
-```
-ELBO ~= E[ Log[p(Z, x)] - Log[q(Z)] ]
- = Log[p(x)] - KL[q || p]
- <= Log[p(x)]
-```
-
-The user supplies either a `Tensor` of samples `z`, or a number of samples
-`n` to draw.
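-
-A minimal sketch (assuming `p` and `q` are
-`tf.contrib.distributions.Normal` instances):
-
-```python
-q = tf.contrib.distributions.Normal(mu=0.0, sigma=1.0)
-p = tf.contrib.distributions.Normal(mu=0.1, sigma=1.0)
-neg_kl = tf.contrib.bayesflow.entropy.elbo_ratio(p.log_prob, q, n=100)
-```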
-
-##### Args:
-
-
-* <b>`log_p`</b>: Callable mapping samples from `q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_p` works "just like" `q.log_prob`.
-* <b>`q`</b>: `tf.contrib.distributions.Distribution`.
-* <b>`z`</b>: `Tensor` of samples from `q`, produced by `q.sample(n)` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`form`</b>: Either `ELBOForms.analytic_entropy` (use formula for entropy of `q`)
- or `ELBOForms.sample` (sample estimate of entropy), or `ELBOForms.default`
- (attempt analytic entropy, fallback on sample).
- Default value is `ELBOForms.default`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- `Tensor` holding the sample-mean KL divergence. Its `shape` is the batch
- shape of `q`, and its `dtype` is the same as `q`'s.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `form` is not handled by this function.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.entropy.renyi_alpha.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.entropy.renyi_alpha.md
deleted file mode 100644
index bf65d1a823..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.entropy.renyi_alpha.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.contrib.bayesflow.entropy.renyi_alpha(step, decay_time, alpha_min, alpha_max=0.99999, name='renyi_alpha')` {#renyi_alpha}
-
-Exponentially decaying `Tensor` appropriate for Renyi ratios.
-
-When minimizing the Renyi divergence for `0 <= alpha < 1` (or maximizing the
-Renyi equivalent of the ELBO) in high dimensions, it is not uncommon to
-experience
-`NaN` and `inf` values when `alpha` is far from `1`.
-
-For that reason, it is often desirable to start the optimization with `alpha`
-very close to 1, and reduce it to a final `alpha_min` according to some
-schedule. The user may even want to optimize using `elbo_ratio` for
-some fixed time before switching to Renyi based methods.
-
-This `Op` returns an `alpha` decaying exponentially with step:
-
-```
-s(step) = (exp{step / decay_time} - 1) / (e - 1)
-t(s) = max(0, min(s, 1)), (smooth growth from 0 to 1)
-alpha(t) = (1 - t) alpha_max + t alpha_min
-```
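-
-A short usage sketch (the decay time is illustrative):
-
-```python
-global_step = tf.Variable(0, trainable=False, name="global_step")
-alpha = tf.contrib.bayesflow.entropy.renyi_alpha(
-    global_step, decay_time=10000, alpha_min=0.5)
-```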
-
-##### Args:
-
-
-* <b>`step`</b>: Non-negative scalar `Tensor`. Typically the global step or an
- offset version thereof.
-* <b>`decay_time`</b>: Positive scalar `Tensor`.
-* <b>`alpha_min`</b>: `float` or `double` `Tensor`.
- The minimal, final value of `alpha`, achieved when `step >= decay_time`.
-* <b>`alpha_max`</b>: `Tensor` of same `dtype` as `alpha_min`.
- The maximal, beginning value of `alpha`, achieved when `step == 0`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
-
-* <b>`alpha`</b>: A `Tensor` of same `dtype` as `alpha_min`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler.md
deleted file mode 100644
index 9dce634e13..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler(f, log_p, sampling_dist_q, z=None, n=None, seed=None, name='expectation_importance_sampler')` {#expectation_importance_sampler}
-
-Monte Carlo estimate of `E_p[f(Z)] = E_q[f(Z) p(Z) / q(Z)]`.
-
-With `p(z) := exp{log_p(z)}`, this `Op` returns
-
-```
-n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ], z_i ~ q,
-\approx E_q[ f(Z) p(Z) / q(Z) ]
-= E_p[f(Z)]
-```
-
-This integral is done in log-space with max-subtraction to better handle the
-often extreme values that `f(z) p(z) / q(z)` can take on.
-
-If `f >= 0`, it is up to 2x more efficient to exponentiate the result of
-`expectation_importance_sampler_logspace` applied to `Log[f]`.
-
-The user supplies either a `Tensor` of samples `z`, or a number of samples
-`n` to draw.
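-
-A minimal sketch (the distributions and integrand are illustrative):
-
-```python
-p = tf.contrib.distributions.Normal(mu=1.0, sigma=2.0)  # target
-q = tf.contrib.distributions.Normal(mu=0.0, sigma=1.0)  # sampling dist
-f = lambda z: tf.square(z)  # estimate E_p[Z^2]
-est = tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler(
-    f, p.log_prob, q, n=1000)
-```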
-
-##### Args:
-
-
-* <b>`f`</b>: Callable mapping samples from `sampling_dist_q` to `Tensors` with shape
- broadcastable to `q.batch_shape`.
- For example, `f` works "just like" `q.log_prob`.
-* <b>`log_p`</b>: Callable mapping samples from `sampling_dist_q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_p` works "just like" `sampling_dist_q.log_prob`.
-* <b>`sampling_dist_q`</b>: The sampling distribution.
- `tf.contrib.distributions.Distribution`.
- `float64` `dtype` recommended.
- `log_p` and `q` should be supported on the same set.
-* <b>`z`</b>: `Tensor` of samples from `q`, produced by `q.sample(n)` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- The importance sampling estimate. `Tensor` with `shape` equal
- to batch shape of `q`, and `dtype` = `q.dtype`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.stochastic_tensor.value_type.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.stochastic_tensor.value_type.md
deleted file mode 100644
index f1182cb21c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.bayesflow.stochastic_tensor.value_type.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.contrib.bayesflow.stochastic_tensor.value_type(dist_value_type)` {#value_type}
-
-Creates a value type context for any StochasticTensor created within.
-
-Typical usage:
-
-```
-with sg.value_type(sg.MeanValue(stop_gradients=True)):
- st = sg.StochasticTensor(tf.contrib.distributions.Normal, mu=mu,
- sigma=sigma)
-```
-
-In the example above, `st.value()` (or equivalently, `tf.identity(st)`) will
-be the mean value of the Normal distribution, i.e., `mu` (possibly
-broadcasted to the shape of `sigma`). Furthermore, because the `MeanValue`
-was marked with `stop_gradients=True`, this value will have been wrapped
-in a `stop_gradients` call to disable any possible backpropagation.
-
-##### Args:
-
-
-* <b>`dist_value_type`</b>: An instance of `MeanValue`, `SampleValue`, or
- any other stochastic value type.
-
-##### Yields:
-
- A context for `StochasticTensor` objects that controls the
- value created when they are initialized.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `dist_value_type` is not an instance of a stochastic value
- type.
-
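-A hedged variation on the example above, requesting sampled values instead
-of means (reusing `sg`, `mu`, and `sigma` from that example; the
-`SampleValue` usage mirrors the documented `MeanValue` pattern and should
-be read as a sketch):
-
-```python
-with sg.value_type(sg.SampleValue()):
-  st = sg.StochasticTensor(tf.contrib.distributions.Normal, mu=mu,
-                           sigma=sigma)
-# st.value() is now a draw from the Normal rather than its mean.
-```
-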
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_variable_to_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_variable_to_graph.md
deleted file mode 100644
index 85e336a29b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.copy_graph.copy_variable_to_graph.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.copy_graph.copy_variable_to_graph(org_instance, to_graph, scope='')` {#copy_variable_to_graph}
-
-Given a `Variable` instance from one `Graph`, initializes and returns
-a copy of it in another `Graph`, under the specified scope
-(default `""`).
-
-##### Args:
-
-
-* <b>`org_instance`</b>: A `Variable` from some `Graph`.
-* <b>`to_graph`</b>: The `Graph` to copy the `Variable` to.
-* <b>`scope`</b>: A scope for the new `Variable` (default `""`).
-
-##### Returns:
-
- The copied `Variable` in `to_graph`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `org_instance` is not a `Variable`.
-
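-A hedged usage sketch (the names `graph_a`/`graph_b` and the `copied`
-scope are illustrative only):
-
-```python
-import tensorflow as tf
-
-graph_a = tf.Graph()
-with graph_a.as_default():
-  v = tf.Variable(tf.zeros([10]), name='v')
-
-graph_b = tf.Graph()
-v_copy = tf.contrib.copy_graph.copy_variable_to_graph(
-    v, graph_b, scope='copied')
-assert v_copy.graph is graph_b
-```
-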
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.crf.crf_sequence_score.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.crf.crf_sequence_score.md
deleted file mode 100644
index 95cbf2e8eb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.crf.crf_sequence_score.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.crf.crf_sequence_score(inputs, tag_indices, sequence_lengths, transition_params)` {#crf_sequence_score}
-
-Computes the unnormalized score for a tag sequence.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A [batch_size, max_seq_len, num_tags] tensor of unary potentials
- to use as input to the CRF layer.
-* <b>`tag_indices`</b>: A [batch_size, max_seq_len] matrix of tag indices for which we
- compute the unnormalized score.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`transition_params`</b>: A [num_tags, num_tags] transition matrix.
-
-##### Returns:
-
-
-* <b>`sequence_scores`</b>: A [batch_size] vector of unnormalized sequence scores.
-
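-A minimal sketch under assumed shapes (all values are illustrative):
-
-```python
-import tensorflow as tf
-
-batch_size, max_seq_len, num_tags = 2, 5, 3
-inputs = tf.random_normal([batch_size, max_seq_len, num_tags])
-tag_indices = tf.constant([[0, 1, 2, 1, 0],
-                           [2, 2, 1, 0, 0]])
-sequence_lengths = tf.constant([5, 3])
-transition_params = tf.random_normal([num_tags, num_tags])
-
-scores = tf.contrib.crf.crf_sequence_score(
-    inputs, tag_indices, sequence_lengths, transition_params)
-# Subtracting the log partition function (see crf_log_norm) would turn
-# these unnormalized scores into per-sequence log-likelihoods.
-```
-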
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.crf.crf_unary_score.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.crf.crf_unary_score.md
deleted file mode 100644
index 4a344623ce..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.crf.crf_unary_score.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.contrib.crf.crf_unary_score(tag_indices, sequence_lengths, inputs)` {#crf_unary_score}
-
-Computes the unary scores of tag sequences.
-
-##### Args:
-
-
-* <b>`tag_indices`</b>: A [batch_size, max_seq_len] matrix of tag indices.
-* <b>`sequence_lengths`</b>: A [batch_size] vector of true sequence lengths.
-* <b>`inputs`</b>: A [batch_size, max_seq_len, num_tags] tensor of unary potentials.
-
-##### Returns:
-
-
-* <b>`unary_scores`</b>: A [batch_size] vector of unary scores.
-
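-A tiny worked sketch (hand-picked numbers, illustrative only): with one
-sequence of length two, the unary score gathers the potential of the gold
-tag at each step and sums over the unpadded steps.
-
-```python
-import tensorflow as tf
-
-inputs = tf.constant([[[1., 2., 3.],
-                       [4., 5., 6.]]])   # per-step tag potentials
-tag_indices = tf.constant([[2, 0]])      # gold tags pick out 3. and 4.
-sequence_lengths = tf.constant([2])
-
-unary = tf.contrib.crf.crf_unary_score(
-    tag_indices, sequence_lengths, inputs)  # -> [7.]
-```
-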
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.BernoulliWithSigmoidProbs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.BernoulliWithSigmoidProbs.md
deleted file mode 100644
index b0a926f8ed..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.BernoulliWithSigmoidProbs.md
+++ /dev/null
@@ -1,563 +0,0 @@
-Bernoulli with `probs = nn.sigmoid(logits)`.
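-
-A minimal usage sketch (the numeric logits are illustrative only):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-dist = ds.BernoulliWithSigmoidProbs(logits=[-2., 0., 2.])
-
-dist.probs       # tf.nn.sigmoid(logits)
-dist.mean()      # same values as `probs`
-dist.sample(10)  # shape [10, 3], draws in {0, 1}
-```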
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.__init__(logits=None, dtype=tf.int32, validate_args=False, allow_nan_stats=True, name='BernoulliWithSigmoidProbs')` {#BernoulliWithSigmoidProbs.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.allow_nan_stats` {#BernoulliWithSigmoidProbs.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.batch_shape` {#BernoulliWithSigmoidProbs.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.batch_shape_tensor(name='batch_shape_tensor')` {#BernoulliWithSigmoidProbs.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.cdf(value, name='cdf')` {#BernoulliWithSigmoidProbs.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.copy(**override_parameters_kwargs)` {#BernoulliWithSigmoidProbs.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.covariance(name='covariance')` {#BernoulliWithSigmoidProbs.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.dtype` {#BernoulliWithSigmoidProbs.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.entropy(name='entropy')` {#BernoulliWithSigmoidProbs.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.event_shape` {#BernoulliWithSigmoidProbs.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.event_shape_tensor(name='event_shape_tensor')` {#BernoulliWithSigmoidProbs.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.is_continuous` {#BernoulliWithSigmoidProbs.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.is_scalar_batch(name='is_scalar_batch')` {#BernoulliWithSigmoidProbs.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.is_scalar_event(name='is_scalar_event')` {#BernoulliWithSigmoidProbs.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.log_cdf(value, name='log_cdf')` {#BernoulliWithSigmoidProbs.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.log_prob(value, name='log_prob')` {#BernoulliWithSigmoidProbs.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.log_survival_function(value, name='log_survival_function')` {#BernoulliWithSigmoidProbs.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.logits` {#BernoulliWithSigmoidProbs.logits}
-
-Log-odds of a `1` outcome (vs `0`).
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.mean(name='mean')` {#BernoulliWithSigmoidProbs.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.mode(name='mode')` {#BernoulliWithSigmoidProbs.mode}
-
-Mode.
-
-Additional documentation from `Bernoulli`:
-
-Returns `1` if `prob > 0.5` and `0` otherwise.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.name` {#BernoulliWithSigmoidProbs.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#BernoulliWithSigmoidProbs.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
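-A hedged illustration (using `Normal` for concreteness; the exact mapping
-is distribution-specific):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-shapes = ds.Normal.param_shapes([100, 5])
-# A dict such as {'loc': ..., 'scale': ...} whose `Tensor` values give
-# parameter shapes that make a bare `sample()` call return [100, 5].
-```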
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.param_static_shapes(cls, sample_shape)` {#BernoulliWithSigmoidProbs.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.parameters` {#BernoulliWithSigmoidProbs.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.prob(value, name='prob')` {#BernoulliWithSigmoidProbs.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.probs` {#BernoulliWithSigmoidProbs.probs}
-
-Probability of a `1` outcome (vs `0`).
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.reparameterization_type` {#BernoulliWithSigmoidProbs.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.sample(sample_shape=(), seed=None, name='sample')` {#BernoulliWithSigmoidProbs.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.stddev(name='stddev')` {#BernoulliWithSigmoidProbs.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.survival_function(value, name='survival_function')` {#BernoulliWithSigmoidProbs.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.validate_args` {#BernoulliWithSigmoidProbs.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.BernoulliWithSigmoidProbs.variance(name='variance')` {#BernoulliWithSigmoidProbs.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Beta.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Beta.md
deleted file mode 100644
index ed9f312153..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Beta.md
+++ /dev/null
@@ -1,689 +0,0 @@
-Beta distribution.
-
-The Beta distribution is defined over the `(0, 1)` interval using parameters
-`concentration1` (aka "alpha") and `concentration0` (aka "beta").
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; alpha, beta) = x**(alpha - 1) (1 - x)**(beta - 1) / Z
-Z = Gamma(alpha) Gamma(beta) / Gamma(alpha + beta)
-```
-
-where:
-
-* `concentration1 = alpha`,
-* `concentration0 = beta`,
-* `Z` is the normalization constant, and,
-* `Gamma` is the [gamma function](
- https://en.wikipedia.org/wiki/Gamma_function).
-
-The concentration parameters represent mean total counts of a `1` or a `0`,
-i.e.,
-
-```none
-concentration1 = alpha = mean * total_concentration
-concentration0 = beta = (1. - mean) * total_concentration
-```
-
-where `mean` is in `(0, 1)` and `total_concentration` is a positive real number
-representing a mean `total_count = concentration1 + concentration0`.
-
-Distribution parameters are automatically broadcast in all functions; see
-examples for details.
-
-#### Examples
-
-```python
-# Create a batch of three Beta distributions.
-alpha = [1, 2, 3]
-beta = [1, 2, 3]
-dist = Beta(alpha, beta)
-
-dist.sample([4, 5]) # Shape [4, 5, 3]
-
-# `x` has three batch entries, each with two samples.
-x = [[.1, .4, .5],
- [.2, .3, .5]]
-# Calculate the probability of each pair of samples under the corresponding
-# distribution in `dist`.
-dist.prob(x) # Shape [2, 3]
-```
-
-```python
-# Create batch_shape=[2, 3] via parameter broadcast:
-alpha = [[1.], [2]] # Shape [2, 1]
-beta = [3., 4, 5] # Shape [3]
-dist = Beta(alpha, beta)
-
-# alpha broadcast as: [[1., 1, 1,],
-# [2, 2, 2]]
-# beta broadcast as: [[3., 4, 5],
-# [3, 4, 5]]
-# batch_shape [2, 3]
-dist.sample([4, 5]) # Shape [4, 5, 2, 3]
-
-x = [.2, .3, .5]
-# x will be broadcast as [[.2, .3, .5],
-# [.2, .3, .5]],
-# thus matching batch_shape [2, 3].
-dist.prob(x) # Shape [2, 3]
-```
-- - -
-
-#### `tf.contrib.distributions.Beta.__init__(concentration1=None, concentration0=None, validate_args=False, allow_nan_stats=True, name='Beta')` {#Beta.__init__}
-
-Initialize a batch of Beta distributions.
-
-##### Args:
-
-
-* <b>`concentration1`</b>: Positive floating-point `Tensor` indicating mean
- number of successes; aka "alpha". Implies `self.dtype` and
- `self.batch_shape`, i.e.,
- `concentration1.shape = [N1, N2, ..., Nm] = self.batch_shape`.
-* <b>`concentration0`</b>: Positive floating-point `Tensor` indicating mean
- number of failures; aka "beta". Otherwise has same semantics as
- `concentration1`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.allow_nan_stats` {#Beta.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.batch_shape` {#Beta.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.batch_shape_tensor(name='batch_shape_tensor')` {#Beta.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.cdf(value, name='cdf')` {#Beta.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.concentration0` {#Beta.concentration0}
-
-Concentration parameter associated with a `0` outcome.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.concentration1` {#Beta.concentration1}
-
-Concentration parameter associated with a `1` outcome.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.copy(**override_parameters_kwargs)` {#Beta.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.covariance(name='covariance')` {#Beta.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.dtype` {#Beta.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.entropy(name='entropy')` {#Beta.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.event_shape` {#Beta.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.event_shape_tensor(name='event_shape_tensor')` {#Beta.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.is_continuous` {#Beta.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.is_scalar_batch(name='is_scalar_batch')` {#Beta.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.is_scalar_event(name='is_scalar_event')` {#Beta.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.log_cdf(value, name='log_cdf')` {#Beta.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.log_prob(value, name='log_prob')` {#Beta.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.log_survival_function(value, name='log_survival_function')` {#Beta.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.mean(name='mean')` {#Beta.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.mode(name='mode')` {#Beta.mode}
-
-Mode.
-
-Additional documentation from `Beta`:
-
-Note: The mode is undefined when `concentration1 <= 1` or
-`concentration0 <= 1`. If `self.allow_nan_stats` is `True`, `NaN`
-is used for undefined modes. If `self.allow_nan_stats` is `False` an
-exception is raised when one or more modes are undefined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.name` {#Beta.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Beta.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.param_static_shapes(cls, sample_shape)` {#Beta.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.parameters` {#Beta.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.prob(value, name='prob')` {#Beta.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.reparameterization_type` {#Beta.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.sample(sample_shape=(), seed=None, name='sample')` {#Beta.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.stddev(name='stddev')` {#Beta.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.survival_function(value, name='survival_function')` {#Beta.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.total_concentration` {#Beta.total_concentration}
-
-Sum of concentration parameters.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.validate_args` {#Beta.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Beta.variance(name='variance')` {#Beta.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Laplace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Laplace.md
deleted file mode 100644
index 3f31604508..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Laplace.md
+++ /dev/null
@@ -1,607 +0,0 @@
-The Laplace distribution with location `loc` and `scale` parameters.
-
-#### Mathematical details
-
-The probability density function (pdf) of this distribution is,
-
-```none
-pdf(x; mu, sigma) = exp(-|x - mu| / sigma) / Z
-Z = 2 sigma
-```
-
-where `loc = mu`, `scale = sigma`, and `Z` is the normalization constant.
-
-Note that the Laplace distribution can be thought of as two exponential
-distributions spliced together "back-to-back."
-
-The Laplace distribution is a member of the [location-scale family](
-https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ Laplace(loc=0, scale=1)
-Y = loc + scale * X
-```
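-
-A minimal usage sketch (parameter values are illustrative only):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-
-# A batch of two Laplace distributions.
-dist = ds.Laplace(loc=[1., 2.], scale=[1., 3.])
-dist.sample([10])      # shape [10, 2]
-dist.prob([0.5, 1.5])  # density of each batch member at its own point
-```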
-- - -
-
-#### `tf.contrib.distributions.Laplace.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='Laplace')` {#Laplace.__init__}
-
-Construct Laplace distribution with parameters `loc` and `scale`.
-
-The parameters `loc` and `scale` must be shaped in a way that supports
-broadcasting (e.g., `loc / scale` is a valid operation).
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating point tensor which characterizes the location (center)
- of the distribution.
-* <b>`scale`</b>: Positive floating point tensor which characterizes the spread of
- the distribution.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `loc` and `scale` are of different dtype.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.allow_nan_stats` {#Laplace.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.batch_shape` {#Laplace.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.batch_shape_tensor(name='batch_shape_tensor')` {#Laplace.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.cdf(value, name='cdf')` {#Laplace.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.copy(**override_parameters_kwargs)` {#Laplace.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.covariance(name='covariance')` {#Laplace.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.dtype` {#Laplace.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.entropy(name='entropy')` {#Laplace.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.event_shape` {#Laplace.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.event_shape_tensor(name='event_shape_tensor')` {#Laplace.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.is_continuous` {#Laplace.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.is_scalar_batch(name='is_scalar_batch')` {#Laplace.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.is_scalar_event(name='is_scalar_event')` {#Laplace.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.loc` {#Laplace.loc}
-
-Distribution parameter for the location.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.log_cdf(value, name='log_cdf')` {#Laplace.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.log_prob(value, name='log_prob')` {#Laplace.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.log_survival_function(value, name='log_survival_function')` {#Laplace.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.mean(name='mean')` {#Laplace.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.mode(name='mode')` {#Laplace.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.name` {#Laplace.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Laplace.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.param_static_shapes(cls, sample_shape)` {#Laplace.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.parameters` {#Laplace.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.prob(value, name='prob')` {#Laplace.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.reparameterization_type` {#Laplace.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.sample(sample_shape=(), seed=None, name='sample')` {#Laplace.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.scale` {#Laplace.scale}
-
-Distribution parameter for scale.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.stddev(name='stddev')` {#Laplace.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.survival_function(value, name='survival_function')` {#Laplace.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.validate_args` {#Laplace.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Laplace.variance(name='variance')` {#Laplace.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.LaplaceWithSoftplusScale.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.LaplaceWithSoftplusScale.md
deleted file mode 100644
index 998d117e8f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.LaplaceWithSoftplusScale.md
+++ /dev/null
@@ -1,559 +0,0 @@
-Laplace with softplus applied to `scale`.
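-
-A minimal sketch (values illustrative): `scale` may be any real `Tensor`,
-since softplus maps it to a positive value internally.
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-dist = ds.LaplaceWithSoftplusScale(loc=0., scale=-1.)
-# Effective scale is tf.nn.softplus(-1.) ~= 0.313, so this is valid.
-```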
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='LaplaceWithSoftplusScale')` {#LaplaceWithSoftplusScale.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.allow_nan_stats` {#LaplaceWithSoftplusScale.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.batch_shape` {#LaplaceWithSoftplusScale.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.batch_shape_tensor(name='batch_shape_tensor')` {#LaplaceWithSoftplusScale.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.cdf(value, name='cdf')` {#LaplaceWithSoftplusScale.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.copy(**override_parameters_kwargs)` {#LaplaceWithSoftplusScale.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.covariance(name='covariance')` {#LaplaceWithSoftplusScale.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.dtype` {#LaplaceWithSoftplusScale.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.entropy(name='entropy')` {#LaplaceWithSoftplusScale.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.event_shape` {#LaplaceWithSoftplusScale.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.event_shape_tensor(name='event_shape_tensor')` {#LaplaceWithSoftplusScale.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.is_continuous` {#LaplaceWithSoftplusScale.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.is_scalar_batch(name='is_scalar_batch')` {#LaplaceWithSoftplusScale.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.is_scalar_event(name='is_scalar_event')` {#LaplaceWithSoftplusScale.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.loc` {#LaplaceWithSoftplusScale.loc}
-
-Distribution parameter for the location.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.log_cdf(value, name='log_cdf')` {#LaplaceWithSoftplusScale.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.log_prob(value, name='log_prob')` {#LaplaceWithSoftplusScale.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.log_survival_function(value, name='log_survival_function')` {#LaplaceWithSoftplusScale.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.mean(name='mean')` {#LaplaceWithSoftplusScale.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.mode(name='mode')` {#LaplaceWithSoftplusScale.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.name` {#LaplaceWithSoftplusScale.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#LaplaceWithSoftplusScale.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.param_static_shapes(cls, sample_shape)` {#LaplaceWithSoftplusScale.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.parameters` {#LaplaceWithSoftplusScale.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.prob(value, name='prob')` {#LaplaceWithSoftplusScale.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.reparameterization_type` {#LaplaceWithSoftplusScale.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.sample(sample_shape=(), seed=None, name='sample')` {#LaplaceWithSoftplusScale.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.scale` {#LaplaceWithSoftplusScale.scale}
-
-Distribution parameter for scale.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.stddev(name='stddev')` {#LaplaceWithSoftplusScale.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.survival_function(value, name='survival_function')` {#LaplaceWithSoftplusScale.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.validate_args` {#LaplaceWithSoftplusScale.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.LaplaceWithSoftplusScale.variance(name='variance')` {#LaplaceWithSoftplusScale.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Logistic.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Logistic.md
deleted file mode 100644
index 6f0d4f2210..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.Logistic.md
+++ /dev/null
@@ -1,637 +0,0 @@
-The Logistic distribution with location `loc` and `scale` parameters.
-
-#### Mathematical details
-
-The cumulative distribution function of this distribution is:
-
-```none
-cdf(x; mu, sigma) = 1 / (1 + exp(-(x - mu) / sigma))
-```
-
-where `loc = mu` and `scale = sigma`.
-
-The Logistic distribution is a member of the [location-scale family](
-https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ Logistic(loc=0, scale=1)
-Y = loc + scale * X
-```
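-
-A minimal numerical check of the CDF formula above (a sketch in plain
-NumPy; the values of `mu` and `sigma` are arbitrary):
-
-```python
-import numpy as np
-
-def logistic_cdf(x, mu, sigma):
-  # cdf(x; mu, sigma) = 1 / (1 + exp(-(x - mu) / sigma))
-  return 1. / (1. + np.exp(-(x - mu) / sigma))
-
-print(logistic_cdf(0., mu=0., sigma=1.))  # 0.5, the CDF at the location.
-```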
-
-#### Examples
-
-Examples of initialization of one or a batch of distributions.
-
-```python
-# Define a single scalar Logistic distribution.
-dist = tf.contrib.distributions.Logistic(loc=0., scale=3.)
-
-# Evaluate the cdf at 1, returning a scalar.
-dist.cdf(1.)
-
-# Define a batch of two scalar valued Logistics.
-# The first has mean 1 and scale 11, the second 2 and 22.
-dist = tf.contrib.distributions.Logistic(loc=[1, 2.], scale=[11, 22.])
-
-# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
-# returning a length two tensor.
-dist.prob([0, 1.5])
-
-# Get 3 samples, returning a 3 x 2 tensor.
-dist.sample([3])
-```
-
-Arguments are broadcast when possible.
-
-```python
-# Define a batch of two scalar valued Logistics.
-# Both have mean 1, but different scales.
-dist = tf.contrib.distributions.Logistic(loc=1., scale=[11, 22.])
-
-# Evaluate the pdf of both distributions on the same point, 3.0,
-# returning a length 2 tensor.
-dist.prob(3.0)
-```
-- - -
-
-#### `tf.contrib.distributions.Logistic.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='Logistic')` {#Logistic.__init__}
-
-Construct Logistic distributions with mean `loc` and scale `scale`.
-
-The parameters `loc` and `scale` must be shaped in a way that supports
-broadcasting (e.g. `loc + scale` is a valid operation).
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating point tensor, the means of the distribution(s).
-* <b>`scale`</b>: Floating point tensor, the scales of the distribution(s). Must
- contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: The name to give Ops created by the initializer.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if loc and scale are different dtypes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.allow_nan_stats` {#Logistic.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.batch_shape` {#Logistic.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.batch_shape_tensor(name='batch_shape_tensor')` {#Logistic.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.cdf(value, name='cdf')` {#Logistic.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.copy(**override_parameters_kwargs)` {#Logistic.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copied distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
-    of `self.parameters` and `override_parameters_kwargs`, i.e.,
-    `dict(self.parameters, **override_parameters_kwargs)`.
-
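-For instance (a hedged sketch; the parameter names follow the `Logistic`
-constructor documented above):
-
-```python
-dist = tf.contrib.distributions.Logistic(loc=0., scale=1.)
-# Keeps loc=0., overrides scale; equivalent to
-# type(dist)(**dict(dist.parameters, scale=2.)).
-dist2 = dist.copy(scale=2.)
-```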
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.covariance(name='covariance')` {#Logistic.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.dtype` {#Logistic.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.entropy(name='entropy')` {#Logistic.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.event_shape` {#Logistic.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.event_shape_tensor(name='event_shape_tensor')` {#Logistic.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.is_continuous` {#Logistic.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.is_scalar_batch(name='is_scalar_batch')` {#Logistic.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.is_scalar_event(name='is_scalar_event')` {#Logistic.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.loc` {#Logistic.loc}
-
-Distribution parameter for the location.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.log_cdf(value, name='log_cdf')` {#Logistic.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.log_prob(value, name='log_prob')` {#Logistic.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.log_survival_function(value, name='log_survival_function')` {#Logistic.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
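-
-To see why the dedicated op matters numerically (a sketch; the exact
-cutoff depends on the dtype):
-
-```python
-dist = tf.contrib.distributions.Logistic(loc=0., scale=1.)
-# In float32, cdf(40.) rounds to exactly 1.0, so the naive form is -inf:
-naive = tf.log(1. - dist.cdf(40.))
-# A stable formulation of log(1 - cdf(x)) stays finite:
-stable = dist.log_survival_function(40.)
-```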
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.mean(name='mean')` {#Logistic.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.mode(name='mode')` {#Logistic.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.name` {#Logistic.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Logistic.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
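-
-For example, with a distribution that implements `_param_shapes` (a sketch
-using `Normal`; the keys are its constructor arguments `loc` and `scale`):
-
-```python
-shapes = tf.contrib.distributions.Normal.param_shapes([100])
-# Expected: {'loc': <Tensor: [100]>, 'scale': <Tensor: [100]>}, i.e., each
-# parameter must have shape [100] for sample() to return 100 draws.
-```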
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.param_static_shapes(cls, sample_shape)` {#Logistic.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.parameters` {#Logistic.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.prob(value, name='prob')` {#Logistic.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.reparameterization_type` {#Logistic.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.sample(sample_shape=(), seed=None, name='sample')` {#Logistic.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for the RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.scale` {#Logistic.scale}
-
-Distribution parameter for scale.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.stddev(name='stddev')` {#Logistic.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.survival_function(value, name='survival_function')` {#Logistic.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.validate_args` {#Logistic.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Logistic.variance(name='variance')` {#Logistic.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.bijector.AffineLinearOperator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.bijector.AffineLinearOperator.md
deleted file mode 100644
index 034f96e6cf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.bijector.AffineLinearOperator.md
+++ /dev/null
@@ -1,358 +0,0 @@
-Compute `Y = g(X; shift, scale) = scale @ X + shift`.
-
-`shift` is a numeric `Tensor` and `scale` is a `LinearOperator`.
-
-If `X` is a scalar then the forward transformation is: `scale * X + shift`
-where `*` denotes the scalar product.
-
-Note: we don't always simply transpose `X` (but write it this way for
-brevity). Actually the input `X` undergoes the following transformation
-before being premultiplied by `scale`:
-
-1. If there are no sample dims, we call `X = tf.expand_dims(X, 0)`, i.e.,
- `new_sample_shape = [1]`. Otherwise do nothing.
-2. The sample shape is flattened to have one dimension, i.e.,
- `new_sample_shape = [n]` where `n = tf.reduce_prod(old_sample_shape)`.
-3. The sample dim is cyclically rotated left by 1, i.e.,
- `new_shape = [B1,...,Bb, k, n]` where `n` is as above, `k` is the
- event_shape, and `B1,...,Bb` are the batch shapes for each of `b` batch
- dimensions.
-
-(For more details see `shape.make_batch_of_event_sample_matrices`.)
-
-The result of the above transformation is that `X` can be regarded as a batch
-of matrices where each column is a draw from the distribution. After
-premultiplying by `scale`, we take the inverse of this procedure. The input
-`Y` also undergoes the same transformation before/after premultiplying by
-`inv(scale)`.
-
-Example Use:
-
-```python
-linalg = tf.contrib.linalg
-
-x = [1., 2, 3]
-
-shift = [-1., 0., 1]
-diag = [1., 2, 3]
-scale = linalg.LinearOperatorDiag(diag)
-affine = AffineLinearOperator(shift, scale)
-# In this case, `forward` is equivalent to:
-# y = scale @ x + shift
-y = affine.forward(x) # [0., 4, 10]
-
-shift = [2., 3, 1]
-tril = [[1., 0, 0],
- [2, 1, 0],
- [3, 2, 1]]
-scale = linalg.LinearOperatorTriL(tril)
-affine = AffineLinearOperator(shift, scale)
-# In this case, `forward` is equivalent to:
-# np.squeeze(np.matmul(tril, np.expand_dims(x, -1)), -1) + shift
-y = affine.forward(x) # [3., 7, 11]
-```
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.__init__(shift=None, scale=None, event_ndims=1, validate_args=False, name='affine_linear_operator')` {#AffineLinearOperator.__init__}
-
-Instantiates the `AffineLinearOperator` bijector.
-
-##### Args:
-
-
-* <b>`shift`</b>: Floating-point `Tensor`.
-* <b>`scale`</b>: Subclass of `LinearOperator`. Represents the (batch) positive
- definite matrix `M` in `R^{k x k}`.
-* <b>`event_ndims`</b>: Scalar `integer` `Tensor` indicating the number of dimensions
- associated with a particular draw from the distribution. Must be 0 or 1.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `event_ndims` is not 0 or 1.
-* <b>`TypeError`</b>: if `scale` is not a `LinearOperator`.
-* <b>`TypeError`</b>: if `shift.dtype` does not match `scale.dtype`.
-* <b>`ValueError`</b>: if not `scale.is_non_singular`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.dtype` {#AffineLinearOperator.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.event_ndims` {#AffineLinearOperator.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.forward(x, name='forward')` {#AffineLinearOperator.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.forward_event_shape(input_shape)` {#AffineLinearOperator.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#AffineLinearOperator.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#AffineLinearOperator.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.graph_parents` {#AffineLinearOperator.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse(y, name='inverse')` {#AffineLinearOperator.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#AffineLinearOperator.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-  A tuple of `Tensor`s: the inverse evaluation and its log det Jacobian.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse_event_shape(output_shape)` {#AffineLinearOperator.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#AffineLinearOperator.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#AffineLinearOperator.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
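-
-For the diagonal example at the top of this page this has a closed form (a
-sketch; `affine`, `diag`, and `y` refer to that first snippet):
-
-```python
-# With scale = LinearOperatorDiag(diag), |det(dY/dX)| = prod(diag), so
-# inverse_log_det_jacobian(y) = -sum(log(diag)) = -(log 1 + log 2 + log 3).
-ildj = affine.inverse_log_det_jacobian(y)  # ~= -1.792
-```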
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.is_constant_jacobian` {#AffineLinearOperator.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.name` {#AffineLinearOperator.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.scale` {#AffineLinearOperator.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + shift`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.shift` {#AffineLinearOperator.shift}
-
-The `shift` `Tensor` in `Y = scale @ X + shift`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.AffineLinearOperator.validate_args` {#AffineLinearOperator.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.bijector.Identity.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.bijector.Identity.md
deleted file mode 100644
index a6045db420..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.distributions.bijector.Identity.md
+++ /dev/null
@@ -1,283 +0,0 @@
-Compute `Y = g(X) = X`.
-
-Example Use:
-
-```python
-# Create the Y=g(X)=X transform which is intended for Tensors with 1 batch
-# ndim and 1 event ndim (i.e., vector of vectors).
-identity = Identity(event_ndims=1)
-x = [[1., 2],
- [3, 4]]
-x == identity.forward(x) == identity.inverse(x)
-```
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.__init__(validate_args=False, event_ndims=0, name='identity')` {#Identity.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.dtype` {#Identity.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.event_ndims` {#Identity.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.forward(x, name='forward')` {#Identity.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.forward_event_shape(input_shape)` {#Identity.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Identity.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Identity.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.graph_parents` {#Identity.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse(y, name='inverse')` {#Identity.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Identity.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-  A tuple of `Tensor`s: the inverse evaluation and its log det Jacobian.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse_event_shape(output_shape)` {#Identity.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Identity.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Identity.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.is_constant_jacobian` {#Identity.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.name` {#Identity.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Identity.validate_args` {#Identity.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.assert_scalar.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.assert_scalar.md
deleted file mode 100644
index d3618fa38c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.assert_scalar.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.framework.assert_scalar(tensor, name=None)` {#assert_scalar}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.assign_from_values.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.assign_from_values.md
deleted file mode 100644
index 6560f08281..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.assign_from_values.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.contrib.framework.assign_from_values(var_names_to_values)` {#assign_from_values}
-
-Creates an assignment operation from a given mapping.
-
-This function provides a mechanism for assigning values to variables
-in a way that does not fill the graph with large assignment values.
-
-##### Args:
-
-
-* <b>`var_names_to_values`</b>: A map from variable names to values.
-
-##### Returns:
-
-
-* <b>`assign_op`</b>: An `Operation` that assigns each of the given variables to the
- requested values.
-* <b>`feed_dict`</b>: The feed dictionary to use when evaluating `assign_op`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any of the given variable names were not found.
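-
-A typical use (a hedged sketch; the variable name `w` is illustrative):
-
-```python
-w = tf.get_variable('w', shape=[2], initializer=tf.zeros_initializer())
-assign_op, feed_dict = tf.contrib.framework.assign_from_values(
-    {'w': [1., 2.]})
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  # The values travel through feed_dict rather than as graph constants.
-  sess.run(assign_op, feed_dict=feed_dict)
-```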
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.create_global_step.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.create_global_step.md
deleted file mode 100644
index d41c8eb95a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.create_global_step.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.framework.create_global_step(graph=None)` {#create_global_step}
-
-Create global step tensor in graph.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph in which to create the global step. If missing, use default
- graph.
-
-##### Returns:
-
- Global step tensor.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if global step key is already defined.
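-
-Typical usage (a sketch):
-
-```python
-g = tf.Graph()
-with g.as_default():
-  global_step = tf.contrib.framework.create_global_step()
-# A second create_global_step() call on the same graph raises ValueError.
-```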
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.deprecated.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.deprecated.md
deleted file mode 100644
index 2daecf41e2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.deprecated.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.contrib.framework.deprecated(date, instructions)` {#deprecated}
-
-Decorator for marking functions or methods deprecated.
-
-This decorator logs a deprecation warning whenever the decorated function is
-called. It has the following format:
-
- <function> (from <module>) is deprecated and will be removed after <date>.
- Instructions for updating:
- <instructions>
-
-<function> will include the class name if it is a method.
-
-It also edits the docstring of the function: ' (deprecated)' is appended
-to the first line of the docstring and a deprecation notice is prepended
-to the rest of the docstring.
-
-##### Args:
-
-
-* <b>`date`</b>: String. The date the function is scheduled to be removed. Must be
- ISO 8601 (YYYY-MM-DD).
-* <b>`instructions`</b>: String. Instructions on how to update code using the
- deprecated function.
-
-##### Returns:
-
- Decorated function or method.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If date is not in ISO 8601 format, or instructions are empty.
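-
-For example (a sketch; the date and instructions are illustrative):
-
-```python
-from tensorflow.contrib.framework import deprecated
-
-@deprecated('2017-06-30', 'Use tf.add instead.')
-def my_add(a, b):
-  """Adds two tensors."""
-  return a + b
-
-# Calling my_add logs a warning of the form:
-#   my_add (from __main__) is deprecated and will be removed after
-#   2017-06-30. Instructions for updating: Use tf.add instead.
-```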
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.reduce_sum_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.reduce_sum_n.md
deleted file mode 100644
index 06b9822278..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.framework.reduce_sum_n.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.contrib.framework.reduce_sum_n(tensors, name=None)` {#reduce_sum_n}
-
-Reduce tensors to a scalar sum.
-
-This reduces each tensor in `tensors` to a scalar via `tf.reduce_sum`, then
-adds them via `tf.add_n`.
-
-##### Args:
-
-
-* <b>`tensors`</b>: List of tensors, all of the same numeric type.
-* <b>`name`</b>: Tensor name, and scope for all other ops.
-
-##### Returns:
-
-  Scalar tensor with the total sum of `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `tensors` is missing or empty.
-
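-Roughly equivalent to the following (a sketch):
-
-```python
-t1 = tf.constant([[1., 2.], [3., 4.]])
-t2 = tf.constant([5., 6.])
-total = tf.contrib.framework.reduce_sum_n([t1, t2])
-# Same as tf.add_n([tf.reduce_sum(t1), tf.reduce_sum(t2)]); evaluates to 21.
-```
-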
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.assign_renamed_collections_handler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.assign_renamed_collections_handler.md
deleted file mode 100644
index b1bab3eec0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.assign_renamed_collections_handler.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.graph_editor.assign_renamed_collections_handler(info, elem, elem_)` {#assign_renamed_collections_handler}
-
-Add the transformed element `elem_` to the (renamed) collections of `elem`.
-
-A collection is renamed only if it is not a known key, as described in
-`tf.GraphKeys`.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`elem`</b>: the original element (`tf.Tensor` or `tf.Operation`)
-* <b>`elem_`</b>: the transformed element
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.detach_control_inputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.detach_control_inputs.md
deleted file mode 100644
index cbdf5a943f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.detach_control_inputs.md
+++ /dev/null
@@ -1,10 +0,0 @@
-### `tf.contrib.graph_editor.detach_control_inputs(sgv)` {#detach_control_inputs}
-
-Detach all the external control inputs of the subgraph sgv.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.get_forward_walk_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.get_forward_walk_ops.md
deleted file mode 100644
index 7ac8cc0748..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.get_forward_walk_ops.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.contrib.graph_editor.get_forward_walk_ops(seed_ops, inclusive=True, within_ops=None, stop_at_ts=(), control_outputs=None)` {#get_forward_walk_ops}
-
-Do a forward graph walk and return all the visited ops.
-
-##### Args:
-
-
-* <b>`seed_ops`</b>: an iterable of operations from which the forward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the consumers of those tensors.
-* <b>`inclusive`</b>: if True the given seed_ops are also part of the resulting set.
-* <b>`within_ops`</b>: an iterable of `tf.Operation` within which the search is
- restricted. If `within_ops` is `None`, the search is performed within
- the whole graph.
-* <b>`stop_at_ts`</b>: an iterable of tensors at which the graph walk stops.
-* <b>`control_outputs`</b>: a `util.ControlOutputs` instance or None.
- If not `None`, it will be used while walking the graph forward.
-
-##### Returns:
-
- A Python set of all the `tf.Operation` ahead of `seed_ops`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `seed_ops` or `within_ops` cannot be converted to a list of
- `tf.Operation`.
-
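-A small sketch of a forward walk (op names are illustrative):
-
-```python
-ge = tf.contrib.graph_editor
-
-a = tf.constant(1., name='a')
-b = tf.add(a, 1., name='b')
-c = tf.multiply(b, 2., name='c')
-# Walking forward from a's op visits {a, b, c} (inclusive=True by default).
-ops = ge.get_forward_walk_ops([a.op])
-```
-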
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.get_tensors.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.get_tensors.md
deleted file mode 100644
index c000b26faa..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.get_tensors.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.graph_editor.get_tensors(graph)` {#get_tensors}
-
-Get all the tensors that are inputs or outputs of an op in the graph.
-
-##### Args:
-
-
-* <b>`graph`</b>: a `tf.Graph`.
-
-##### Returns:
-
- A list of `tf.Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if graph is not a `tf.Graph`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.replace_t_with_placeholder_handler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.replace_t_with_placeholder_handler.md
deleted file mode 100644
index a808129fa1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.replace_t_with_placeholder_handler.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.contrib.graph_editor.replace_t_with_placeholder_handler(info, t)` {#replace_t_with_placeholder_handler}
-
-Transform a tensor into a placeholder tensor.
-
-This handler is typically used to transform a subgraph input tensor into a
-placeholder.
-
-##### Args:
-
-
-* <b>`info`</b>: Transform._TmpInfo instance.
-* <b>`t`</b>: tensor whose input must be transformed into a place holder.
-
-##### Returns:
-
- The tensor generated by the newly created place holder.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.select_ops_and_ts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.select_ops_and_ts.md
deleted file mode 100644
index 02fae6be8f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.select_ops_and_ts.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.graph_editor.select_ops_and_ts(*args, **kwargs)` {#select_ops_and_ts}
-
-Helper to select operations and tensors.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not), 2) (arrays of)
-    `tf.Operation`, or 3) (arrays of) `tf.Tensor`. Regular expressions matching
-    tensors must start with the comment `"(?#ts)"`, for instance:
-    `"(?#ts)^foo/.*"`.
-* <b>`**kwargs`</b>: 'graph': `tf.Graph` in which to perform the regex query. This is
-    required when using regex.
-    'positive_filter': an elem is selected only if `positive_filter(elem)` is
-    `True`. This is optional.
-
-##### Returns:
-
- A tuple `(ops, ts)` where:
- `ops` is a list of `tf.Operation`, and
- `ts` is a list of `tf.Tensor`
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Tensor`
- or an (array of) `tf.Operation` or a string or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected or if a regular
- expression is used without passing a graph as a keyword argument.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.swap_ts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.swap_ts.md
deleted file mode 100644
index 2f2883b76e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.graph_editor.swap_ts.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.contrib.graph_editor.swap_ts(ts0, ts1, can_modify=None, cannot_modify=None)` {#swap_ts}
-
-For each pair of tensors (t0, t1), swap their ends.
-
-B0 B1     B0 B1
-|  |  =>   X
-A0 A1     A0 A1
-
-##### Args:
-
-
-* <b>`ts0`</b>: an object convertible to a list of `tf.Tensor`.
-* <b>`ts1`</b>: an object convertible to a list of `tf.Tensor`.
-* <b>`can_modify`</b>: iterable of operations which can be modified. Any operation
-    outside `can_modify` will be left untouched by this function.
-* <b>`cannot_modify`</b>: iterable of operations which cannot be modified.
- Any operation within cannot_modify will be left untouched by this
- function.
-
-##### Returns:
-
- The number of individual modifications made by the function.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ts0 or ts1 cannot be converted to a list of tf.Tensor.
-* <b>`TypeError`</b>: if can_modify or cannot_modify is not None and cannot be
- converted to a list of tf.Operation.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.bucketized_column.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.bucketized_column.md
deleted file mode 100644
index fa69df86d9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.bucketized_column.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.layers.bucketized_column(source_column, boundaries)` {#bucketized_column}
-
-Creates a _BucketizedColumn for discretizing dense input.
-
-##### Args:
-
-
-* <b>`source_column`</b>: A _RealValuedColumn defining dense column.
-* <b>`boundaries`</b>: A list of floats specifying the boundaries. It has to be sorted.
-
-##### Returns:
-
- A _BucketizedColumn.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if 'boundaries' is empty or not sorted.
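-
-For example (a hedged sketch; the column name is illustrative):
-
-```python
-layers = tf.contrib.layers
-
-age = layers.real_valued_column('age')
-# Four buckets: (-inf, 18), [18, 35), [35, 65), [65, +inf).
-age_buckets = layers.bucketized_column(age, boundaries=[18, 35, 65])
-```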
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.embed_sequence.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.embed_sequence.md
deleted file mode 100644
index 8ad845ba64..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.embed_sequence.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.contrib.layers.embed_sequence(ids, vocab_size=None, embed_dim=None, unique=False, initializer=None, regularizer=None, trainable=True, scope=None, reuse=None)` {#embed_sequence}
-
-Maps a sequence of symbols to a sequence of embeddings.
-
-Typical use case would be reusing embeddings between an encoder and decoder.
-
-##### Args:
-
-
-* <b>`ids`</b>: `[batch_size, doc_length]` `Tensor` of type `int32` or `int64`
- with symbol ids.
-* <b>`vocab_size`</b>: Integer number of symbols in vocabulary.
-* <b>`embed_dim`</b>: Integer number of dimensions for embedding matrix.
-* <b>`unique`</b>: If `True`, will first compute the unique set of indices, and then
- lookup each embedding once, repeating them in the output as needed.
-* <b>`initializer`</b>: An initializer for the embeddings, if `None` default for
- current scope is used.
-* <b>`regularizer`</b>: Optional regularizer for the embeddings.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`scope`</b>: Optional string specifying the variable scope for the op, required
- if `reuse=True`.
-* <b>`reuse`</b>: If `True`, variables inside the op will be reused.
-
-##### Returns:
-
- `Tensor` of `[batch_size, doc_length, embed_dim]` with embedded sequences.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `embed_dim` or `vocab_size` are not specified when
-  `reuse` is `None` or `False`.
-
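-A short sketch, with illustrative shapes (not taken from this doc):
-
-```python
-import tensorflow as tf
-
-ids = tf.constant([[1, 2, 3], [4, 5, 0]], dtype=tf.int64)  # [batch, doc_length]
-embedded = tf.contrib.layers.embed_sequence(ids, vocab_size=10, embed_dim=4)
-# embedded has shape [2, 3, 4]: one 4-d embedding per symbol.
-```
-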
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.summarize_activations.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.summarize_activations.md
deleted file mode 100644
index dc2e7a6044..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.layers.summarize_activations.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.layers.summarize_activations(name_filter=None, summarizer=summarize_activation)` {#summarize_activations}
-
-Summarize activations, using `summarize_activation` to summarize.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.ExportStrategy.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.ExportStrategy.__new__.md
deleted file mode 100644
index 68f3c7c314..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.ExportStrategy.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.contrib.learn.ExportStrategy.__new__(_cls, name, export_fn)` {#ExportStrategy.__new__}
-
-Create new instance of ExportStrategy(name, export_fn)
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.MetricSpec.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.MetricSpec.md
deleted file mode 100644
index 20b689f0a3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.MetricSpec.md
+++ /dev/null
@@ -1,181 +0,0 @@
-MetricSpec connects a model to metric functions.
-
-The MetricSpec class contains all information necessary to connect the
-output of a `model_fn` to the metrics (usually, streaming metrics) that are
-used in evaluation.
-
-It is passed in the `metrics` argument of `Estimator.evaluate`. The
-`Estimator` then knows which predictions, labels, and weights to use to call
-a given metric function.
-
-When building the ops to run in evaluation, `Estimator` will call
-`create_metric_ops`, which will connect the given `metric_fn` to the model
-as detailed in the docstring for `create_metric_ops`, and return the metric.
-
-Example:
-
-Assume a model has an input function that returns inputs containing
-(among other things) a tensor with key "input_key", and a labels dictionary
-containing "label_key". Assume also that the `model_fn` for this model
-returns a prediction with key "prediction_key".
-
-In order to compute the accuracy of the "prediction_key" prediction, we
-would add
-
-```
-"prediction accuracy": MetricSpec(metric_fn=prediction_accuracy_fn,
- prediction_key="prediction_key",
- label_key="label_key")
-```
-
-to the metrics argument to `evaluate`. `prediction_accuracy_fn` can be either
-a predefined function in metric_ops (e.g., `streaming_accuracy`) or a custom
-function you define.
-
-If we would like the accuracy to be weighted by "input_key", we can add that
-as the `weight_key` argument.
-
-```
-"prediction accuracy": MetricSpec(metric_fn=prediction_accuracy_fn,
- prediction_key="prediction_key",
- label_key="label_key",
- weight_key="input_key")
-```
-
-An end-to-end example is as follows:
-
-```
-estimator = tf.contrib.learn.Estimator(...)
-estimator.fit(...)
-_ = estimator.evaluate(
- input_fn=input_fn,
- steps=1,
- metrics={
- 'prediction accuracy':
- metric_spec.MetricSpec(
- metric_fn=prediction_accuracy_fn,
- prediction_key="prediction_key",
- label_key="label_key")
- })
-```
-- - -
-
-#### `tf.contrib.learn.MetricSpec.__init__(metric_fn, prediction_key=None, label_key=None, weight_key=None)` {#MetricSpec.__init__}
-
-Constructor.
-
-Creates a MetricSpec.
-
-##### Args:
-
-
-* <b>`metric_fn`</b>: A function to use as a metric. See `_adapt_metric_fn` for
- rules on how `predictions`, `labels`, and `weights` are passed to this
- function. This must return either a single `Tensor`, which is
- interpreted as a value of this metric, or a pair
- `(value_op, update_op)`, where `value_op` is the op to call to
- obtain the value of the metric, and `update_op` should be run for
- each batch to update internal state.
-* <b>`prediction_key`</b>: The key for a tensor in the `predictions` dict (output
- from the `model_fn`) to use as the `predictions` input to the
- `metric_fn`. Optional. If `None`, the `model_fn` must return a single
- tensor or a dict with only a single entry as `predictions`.
-* <b>`label_key`</b>: The key for a tensor in the `labels` dict (output from the
- `input_fn`) to use as the `labels` input to the `metric_fn`.
- Optional. If `None`, the `input_fn` must return a single tensor or a
- dict with only a single entry as `labels`.
-* <b>`weight_key`</b>: The key for a tensor in the `inputs` dict (output from the
- `input_fn`) to use as the `weights` input to the `metric_fn`.
- Optional. If `None`, no weights will be passed to the `metric_fn`.
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.__str__()` {#MetricSpec.__str__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.create_metric_ops(inputs, labels, predictions)` {#MetricSpec.create_metric_ops}
-
-Connect our `metric_fn` to the specified members of the given dicts.
-
-This function will call the `metric_fn` given in our constructor as follows:
-
-```
- metric_fn(predictions[self.prediction_key],
- labels[self.label_key],
- weights=weights[self.weight_key])
-```
-
-And returns the result. The `weights` argument is only passed if
-`self.weight_key` is not `None`.
-
-`predictions` and `labels` may be single tensors as well as dicts. If
-`predictions` is a single tensor, `self.prediction_key` must be `None`. If
-`predictions` is a single element dict, `self.prediction_key` is allowed to
-be `None`. Conversely, if `labels` is a single tensor, `self.label_key` must
-be `None`. If `labels` is a single element dict, `self.label_key` is allowed
-to be `None`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A dict of inputs produced by the `input_fn`
-* <b>`labels`</b>: A dict of labels or a single label tensor produced by the
- `input_fn`.
-* <b>`predictions`</b>: A dict of predictions or a single tensor produced by the
- `model_fn`.
-
-##### Returns:
-
- The result of calling `metric_fn`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` or `labels` is a single `Tensor` and
- `self.prediction_key` or `self.label_key` is not `None`; or if
- `self.label_key` is `None` but `labels` is a dict with more than one
- element, or if `self.prediction_key` is `None` but `predictions` is a
- dict with more than one element.
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.label_key` {#MetricSpec.label_key}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.metric_fn` {#MetricSpec.metric_fn}
-
-Metric function.
-
-This function accepts named args: `predictions`, `labels`, `weights`. It
-returns a single `Tensor` or `(value_op, update_op)` pair. See `metric_fn`
-constructor argument for more details.
-
-##### Returns:
-
- Function, see `metric_fn` constructor argument for more details.
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.prediction_key` {#MetricSpec.prediction_key}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.MetricSpec.weight_key` {#MetricSpec.weight_key}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.NotFittedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.NotFittedError.md
deleted file mode 100644
index 6101ade1da..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.NotFittedError.md
+++ /dev/null
@@ -1,17 +0,0 @@
-Exception class to raise if estimator is used before fitting.
-
-This class inherits from both ValueError and AttributeError to help with
-exception handling and backward compatibility.
-
-Examples:
->>> from sklearn.svm import LinearSVC
->>> from sklearn.exceptions import NotFittedError
->>> try:
-... LinearSVC().predict([[1, 2], [2, 3], [3, 4]])
-... except NotFittedError as e:
-... print(repr(e))
-... # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
-NotFittedError('This LinearSVC instance is not fitted yet',)
-
-Copied from
-https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/exceptions.py
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.RunConfig.get_task_id.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.RunConfig.get_task_id.md
deleted file mode 100644
index 1c2856df21..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.RunConfig.get_task_id.md
+++ /dev/null
@@ -1,12 +0,0 @@
-#### `tf.contrib.learn.RunConfig.get_task_id()` {#RunConfig.get_task_id}
-
-Returns the task index from the `TF_CONFIG` environment variable.
-
-If you have a ClusterConfig instance, you can just access its task_id
-property instead of calling this function and re-parsing the environment
-variable.
-
-##### Returns:
-
- `TF_CONFIG['task']['index']`. Defaults to 0.
-
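-As a hedged sketch, this is roughly what the parsing amounts to (the
-`TF_CONFIG` value below is invented for illustration):
-
-```python
-import json
-import os
-
-os.environ["TF_CONFIG"] = json.dumps({"task": {"type": "worker", "index": 2}})
-config = tf.contrib.learn.RunConfig()
-config.get_task_id()  # ==> 2
-```
-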
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.extract_dask_labels.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.extract_dask_labels.md
deleted file mode 100644
index 2ccdd8a8e2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.extract_dask_labels.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.contrib.learn.extract_dask_labels(labels)` {#extract_dask_labels}
-
-Extract data from dask.Series or dask.DataFrame for labels.
-
-Given a distributed dask.DataFrame or dask.Series containing exactly one
-column or name, this operation returns a single dask.DataFrame or dask.Series
-that can be iterated over.
-
-##### Args:
-
-
-* <b>`labels`</b>: A distributed dask.DataFrame or dask.Series with exactly one
- column or name.
-
-##### Returns:
-
- A dask.DataFrame or dask.Series that can be iterated over.
- If the supplied argument is neither a dask.DataFrame nor a dask.Series this
- operation returns it without modification.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the supplied dask.DataFrame contains more than one
- column or the supplied dask.Series contains more than
- one name.
-
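-A minimal sketch, assuming `dask` and `pandas` are installed (the data is
-invented for illustration):
-
-```python
-import dask.dataframe as dd
-import pandas as pd
-import tensorflow as tf
-
-df = pd.DataFrame({"label": [0, 1, 1, 0]})
-labels = dd.from_pandas(df, npartitions=2)  # single-column dask.DataFrame
-labels = tf.contrib.learn.extract_dask_labels(labels)
-```
-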
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.monitors.PrintTensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.monitors.PrintTensor.md
deleted file mode 100644
index 4044c2ad30..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.monitors.PrintTensor.md
+++ /dev/null
@@ -1,187 +0,0 @@
-Prints given tensors every N steps.
-
-This is an `EveryN` monitor and has consistent semantic for `every_n`
-and `first_n`.
-
-The tensors will be printed to the log, with `INFO` severity.
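-
-A hedged usage sketch (the tensor name, `estimator`, and `train_input_fn` are
-assumptions for illustration):
-
-```python
-monitor = tf.contrib.learn.monitors.PrintTensor(
-    tensor_names={"loss": "loss:0"}, every_n=50, first_n=5)
-estimator.fit(input_fn=train_input_fn, steps=1000, monitors=[monitor])
-```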
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.__init__(tensor_names, every_n=100, first_n=1)` {#PrintTensor.__init__}
-
-Initializes a PrintTensor monitor.
-
-##### Args:
-
-
-* <b>`tensor_names`</b>: `dict` of tag to tensor names or
- `iterable` of tensor names (strings).
-* <b>`every_n`</b>: `int`, print every N steps. See `EveryN`.
-* <b>`first_n`</b>: `int`, also print the first N steps. See `EveryN`.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.begin(max_steps=None)` {#PrintTensor.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.end(session=None)` {#PrintTensor.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.epoch_begin(epoch)` {#PrintTensor.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.epoch_end(epoch)` {#PrintTensor.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.every_n_post_step(step, session)` {#PrintTensor.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.every_n_step_begin(step)` {#PrintTensor.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.every_n_step_end(step, outputs)` {#PrintTensor.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.post_step(step, session)` {#PrintTensor.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.run_on_all_workers` {#PrintTensor.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.set_estimator(estimator)` {#PrintTensor.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.step_begin(step)` {#PrintTensor.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.PrintTensor.step_end(step, output)` {#PrintTensor.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
- the value resulted from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.read_batch_examples.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.read_batch_examples.md
deleted file mode 100644
index 4f389b2bc9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.learn.read_batch_examples.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.contrib.learn.read_batch_examples(file_pattern, batch_size, reader, randomize_input=True, num_epochs=None, queue_capacity=10000, num_threads=1, read_batch_size=1, parse_fn=None, name=None, seed=None)` {#read_batch_examples}
-
-Adds operations to read, queue, batch `Example` protos.
-
-Given a file pattern (or list of files), this sets up a queue of file names,
-reads `Example` protos using the provided `reader`, and uses a batch queue to
-create batches of examples of size `batch_size`.
-
-All queue runners are added to the queue runners collection, and may be
-started via `start_queue_runners`.
-
-All ops are added to the default graph.
-
-Use `parse_fn` if you need to do parsing / processing on single examples.
-
-##### Args:
-
-
-* <b>`file_pattern`</b>: List of files or pattern of file paths containing
- `Example` records. See `tf.gfile.Glob` for pattern rules.
-* <b>`batch_size`</b>: An int or scalar `Tensor` specifying the batch size to use.
-* <b>`reader`</b>: A function or class that returns an object with
- `read` method, (filename tensor) -> (example tensor).
-* <b>`randomize_input`</b>: Whether the input should be randomized.
-* <b>`num_epochs`</b>: Integer specifying the number of times to read through the
- dataset. If `None`, cycles through the dataset forever.
- NOTE - If specified, creates a variable that must be initialized, so call
- `tf.global_variables_initializer()` and run the op in a session.
-* <b>`queue_capacity`</b>: Capacity for input queue.
-* <b>`num_threads`</b>: The number of threads enqueuing examples.
-* <b>`read_batch_size`</b>: An int or scalar `Tensor` specifying the number of
-  records to read at once.
-* <b>`parse_fn`</b>: Parsing function, takes `Example` Tensor returns parsed
- representation. If `None`, no parsing is done.
-* <b>`name`</b>: Name of resulting op.
-* <b>`seed`</b>: An integer (optional). Seed used if randomize_input == True.
-
-##### Returns:
-
- String `Tensor` of batched `Example` proto.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: for invalid inputs.
-
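-A minimal sketch (the file pattern is illustrative; `tf.TFRecordReader` is
-one reader that fits the `(filename tensor) -> (example tensor)` contract):
-
-```python
-examples = tf.contrib.learn.read_batch_examples(
-    file_pattern="/tmp/data/*.tfrecord",  # hypothetical path
-    batch_size=32,
-    reader=tf.TFRecordReader,
-    num_epochs=1)  # creates a variable; run tf.global_variables_initializer()
-```
-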
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.linalg.LinearOperatorMatrix.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.linalg.LinearOperatorMatrix.md
deleted file mode 100644
index 0f07a4457d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.linalg.LinearOperatorMatrix.md
+++ /dev/null
@@ -1,519 +0,0 @@
-`LinearOperator` that wraps a [batch] matrix.
-
-This operator wraps a [batch] matrix `A` (which is a `Tensor`) with shape
-`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is
-an `M x N` matrix.
-
-```python
-# Create a 2 x 2 linear operator.
-matrix = [[1., 2.], [3., 4.]]
-operator = LinearOperatorMatrix(matrix)
-
-operator.to_dense()
-==> [[1., 2.]
- [3., 4.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_determinant()
-==> scalar Tensor
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor
-
-# Create a [2, 3] batch of 4 x 4 linear operators.
-matrix = tf.random_normal(shape=[2, 3, 4, 4])
-operator = LinearOperatorMatrix(matrix)
-```
-
-#### Shape compatibility
-
-This operator acts on [batch] matrices with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [M, N], with b >= 0
-x.shape = [B1,...,Bb] + [N, R], with R >= 0.
-```
-
-#### Performance
-
-`LinearOperatorMatrix` has exactly the same performance as would be achieved
-by using standard `TensorFlow` matrix ops. Intelligent choices are made
-based on the following initialization hints.
-
-* If `dtype` is real, and `is_self_adjoint` and `is_positive_definite`, a
- Cholesky factorization is used for the determinant and solve.
-
-In all cases, suppose `operator` is a `LinearOperatorMatrix` of shape
-`[M, N]`, and `x.shape = [N, R]`. Then
-
-* `operator.apply(x)` is `O(M * N * R)`.
-* If `M=N`, `operator.solve(x)` is `O(N^3 * R)`.
-* If `M=N`, `operator.determinant()` is `O(N^3)`.
-
-If instead `operator` and `x` have shape `[B1,...,Bb, M, N]` and
-`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.__init__(matrix, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, name='LinearOperatorMatrix')` {#LinearOperatorMatrix.__init__}
-
-Initialize a `LinearOperatorMatrix`.
-
-##### Args:
-
-
-* <b>`matrix`</b>: Shape `[B1,...,Bb, M, N]` with `b >= 0`, `M, N >= 0`.
- Allowed dtypes: `float32`, `float64`, `complex64`, `complex128`.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
-  meaning the real part of all eigenvalues is positive. We do not require
-  the operator to be self-adjoint to be positive-definite. See
-  https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `matrix.dtype` is not an allowed type.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorMatrix.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.apply(x, adjoint=False, name='apply')` {#LinearOperatorMatrix.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.assert_non_singular(name='assert_non_singular')` {#LinearOperatorMatrix.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorMatrix.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorMatrix.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.batch_shape` {#LinearOperatorMatrix.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorMatrix.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.determinant(name='det')` {#LinearOperatorMatrix.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.diag_part(name='diag_part')` {#LinearOperatorMatrix.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.domain_dimension` {#LinearOperatorMatrix.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorMatrix.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.dtype` {#LinearOperatorMatrix.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.graph_parents` {#LinearOperatorMatrix.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.is_non_singular` {#LinearOperatorMatrix.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.is_positive_definite` {#LinearOperatorMatrix.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.is_self_adjoint` {#LinearOperatorMatrix.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.is_square` {#LinearOperatorMatrix.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.log_abs_determinant(name='log_abs_det')` {#LinearOperatorMatrix.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.name` {#LinearOperatorMatrix.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.range_dimension` {#LinearOperatorMatrix.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorMatrix.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.shape` {#LinearOperatorMatrix.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.shape_tensor(name='shape_tensor')` {#LinearOperatorMatrix.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorMatrix.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.tensor_rank` {#LinearOperatorMatrix.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorMatrix.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorMatrix.to_dense(name='to_dense')` {#LinearOperatorMatrix.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.losses.log_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.losses.log_loss.md
deleted file mode 100644
index 2d786d790f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.losses.log_loss.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.contrib.losses.log_loss(*args, **kwargs)` {#log_loss}
-
-Adds a Log Loss term to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.log_loss instead. Note that the order of the predictions and labels arguments was changed.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided, then
-the loss is simply scaled by the given value. If `weights` is a tensor of size
-[batch_size], then the total loss for each sample of the batch is rescaled
-by the corresponding element in the `weights` vector. If the shape of
-`weights` matches the shape of `predictions`, then the loss of each
-measurable element of `predictions` is scaled by the corresponding value of
-`weights`.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted outputs.
-* <b>`labels`</b>: The ground truth output tensor, same dimensions as 'predictions'.
-* <b>`weights`</b>: Coefficients for the loss. This must be a scalar, a tensor of
-  shape [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`epsilon`</b>: A small increment to add to avoid taking a log of zero.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `labels` or
- if the shape of `weights` is invalid.
-
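-A hedged sketch of the deprecated call (values invented; note that
-`predictions` comes first in this contrib version):
-
-```python
-predictions = tf.constant([0.9, 0.2, 0.8])
-labels = tf.constant([1.0, 0.0, 1.0])
-loss = tf.contrib.losses.log_loss(predictions, labels, epsilon=1e-7)
-```
-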
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.set_intersection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.set_intersection.md
deleted file mode 100644
index fce0131626..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.set_intersection.md
+++ /dev/null
@@ -1,63 +0,0 @@
-### `tf.contrib.metrics.set_intersection(a, b, validate_indices=True)` {#set_intersection}
-
-Compute set intersection of elements in last dimension of `a` and `b`.
-
-All but the last dimension of `a` and `b` must match.
-
-Example:
-
-```python
- a = [
- [
- [
- [1, 2],
- [3],
- ],
- [
- [4],
- [5, 6],
- ],
- ],
- ]
- b = [
- [
- [
- [1, 3],
- [2],
- ],
- [
- [4, 5],
- [5, 6, 7, 8],
- ],
- ],
- ]
- set_intersection(a, b) = [
- [
- [
- [1],
- [],
- ],
- [
- [4],
- [5, 6],
- ],
- ],
- ]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` or `SparseTensor` of the same type as `b`. If sparse, indices
- must be sorted in row-major order.
-* <b>`b`</b>: `Tensor` or `SparseTensor` of the same type as `a`. If sparse, indices
- must be sorted in row-major order.
-* <b>`validate_indices`</b>: Whether to validate the order and range of sparse indices
- in `a` and `b`.
-
-##### Returns:
-
- A `SparseTensor` whose shape is the same rank as `a` and `b`, and all but
- the last dimension the same. Elements along the last dimension contain the
- intersections.
-
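-The op also accepts dense inputs, where the last dimension holds the set
-elements. A runnable sketch (values invented):
-
-```python
-a = tf.constant([[1, 2], [3, 4]], dtype=tf.int64)
-b = tf.constant([[1, 3], [4, 5]], dtype=tf.int64)
-result = tf.contrib.metrics.set_intersection(a, b)  # a SparseTensor
-with tf.Session() as sess:
-  print(sess.run(tf.sparse_tensor_to_dense(result)))  # ==> [[1], [4]]
-```
-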
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_false_positives_at_thresholds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_false_positives_at_thresholds.md
deleted file mode 100644
index c8a078eecd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_false_positives_at_thresholds.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.metrics.streaming_false_positives_at_thresholds(predictions, labels, thresholds, weights=None)` {#streaming_false_positives_at_thresholds}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_mean_iou.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_mean_iou.md
deleted file mode 100644
index e6e5cae097..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_mean_iou.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.contrib.metrics.streaming_mean_iou(predictions, labels, num_classes, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_iou}
-
-Calculate per-step mean Intersection-Over-Union (mIOU).
-
-Mean Intersection-Over-Union is a common evaluation metric for
-semantic image segmentation, which first computes the IOU for each
-semantic class and then computes the average over classes.
-
-IOU is defined as follows:
-
-    IOU = true_positive / (true_positive + false_positive + false_negative).
-
-The predictions are accumulated in a confusion matrix, weighted by `weights`,
-and mIOU is then calculated from it.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `mean_iou`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of prediction results for semantic labels, whose
- shape is [batch size] and type `int32` or `int64`. The tensor will be
- flattened, if its rank > 1.
-* <b>`labels`</b>: A `Tensor` of ground truth labels with shape [batch size] and of
- type `int32` or `int64`. The tensor will be flattened, if its rank > 1.
-* <b>`num_classes`</b>: The possible number of labels the prediction task can
- have. This value must be provided, since a confusion matrix of
- dimension = [num_classes, num_classes] will be allocated.
-* <b>`weights`</b>: An optional `Tensor` whose shape is broadcastable to `predictions`.
-* <b>`metrics_collections`</b>: An optional list of collections that `mean_iou`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections `update_op` should be
- added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_iou`</b>: A `Tensor` representing the mean intersection-over-union.
-* <b>`update_op`</b>: An operation that increments the confusion matrix.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
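-A short end-to-end sketch (values invented; streaming metrics live in local
-variables, hence the `tf.local_variables_initializer()` call):
-
-```python
-predictions = tf.constant([0, 1, 1, 0])
-labels = tf.constant([0, 1, 0, 0])
-mean_iou, update_op = tf.contrib.metrics.streaming_mean_iou(
-    predictions, labels, num_classes=2)
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)
-  print(sess.run(mean_iou))
-```
-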
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_recall.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_recall.md
deleted file mode 100644
index 7b9e286f13..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_recall.md
+++ /dev/null
@@ -1,47 +0,0 @@
-### `tf.contrib.metrics.streaming_recall(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_recall}
-
-Computes the recall of the predictions with respect to the labels.
-
-The `streaming_recall` function creates two local variables, `true_positives`
-and `false_negatives`, that are used to compute the recall. This value is
-ultimately returned as `recall`, an idempotent operation that simply divides
-`true_positives` by the sum of `true_positives` and `false_negatives`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` that updates these variables and returns the `recall`. `update_op`
-weights each prediction by the corresponding value in `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `bool` `Tensor` of arbitrary shape.
-* <b>`labels`</b>: The ground truth values, a `bool` `Tensor` whose dimensions must
- match `predictions`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `recall` should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`recall`</b>: Scalar float `Tensor` with the value of `true_positives` divided
- by the sum of `true_positives` and `false_negatives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_negatives` variables appropriately and whose value matches
- `recall`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
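-A hedged sketch with invented values (2 true positives out of 3 positive
-labels, so recall is 2/3):
-
-```python
-predictions = tf.constant([True, True, False, True])
-labels = tf.constant([True, True, True, False])
-recall, update_op = tf.contrib.metrics.streaming_recall(predictions, labels)
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)
-  print(sess.run(recall))  # ==> 0.6666...
-```
-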
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_root_mean_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_root_mean_squared_error.md
deleted file mode 100644
index 7bdd57690e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.metrics.streaming_root_mean_squared_error.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.contrib.metrics.streaming_root_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_root_mean_squared_error}
-
-Computes the root mean squared error between the labels and predictions.
-
-The `streaming_root_mean_squared_error` function creates two local variables,
-`total` and `count` that are used to compute the root mean squared error.
-This average is weighted by `weights`, and it is ultimately returned as
-`root_mean_squared_error`: an idempotent operation that takes the square root
-of the division of `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`root_mean_squared_error`. Internally, a `squared_error` operation computes
-the element-wise square of the difference between `predictions` and `labels`.
-Then `update_op` increments `total` with the reduced sum of the product of
-`weights` and `squared_error`, and it increments `count` with the reduced sum
-of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that
- `root_mean_squared_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`root_mean_squared_error`</b>: A `Tensor` representing the current mean, the value
- of `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `root_mean_squared_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.rnn.LSTMStateTuple.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.rnn.LSTMStateTuple.md
deleted file mode 100644
index 7db1e1277e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.rnn.LSTMStateTuple.md
+++ /dev/null
@@ -1,54 +0,0 @@
-Tuple used by LSTM Cells for `state_size`, `zero_state`, and output state.
-
-Stores two elements: `(c, h)`, in that order.
-
-Only used when `state_is_tuple=True`.
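-
-A tiny sketch (shapes invented):
-
-```python
-c = tf.zeros([32, 128])  # cell state
-h = tf.zeros([32, 128])  # hidden / output state
-state = tf.contrib.rnn.LSTMStateTuple(c, h)
-state.c is c and state.h is h  # ==> True
-```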
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.__getnewargs__()` {#LSTMStateTuple.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.__getstate__()` {#LSTMStateTuple.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.__new__(_cls, c, h)` {#LSTMStateTuple.__new__}
-
-Create new instance of LSTMStateTuple(c, h)
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.__repr__()` {#LSTMStateTuple.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.c` {#LSTMStateTuple.c}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.dtype` {#LSTMStateTuple.dtype}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMStateTuple.h` {#LSTMStateTuple.h}
-
-Alias for field number 1
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.rnn.LayerNormBasicLSTMCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.rnn.LayerNormBasicLSTMCell.md
deleted file mode 100644
index 814388a1a2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.rnn.LayerNormBasicLSTMCell.md
+++ /dev/null
@@ -1,84 +0,0 @@
-LSTM unit with layer normalization and recurrent dropout.
-
-This class adds layer normalization and recurrent dropout to a
-basic LSTM unit. Layer normalization implementation is based on:
-
- https://arxiv.org/abs/1607.06450.
-
-"Layer Normalization"
-Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton
-
-and is applied before the internal nonlinearities.
-Recurrent dropout is based on:
-
- https://arxiv.org/abs/1603.05118
-
-"Recurrent Dropout without Memory Loss"
-Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth.
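-
-A hedged sketch of one step through the cell (all sizes invented):
-
-```python
-cell = tf.contrib.rnn.LayerNormBasicLSTMCell(num_units=64,
-                                             dropout_keep_prob=0.9)
-inputs = tf.zeros([8, 16])  # [batch_size, input_dim]
-state = cell.zero_state(batch_size=8, dtype=tf.float32)
-output, new_state = cell(inputs, state)  # output: [8, 64]
-```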
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.__call__(inputs, state, scope=None)` {#LayerNormBasicLSTMCell.__call__}
-
-LSTM cell with layer normalization and recurrent dropout.
-
-
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, activation=tanh, layer_norm=True, norm_gain=1.0, norm_shift=0.0, dropout_keep_prob=1.0, dropout_prob_seed=None)` {#LayerNormBasicLSTMCell.__init__}
-
-Initializes the basic LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
-* <b>`input_size`</b>: Deprecated and unused.
-* <b>`activation`</b>: Activation function of the inner states.
-* <b>`layer_norm`</b>: If `True`, layer normalization will be applied.
-* <b>`norm_gain`</b>: float, The layer normalization gain initial value. If
- `layer_norm` has been set to `False`, this argument will be ignored.
-* <b>`norm_shift`</b>: float, The layer normalization shift initial value. If
- `layer_norm` has been set to `False`, this argument will be ignored.
-* <b>`dropout_keep_prob`</b>: unit Tensor or float between 0 and 1 representing the
- recurrent dropout probability value. If float and 1.0, no dropout will
- be applied.
-* <b>`dropout_prob_seed`</b>: (optional) integer, the randomness seed.
-
-
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.output_size` {#LayerNormBasicLSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.state_size` {#LayerNormBasicLSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LayerNormBasicLSTMCell.zero_state(batch_size, dtype)` {#LayerNormBasicLSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is an
-  `N-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_tensor_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_tensor_proto.md
deleted file mode 100644
index 0f6470c317..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.contrib.util.make_tensor_proto.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.contrib.util.make_tensor_proto(values, dtype=None, shape=None, verify_shape=False)` {#make_tensor_proto}
-
-Create a TensorProto.
-
-##### Args:
-
-
-* <b>`values`</b>: Values to put in the TensorProto.
-* <b>`dtype`</b>: Optional tensor_pb2 DataType value.
-* <b>`shape`</b>: List of integers representing the dimensions of tensor.
-* <b>`verify_shape`</b>: Boolean that enables verification of a shape of values.
-
-##### Returns:
-
- A TensorProto. Depending on the type, it may contain data in the
- "tensor_content" attribute, which is not directly useful to Python programs.
- To access the values you should convert the proto back to a numpy ndarray
- with tensor_util.MakeNdarray(proto).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if unsupported types are provided.
-* <b>`ValueError`</b>: if arguments have inappropriate values or if verify_shape is
-  True and the shape of values does not match the shape argument.
-
-make_tensor_proto accepts "values" of a python scalar, a python list, a
-numpy ndarray, or a numpy scalar.
-
-If "values" is a python scalar or a python list, make_tensor_proto
-first convert it to numpy ndarray. If dtype is None, the
-conversion tries its best to infer the right numpy data
-type. Otherwise, the resulting numpy array has a compatible data
-type with the given dtype.
-
-In either case above, the numpy ndarray (either caller-provided or
-auto-converted) must have a type compatible with dtype.
-
-make_tensor_proto then converts the numpy array to a tensor proto.
-
-If "shape" is None, the resulting tensor proto represents the numpy
-array precisely.
-
-Otherwise, "shape" specifies the tensor's shape and the numpy array
-cannot have more elements than "shape" specifies.
-
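-A round-trip sketch (array values invented):
-
-```python
-import numpy as np
-from tensorflow.contrib.util import make_tensor_proto
-from tensorflow.python.framework import tensor_util
-
-proto = make_tensor_proto(np.arange(6, dtype=np.float32), shape=[2, 3])
-array = tensor_util.MakeNdarray(proto)  # back to a [2, 3] numpy ndarray
-```
-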
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.cumsum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.cumsum.md
deleted file mode 100644
index baa00e57d5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.cumsum.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.cumsum(x, axis=0, exclusive=False, reverse=False, name=None)` {#cumsum}
-
-Compute the cumulative sum of the tensor `x` along `axis`.
-
-By default, this op performs an inclusive cumsum, which means that the first
-element of the input is identical to the first element of the output:
-```prettyprint
-tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]
-```
-
-By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed
-instead:
-```prettyprint
-tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]
-```
-
-By setting the `reverse` kwarg to `True`, the cumsum is performed in the
-opposite direction:
-```prettyprint
-tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]
-```
-This is more efficient than using separate `tf.reverse` ops.
-
-The `reverse` and `exclusive` kwargs can also be combined:
-```prettyprint
-tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`,
- `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
- `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`axis`</b>: A `Tensor` of type `int32` (default: 0).
-* <b>`exclusive`</b>: A `bool` (default: False). If `True`, perform an exclusive
-  cumsum as described above.
-* <b>`reverse`</b>: A `bool` (default: False).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.decode_json_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.decode_json_example.md
deleted file mode 100644
index bf5184c40a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.decode_json_example.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.decode_json_example(json_examples, name=None)` {#decode_json_example}
-
-Convert JSON-encoded Example records to binary protocol buffer strings.
-
-This op translates a tensor containing Example records, encoded using
-the [standard JSON
-mapping](https://developers.google.com/protocol-buffers/docs/proto3#json),
-into a tensor containing the same records encoded as binary protocol
-buffers. The resulting tensor can then be fed to any of the other
-Example-parsing ops.
-
-##### Args:
-
-
-* <b>`json_examples`</b>: A `Tensor` of type `string`.
- Each string is a JSON object serialized according to the JSON
- mapping of the Example proto.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
- Each string is a binary Example protocol buffer corresponding
- to the respective element of `json_examples`.
-
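-A hedged sketch (the JSON literal below follows the proto3 JSON mapping of
-`Example`, but is invented for illustration):
-
-```python
-json_examples = tf.constant([
-    '{"features": {"feature": {"x": {"floatList": {"value": [1.0]}}}}}'
-])
-binary_examples = tf.decode_json_example(json_examples)
-parsed = tf.parse_example(binary_examples,
-                          {"x": tf.FixedLenFeature([1], tf.float32)})
-```
-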
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.dequantize.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.dequantize.md
deleted file mode 100644
index edf0de7a04..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.dequantize.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.dequantize(input, min_range, max_range, mode=None, name=None)` {#dequantize}
-
-Dequantize the 'input' tensor into a float Tensor.
-
-[min_range, max_range] are scalar floats that specify the range for
-the 'input' data. The 'mode' attribute controls exactly which calculations are
-used to convert the quantized values back to their float equivalents.
-
-In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
-
-```
-if T == qint8, in[i] += (range(T) + 1)/ 2.0
-out[i] = min_range + (in[i]* (max_range - min_range) / range(T))
-```
-here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`
-
-*MIN_COMBINED Mode Example*
-
-If the input comes from a QuantizedRelu6, the output type is
-quint8 (range of 0-255) but the possible range of QuantizedRelu6 is
-0-6. The min_range and max_range values are therefore 0.0 and 6.0.
-Dequantize on quint8 will take each value, cast to float, and multiply
-by 6 / 255.
-Note that if quantizedtype is qint8, the operation will additionally add
-128 to each value prior to casting.
-
-If the mode is 'MIN_FIRST', then this approach is used:
-
-```
-number_of_steps = 1 << (# of bits in T)
-range_adjust = number_of_steps / (number_of_steps - 1)
-range = (range_max - range_min) * range_adjust
-range_scale = range / number_of_steps
-result = range_min + ((input - numeric_limits<T>::min()) * range_scale)
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
-* <b>`min_range`</b>: A `Tensor` of type `float32`.
- The minimum scalar value possibly produced for the input.
-* <b>`max_range`</b>: A `Tensor` of type `float32`.
- The maximum scalar value possibly produced for the input.
-* <b>`mode`</b>: An optional `string` from: `"MIN_COMBINED", "MIN_FIRST"`. Defaults to `"MIN_COMBINED"`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.dynamic_partition.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.dynamic_partition.md
deleted file mode 100644
index e24bc8c39e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.dynamic_partition.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.dynamic_partition(data, partitions, num_partitions, name=None)` {#dynamic_partition}
-
-Partitions `data` into `num_partitions` tensors using indices from `partitions`.
-
-For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]`
-becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i`
-are placed in `outputs[i]` in lexicographic order of `js`, and the first
-dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`.
-In detail,
-
-```python
- outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]
-
- outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
-```
-
-`data.shape` must start with `partitions.shape`.
-
-For example:
-
-```python
- # Scalar partitions.
- partitions = 1
- num_partitions = 2
- data = [10, 20]
- outputs[0] = [] # Empty with shape [0, 2]
- outputs[1] = [[10, 20]]
-
- # Vector partitions.
- partitions = [0, 0, 1, 1, 0]
- num_partitions = 2
- data = [10, 20, 30, 40, 50]
- outputs[0] = [10, 20, 50]
- outputs[1] = [30, 40]
-```
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/DynamicPartition.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`.
-* <b>`partitions`</b>: A `Tensor` of type `int32`.
- Any shape. Indices in the range `[0, num_partitions)`.
-* <b>`num_partitions`</b>: An `int` that is `>= 1`.
- The number of partitions to output.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A list of `num_partitions` `Tensor` objects of the same type as data.
-
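-As a quick check, the vector-partitions example above can be run directly
-(a minimal sketch using the TF 1.x session API):
-
-```python
-import tensorflow as tf
-
-data = tf.constant([10, 20, 30, 40, 50])
-partitions = tf.constant([0, 0, 1, 1, 0])
-outputs = tf.dynamic_partition(data, partitions, num_partitions=2)
-
-with tf.Session() as sess:
-  print(sess.run(outputs))  # [array([10, 20, 50]), array([30, 40])]
-```
-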
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.erfc.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.erfc.md
deleted file mode 100644
index 62c13418f7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.erfc.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.erfc(x, name=None)` {#erfc}
-
-Computes the complementary error function of `x` element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.AbortedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.AbortedError.md
deleted file mode 100644
index f2bc775dcb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.AbortedError.md
+++ /dev/null
@@ -1,15 +0,0 @@
-The operation was aborted, typically due to a concurrent action.
-
-For example, running a
-[`queue.enqueue()`](../../api_docs/python/io_ops.md#QueueBase.enqueue)
-operation may raise `AbortedError` if a
-[`queue.close()`](../../api_docs/python/io_ops.md#QueueBase.close) operation
-previously ran.
-
-- - -
-
-#### `tf.errors.AbortedError.__init__(node_def, op, message)` {#AbortedError.__init__}
-
-Creates an `AbortedError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.InternalError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.InternalError.md
deleted file mode 100644
index dd229d2a3d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.InternalError.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Raised when the system experiences an internal error.
-
-This exception is raised when some invariant expected by the runtime
-has been broken. Catching this exception is not recommended.
-
-- - -
-
-#### `tf.errors.InternalError.__init__(node_def, op, message)` {#InternalError.__init__}
-
-Creates an `InternalError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.NotFoundError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.NotFoundError.md
deleted file mode 100644
index 49fec3c55c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.NotFoundError.md
+++ /dev/null
@@ -1,14 +0,0 @@
-Raised when a requested entity (e.g., a file or directory) was not found.
-
-For example, running the
-[`tf.WholeFileReader.read()`](../../api_docs/python/io_ops.md#WholeFileReader)
-operation could raise `NotFoundError` if it receives the name of a file that
-does not exist.
-
-- - -
-
-#### `tf.errors.NotFoundError.__init__(node_def, op, message)` {#NotFoundError.__init__}
-
-Creates a `NotFoundError`.
-
-
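-A minimal sketch of catching this error around a file read (the path is
-hypothetical and deliberately missing):
-
-```python
-import tensorflow as tf
-
-contents = tf.read_file("/no/such/file")
-with tf.Session() as sess:
-  try:
-    sess.run(contents)
-  except tf.errors.NotFoundError as e:
-    print("missing file:", e.message)
-```
-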
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.UnimplementedError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.UnimplementedError.md
deleted file mode 100644
index 945daa1a22..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.errors.UnimplementedError.md
+++ /dev/null
@@ -1,15 +0,0 @@
-Raised when an operation has not been implemented.
-
-Some operations may raise this error when passed otherwise-valid
-arguments that they do not currently support. For example, running
-the [`tf.nn.max_pool()`](../../api_docs/python/nn.md#max_pool) operation
-would raise this error if pooling was requested on the batch dimension,
-because this is not yet supported.
-
-- - -
-
-#### `tf.errors.UnimplementedError.__init__(node_def, op, message)` {#UnimplementedError.__init__}
-
-Creates an `UnimplementedError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fake_quant_with_min_max_args.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fake_quant_with_min_max_args.md
deleted file mode 100644
index fcad8cb500..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.fake_quant_with_min_max_args.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.fake_quant_with_min_max_args(inputs, min=None, max=None, name=None)` {#fake_quant_with_min_max_args}
-
-Fake-quantize the 'inputs' tensor of type float to an 'outputs' tensor of the same type.
-
-Attributes [min; max] define the clamping range for the 'inputs' data. The op
-divides this range into 255 steps (a total of 256 values), then replaces each
-'inputs' value with the closest of the quantized step values.
-
-Quantization is called fake since the output is still in floating point.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of type `float32`.
-* <b>`min`</b>: An optional `float`. Defaults to `-6`.
-* <b>`max`</b>: An optional `float`. Defaults to `6`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
-
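-For intuition, a NumPy sketch of the clamp-and-snap arithmetic described
-above (an informal restatement assuming the default [-6, 6] range, ignoring
-the kernel's internal range nudging):
-
-```python
-import numpy as np
-
-def fake_quant_sketch(x, min_val=-6.0, max_val=6.0, steps=255):
-  scale = (max_val - min_val) / steps
-  clamped = np.clip(x, min_val, max_val)
-  # Snap each clamped value to the nearest of the 256 quantized levels.
-  return min_val + np.round((clamped - min_val) / scale) * scale
-
-print(fake_quant_sketch(np.array([-10.0, 0.02, 5.97])))
-```
-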
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.igamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.igamma.md
deleted file mode 100644
index 92b5fbe851..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.igamma.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.igamma(a, x, name=None)` {#igamma}
-
-Compute the lower regularized incomplete Gamma function `P(a, x)`.
-
-The lower regularized incomplete Gamma function is defined as:
-
-```
-P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)
-```
-where
-```
-gamma(a, x) = int_{0}^{x} t^{a-1} exp(-t) dt
-```
-is the lower incomplete Gamma function.
-
-Note, above `Q(a, x)` (`Igammac`) is the upper regularized incomplete
-Gamma function.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`x`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.decode_gif.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.decode_gif.md
deleted file mode 100644
index 45e7ab9d22..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.decode_gif.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.image.decode_gif(contents, name=None)` {#decode_gif}
-
-Decode the frame(s) of a GIF-encoded image to a uint8 tensor.
-
-GIFs with frame or transparency compression are not supported; convert
-animated GIFs from compressed to uncompressed with:
-
-    convert $src.gif -coalesce $dst.gif
-
-##### Args:
-
-
-* <b>`contents`</b>: A `Tensor` of type `string`. 0-D. The GIF-encoded image.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `uint8`.
- 4-D with shape `[num_frames, height, width, 3]`. RGB order
-
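-A minimal usage sketch (TF 1.x session API; the file name is hypothetical):
-
-```python
-import tensorflow as tf
-
-raw = tf.read_file("animation.gif")
-frames = tf.image.decode_gif(raw)  # [num_frames, height, width, 3]
-
-with tf.Session() as sess:
-  print(sess.run(frames).shape)
-```
-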
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.extract_glimpse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.extract_glimpse.md
deleted file mode 100644
index 83482124e7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.extract_glimpse.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None)` {#extract_glimpse}
-
-Extracts a glimpse from the input tensor.
-
-Returns a set of windows called glimpses extracted at location
-`offsets` from the input tensor. If a window only partially
-overlaps the input, the non-overlapping areas will be filled with
-random noise.
-
-The result is a 4-D tensor of shape `[batch_size, glimpse_height,
-glimpse_width, channels]`. The channels and batch dimensions are the
-same as that of the input tensor. The height and width of the output
-windows are specified in the `size` parameter.
-
-The `centered` and `normalized` arguments control how the windows are built:
-
-* If the coordinates are normalized but not centered, 0.0 and 1.0
- correspond to the minimum and maximum of each height and width
- dimension.
-* If the coordinates are both normalized and centered, they range from
- -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper
- left corner, the lower right corner is located at (1.0, 1.0) and the
- center is at (0, 0).
-* If the coordinates are not normalized they are interpreted as
- numbers of pixels.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `float32`.
- A 4-D float tensor of shape `[batch_size, height, width, channels]`.
-* <b>`size`</b>: A `Tensor` of type `int32`.
-  A 1-D tensor of 2 elements containing the size of the glimpses
-  to extract. The glimpse height must be specified first, followed
-  by the glimpse width.
-* <b>`offsets`</b>: A `Tensor` of type `float32`.
-  A 2-D float tensor of shape `[batch_size, 2]` containing
- the x, y locations of the center of each window.
-* <b>`centered`</b>: An optional `bool`. Defaults to `True`.
- indicates if the offset coordinates are centered relative to
- the image, in which case the (0, 0) offset is relative to the center
- of the input images. If false, the (0,0) offset corresponds to the
- upper left corner of the input images.
-* <b>`normalized`</b>: An optional `bool`. Defaults to `True`.
- indicates if the offset coordinates are normalized.
-* <b>`uniform_noise`</b>: An optional `bool`. Defaults to `True`.
- indicates if the noise should be generated using a
- uniform distribution or a Gaussian distribution.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
- A tensor representing the glimpses `[batch_size,
- glimpse_height, glimpse_width, channels]`.
-
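-A minimal sketch using the default centered, normalized coordinates:
-
-```python
-import tensorflow as tf
-
-images = tf.random_uniform([2, 64, 64, 3])
-# (0.0, 0.0) is the image center; (0.5, 0.5) is halfway to the lower right.
-offsets = tf.constant([[0.0, 0.0], [0.5, 0.5]])
-glimpses = tf.image.extract_glimpse(images, size=[8, 8], offsets=offsets)
-
-with tf.Session() as sess:
-  print(sess.run(glimpses).shape)  # (2, 8, 8, 3)
-```
-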
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.rgb_to_hsv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.rgb_to_hsv.md
deleted file mode 100644
index c08a086b88..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.image.rgb_to_hsv.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.image.rgb_to_hsv(images, name=None)` {#rgb_to_hsv}
-
-Converts one or more images from RGB to HSV.
-
-Outputs a tensor of the same shape as the `images` tensor, containing the HSV
-value of the pixels. The output is only well defined if the values in `images`
-are in `[0,1]`.
-
-`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and
-`output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0
-corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 1-D or higher rank. RGB data to convert. Last dimension must be size 3.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`. `images` converted to HSV.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.import_graph_def.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.import_graph_def.md
deleted file mode 100644
index 6afe7e1fc5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.import_graph_def.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None, producer_op_list=None)` {#import_graph_def}
-
-Imports the graph from `graph_def` into the current default `Graph`.
-
-This function provides a way to import a serialized TensorFlow
-[`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto)
-protocol buffer, and extract individual objects in the `GraphDef` as
-[`Tensor`](#Tensor) and [`Operation`](#Operation) objects. Once extracted,
-these objects are placed into the current default `Graph`. See
-[`Graph.as_graph_def()`](#Graph.as_graph_def) for a way to create a `GraphDef`
-proto.
-
-##### Args:
-
-
-* <b>`graph_def`</b>: A `GraphDef` proto containing operations to be imported into
- the default graph.
-* <b>`input_map`</b>: A dictionary mapping input names (as strings) in `graph_def`
- to `Tensor` objects. The values of the named input tensors in the
- imported graph will be re-mapped to the respective `Tensor` values.
-* <b>`return_elements`</b>: A list of strings containing operation names in
- `graph_def` that will be returned as `Operation` objects; and/or
- tensor names in `graph_def` that will be returned as `Tensor` objects.
-* <b>`name`</b>: (Optional.) A prefix that will be prepended to the names in
- `graph_def`. Defaults to `"import"`.
-* <b>`op_dict`</b>: (Optional.) A dictionary mapping op type names to `OpDef` protos.
- Must contain an `OpDef` proto for each op type named in `graph_def`.
- If omitted, uses the `OpDef` protos registered in the global registry.
-* <b>`producer_op_list`</b>: (Optional.) An `OpList` proto with the (possibly stripped)
- list of `OpDef`s used by the producer of the graph. If provided, attrs
- for ops in `graph_def` that are not in `op_dict` that have their default
- value according to `producer_op_list` will be removed. This will allow
- some more `GraphDef`s produced by later binaries to be accepted by
- earlier binaries.
-
-##### Returns:
-
- A list of `Operation` and/or `Tensor` objects from the imported graph,
- corresponding to the names in `return_elements`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `graph_def` is not a `GraphDef` proto,
- `input_map` is not a dictionary mapping strings to `Tensor` objects,
- or `return_elements` is not a list of strings.
-* <b>`ValueError`</b>: If `input_map`, or `return_elements` contains names that
- do not appear in `graph_def`, or `graph_def` is not well-formed (e.g.
- it refers to an unknown tensor).
-
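-A minimal sketch that serializes one graph and imports it into another,
-remapping an input and retrieving one tensor:
-
-```python
-import tensorflow as tf
-
-g1 = tf.Graph()
-with g1.as_default():
-  a = tf.constant(3.0, name="a")
-  b = tf.multiply(a, 2.0, name="b")
-
-g2 = tf.Graph()
-with g2.as_default():
-  new_a = tf.constant(10.0)
-  # Feed `new_a` in place of the imported "a" and pull out the imported "b".
-  b_imported, = tf.import_graph_def(
-      g1.as_graph_def(), input_map={"a:0": new_a}, return_elements=["b:0"])
-  with tf.Session() as sess:
-    print(sess.run(b_imported))  # 20.0
-```
-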
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.initialize_all_tables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.initialize_all_tables.md
deleted file mode 100644
index 4309820b84..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.initialize_all_tables.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.initialize_all_tables(*args, **kwargs)` {#initialize_all_tables}
-
-Returns an Op that initializes all tables of the default graph. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Use `tf.tables_initializer` instead.
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the initialization op.
-
-##### Returns:
-
-  An Op that initializes all tables. Note that if there are
-  no tables, the returned Op is a NoOp.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.load_op_library.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.load_op_library.md
deleted file mode 100644
index 4d6c027482..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.load_op_library.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.load_op_library(library_filename)` {#load_op_library}
-
-Loads a TensorFlow plugin, containing custom ops and kernels.
-
-Pass "library_filename" to a platform-specific mechanism for dynamically
-loading a library. The rules for determining the exact location of the
-library are platform-specific and are not documented here. When the
-library is loaded, ops and kernels registered in the library via the
-`REGISTER_*` macros are made available in the TensorFlow process. Note
-that ops with the same name as an existing op are rejected and not
-registered with the process.
-
-##### Args:
-
-
-* <b>`library_filename`</b>: Path to the plugin.
- Relative or absolute filesystem path to a dynamic library file.
-
-##### Returns:
-
- A python module containing the Python wrappers for Ops defined in
- the plugin.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: when unable to load the library or get the python wrappers.
-
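-A minimal sketch, assuming a compiled plugin `zero_out.so` in the style of
-the custom-op tutorial (both the file and the `zero_out` op are hypothetical
-here):
-
-```python
-import tensorflow as tf
-
-zero_out_module = tf.load_op_library("./zero_out.so")
-# Ops registered in the library appear as attributes of the returned module.
-result = zero_out_module.zero_out([[1, 2], [3, 4]])
-```
-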
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.maximum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.maximum.md
deleted file mode 100644
index aec816dcba..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.maximum.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.maximum(x, y, name=None)` {#maximum}
-
-Returns the max of x and y (i.e. x > y ? x : y) element-wise.
-
-*NOTE*: `Maximum` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.min_max_variable_partitioner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.min_max_variable_partitioner.md
deleted file mode 100644
index c301044187..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.min_max_variable_partitioner.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.min_max_variable_partitioner(max_partitions=1, axis=0, min_slice_size=262144, bytes_per_string_element=16)` {#min_max_variable_partitioner}
-
-Partitioner to allocate minimum size per slice.
-
-Returns a partitioner that partitions the variable of given shape and dtype
-such that each partition contains at least `min_slice_size` bytes of the
-variable. The maximum number of such partitions (upper bound) is given by
-`max_partitions`.
-
-##### Args:
-
-
-* <b>`max_partitions`</b>: Upper bound on the number of partitions. Defaults to 1.
-* <b>`axis`</b>: Axis along which to partition the variable. Defaults to 0.
-* <b>`min_slice_size`</b>: Minimum size of the variable slice per partition. Defaults
- to 256K.
-* <b>`bytes_per_string_element`</b>: If the `Variable` is of type string, this provides
- an estimate of how large each scalar in the `Variable` is.
-
-##### Returns:
-
- A partition function usable as the `partitioner` argument to
- `variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
-
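-A minimal usage sketch with `tf.variable_scope`:
-
-```python
-import tensorflow as tf
-
-partitioner = tf.min_max_variable_partitioner(
-    max_partitions=4, min_slice_size=64 << 10)  # at least 64KB per slice
-with tf.variable_scope("embeddings", partitioner=partitioner):
-  weights = tf.get_variable("weights", shape=[100000, 64], dtype=tf.float32)
-```
-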
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.moving_average_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.moving_average_variables.md
deleted file mode 100644
index 467a666e2c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.moving_average_variables.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.moving_average_variables()` {#moving_average_variables}
-
-Returns all variables that maintain their moving averages.
-
-If an `ExponentialMovingAverage` object is created and the `apply()`
-method is called on a list of variables, these variables will
-be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection.
-This convenience function returns the contents of that collection.
-
-##### Returns:
-
- A list of Variable objects.
-
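-A minimal sketch showing how variables enter that collection:
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(0.0, name="v")
-ema = tf.train.ExponentialMovingAverage(decay=0.99)
-maintain_op = ema.apply([v])  # adds `v` to MOVING_AVERAGE_VARIABLES
-
-print(tf.moving_average_variables())  # [<tf.Variable 'v:0' ...>]
-```
-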
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.multiply.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.multiply.md
deleted file mode 100644
index f1647ee45b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.multiply.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.multiply(x, y, name=None)` {#multiply}
-
-Returns x * y element-wise.
-
-*NOTE*: `tf.multiply` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.batch_norm_with_global_normalization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.batch_norm_with_global_normalization.md
deleted file mode 100644
index a95bd71a04..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.batch_norm_with_global_normalization.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.nn.batch_norm_with_global_normalization(t, m, v, beta, gamma, variance_epsilon, scale_after_normalization, name=None)` {#batch_norm_with_global_normalization}
-
-Batch normalization.
-
-This op is deprecated. See `tf.nn.batch_normalization`.
-
-##### Args:
-
-
-* <b>`t`</b>: A 4D input Tensor.
-* <b>`m`</b>: A 1D mean Tensor with size matching the last dimension of t.
- This is the first output from tf.nn.moments,
- or a saved moving average thereof.
-* <b>`v`</b>: A 1D variance Tensor with size matching the last dimension of t.
- This is the second output from tf.nn.moments,
- or a saved moving average thereof.
-* <b>`beta`</b>: A 1D beta Tensor with size matching the last dimension of t.
- An offset to be added to the normalized tensor.
-* <b>`gamma`</b>: A 1D gamma Tensor with size matching the last dimension of t.
- If "scale_after_normalization" is true, this tensor will be multiplied
- with the normalized tensor.
-* <b>`variance_epsilon`</b>: A small float number to avoid dividing by 0.
-* <b>`scale_after_normalization`</b>: A bool indicating whether the resulted tensor
- needs to be multiplied with gamma.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A batch-normalized `t`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.ctc_greedy_decoder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.ctc_greedy_decoder.md
deleted file mode 100644
index 1435f8cb5e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.ctc_greedy_decoder.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.nn.ctc_greedy_decoder(inputs, sequence_length, merge_repeated=True)` {#ctc_greedy_decoder}
-
-Performs greedy decoding on the logits given in input (best path).
-
-Note: Regardless of the value of merge_repeated, if the maximum index of a
-given time and batch corresponds to the blank index `(num_classes - 1)`, no
-new element is emitted.
-
-If `merge_repeated` is `True`, merge repeated classes in output.
-This means that if consecutive logits' maximum indices are the same,
-only the first of these is emitted. The sequence `A B B * B * B` (where '*'
-is the blank label) becomes
-
- * `A B B B` if `merge_repeated=True`.
- * `A B B B B` if `merge_repeated=False`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: 3-D `float` `Tensor` sized
- `[max_time x batch_size x num_classes]`. The logits.
-* <b>`sequence_length`</b>: 1-D `int32` vector containing sequence lengths,
- having size `[batch_size]`.
-* <b>`merge_repeated`</b>: Boolean. Default: True.
-
-##### Returns:
-
- A tuple `(decoded, log_probabilities)` where
-
-* <b>`decoded`</b>: A single-element list. `decoded[0]`
-  is a `SparseTensor` containing the decoded outputs s.t.:
- `decoded.indices`: Indices matrix `(total_decoded_outputs x 2)`.
- The rows store: `[batch, time]`.
- `decoded.values`: Values vector, size `(total_decoded_outputs)`.
- The vector stores the decoded classes.
- `decoded.shape`: Shape vector, size `(2)`.
- The shape values are: `[batch_size, max_decoded_length]`
-* <b>`log_probabilities`</b>: A `float` matrix `(batch_size x 1)` containing sequence
- log-probabilities.
-
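-A minimal sketch with random logits (shapes only; the logits carry no
-meaningful alignment):
-
-```python
-import tensorflow as tf
-
-# [max_time, batch_size, num_classes]; the last class is the blank label.
-logits = tf.random_normal([50, 2, 11])
-sequence_length = tf.constant([50, 40])
-decoded, log_probabilities = tf.nn.ctc_greedy_decoder(logits, sequence_length)
-
-with tf.Session() as sess:
-  sparse = sess.run(decoded[0])
-  print(sparse.dense_shape)  # [batch_size, max_decoded_length]
-```
-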
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.depthwise_conv2d_native_backprop_filter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.depthwise_conv2d_native_backprop_filter.md
deleted file mode 100644
index 5096756d7c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.depthwise_conv2d_native_backprop_filter.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.nn.depthwise_conv2d_native_backprop_filter(input, filter_sizes, out_backprop, strides, padding, name=None)` {#depthwise_conv2d_native_backprop_filter}
-
-Computes the gradients of depthwise convolution with respect to the filter.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 4-D with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`filter_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the tensor shape of `filter`,
- where `filter` is a 4-D
- `[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `input`.
- 4-D with shape `[batch, out_height, out_width, out_channels]`.
- Gradients w.r.t. the output of the convolution.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- of the convolution.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. 4-D with shape
- `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t.
- the `filter` input of the convolution.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.elu.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.elu.md
deleted file mode 100644
index 8ffeeca65c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.elu.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.nn.elu(features, name=None)` {#elu}
-
-Computes exponential linear: `exp(features) - 1` if `features < 0`, `features` otherwise.
-
-See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
-](http://arxiv.org/abs/1511.07289)
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.separable_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.separable_conv2d.md
deleted file mode 100644
index 24e2688831..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.separable_conv2d.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, rate=None, name=None)` {#separable_conv2d}
-
-2-D convolution with separable filters.
-
-Performs a depthwise convolution that acts separately on channels followed by
-a pointwise convolution that mixes channels. Note that this is separability
-between dimensions `[1, 2]` and `3`, not spatial separability between
-dimensions `1` and `2`.
-
-In detail,
-
-    output[b, i, j, k] = sum_{di, dj, q, r}
- input[b, strides[1] * i + di, strides[2] * j + dj, q] *
- depthwise_filter[di, dj, q, r] *
- pointwise_filter[0, 0, q * channel_multiplier + r, k]
-
-`strides` controls the strides for the depthwise convolution only, since
-the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have
-`strides[0] = strides[3] = 1`. For the most common case of the same
-horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-If any value in `rate` is greater than 1, we perform atrous depthwise
-convolution, in which case all values in the `strides` tensor must be equal
-to 1.
-
-##### Args:
-
-
-* <b>`input`</b>: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`depthwise_filter`</b>: 4-D `Tensor` with shape
- `[filter_height, filter_width, in_channels, channel_multiplier]`.
- Contains `in_channels` convolutional filters of depth 1.
-* <b>`pointwise_filter`</b>: 4-D `Tensor` with shape
- `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise
- filter to mix channels after `depthwise_filter` has convolved spatially.
-* <b>`strides`</b>: 1-D of size 4. The strides for the depthwise convolution for
- each dimension of `input`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment
- here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`rate`</b>: 1-D of size 2. The dilation rate in which we sample input values
- across the `height` and `width` dimensions in atrous convolution. If it is
- greater than 1, then all values of strides must be 1.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If channel_multiplier * in_channels > out_channels,
- which means that the separable convolution is overparameterized.
-
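-A minimal shape-level sketch (filter values are untrained placeholders):
-
-```python
-import tensorflow as tf
-
-images = tf.random_uniform([8, 32, 32, 3])
-depthwise = tf.get_variable("dw", [3, 3, 3, 2])       # channel_multiplier = 2
-pointwise = tf.get_variable("pw", [1, 1, 3 * 2, 16])  # mixes 6 channels to 16
-out = tf.nn.separable_conv2d(images, depthwise, pointwise,
-                             strides=[1, 1, 1, 1], padding="SAME")
-print(out.get_shape())  # (8, 32, 32, 16)
-```
-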
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.softmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.softmax.md
deleted file mode 100644
index 65da6889d9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.softmax.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.nn.softmax(logits, dim=-1, name=None)` {#softmax}
-
-Computes softmax activations.
-
-For each batch `i` and class `j` we have
-
- softmax = exp(logits) / reduce_sum(exp(logits), dim)
-
-##### Args:
-
-
-* <b>`logits`</b>: A non-empty `Tensor`. Must be one of the following types: `half`,
- `float32`, `float64`.
-* <b>`dim`</b>: The dimension softmax would be performed on. The default is -1 which
- indicates the last dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: if `logits` is empty or `dim` is beyond the last
- dimension of `logits`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.with_space_to_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.with_space_to_batch.md
deleted file mode 100644
index ced972a78d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.nn.with_space_to_batch.md
+++ /dev/null
@@ -1,133 +0,0 @@
-### `tf.nn.with_space_to_batch(input, dilation_rate, padding, op, filter_shape=None, spatial_dims=None)` {#with_space_to_batch}
-
-Performs `op` on the space-to-batch representation of `input`.
-
-This has the effect of transforming sliding window operations into the
-corresponding "atrous" operation in which the input is sampled at the
-specified `dilation_rate`.
-
-In the special case that `dilation_rate` is uniformly 1, this simply returns:
-
- op(input, num_spatial_dims, padding)
-
-Otherwise, it returns:
-
- batch_to_space_nd(
- op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
- num_spatial_dims,
-       "VALID"),
-    adjusted_dilation_rate,
-    adjusted_crops)
-
-where:
-
- adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
- adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]
-
-defined as follows:
-
-We first define two int64 tensors `paddings` and `crops` of shape
-`[num_spatial_dims, 2]` based on the value of `padding` and the spatial
-dimensions of the `input`:
-
-If `padding = "VALID"`, then:
-
- paddings, crops = required_space_to_batch_paddings(
- input_shape[spatial_dims],
- dilation_rate)
-
-If `padding = "SAME"`, then:
-
- dilated_filter_shape =
- filter_shape + (filter_shape - 1) * (dilation_rate - 1)
-
- paddings, crops = required_space_to_batch_paddings(
- input_shape[spatial_dims],
- dilation_rate,
- [(dilated_filter_shape - 1) // 2,
- dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])
-
-Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial
-dimensions are contiguous starting at the second dimension, but the specified
-`spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and
-`crops` in order to be usable with these operations. For a given dimension,
-if the block size is 1, and both the starting and ending padding and crop
-amounts are 0, then space_to_batch_nd effectively leaves that dimension alone,
-which is what is needed for dimensions not part of `spatial_dims`.
-Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case
-efficiently for any number of leading and trailing dimensions.
-
-For 0 <= i < len(spatial_dims), we assign:
-
- adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
- adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
- adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]
-
-All unassigned values of `adjusted_dilation_rate` default to 1, while all
-unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.
-
-Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID"
-padding is equivalent to specifying `padding = "SAME"` with a filter_shape of
-`[1]*N`.
-
-Advanced usage. Note the following optimization: A sequence of
-`with_space_to_batch` operations with identical (not uniformly 1)
-`dilation_rate` parameters and "VALID" padding
-
- net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
- ...
- net = with_space_to_batch(net, dilation_rate, "VALID", op_k)
-
-can be combined into a single `with_space_to_batch` operation as follows:
-
- def combined_op(converted_input, num_spatial_dims, _):
- result = op_1(converted_input, num_spatial_dims, "VALID")
- ...
- result = op_k(result, num_spatial_dims, "VALID")
-
- net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
-
-This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and
-`batch_to_space_nd`.
-
-Similarly, a sequence of `with_space_to_batch` operations with identical (not
-uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter
-dimensions
-
- net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
- ...
- net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)
-
-can be combined into a single `with_space_to_batch` operation as follows:
-
- def combined_op(converted_input, num_spatial_dims, _):
- result = op_1(converted_input, num_spatial_dims, "SAME")
- ...
- result = op_k(result, num_spatial_dims, "SAME")
-
- net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
-
-##### Args:
-
-
-* <b>`input`</b>: Tensor of rank > max(spatial_dims).
-* <b>`dilation_rate`</b>: int32 Tensor of *known* shape [num_spatial_dims].
-* <b>`padding`</b>: str constant equal to "VALID" or "SAME"
-* <b>`op`</b>: Function that maps (input, num_spatial_dims, padding) -> output
-* <b>`filter_shape`</b>: If padding = "SAME", specifies the shape of the convolution
- kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims].
- If padding = "VALID", filter_shape is ignored and need not be specified.
-* <b>`spatial_dims`</b>: Monotonically increasing sequence of `num_spatial_dims`
- integers (which are >= 1) specifying the spatial dimensions of `input`
- and output. Defaults to: `range(1, num_spatial_dims+1)`.
-
-##### Returns:
-
- The output Tensor as described above.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `padding` is invalid or the arguments are incompatible.
-* <b>`ValueError`</b>: if `spatial_dims` are invalid.
-
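-A minimal sketch that turns a 3x3 average pool into its atrous counterpart
-(`tf.nn.pool` is used here only as a convenient sliding-window `op`):
-
-```python
-import tensorflow as tf
-
-def pool_op(converted_input, num_spatial_dims, padding):
-  return tf.nn.pool(converted_input, window_shape=[3, 3],
-                    pooling_type="AVG", padding=padding)
-
-images = tf.random_uniform([1, 32, 32, 3])
-# Samples the 3x3 window with a gap of one pixel between taps.
-atrous_pool = tf.nn.with_space_to_batch(
-    images, dilation_rate=[2, 2], padding="SAME", op=pool_op,
-    filter_shape=[3, 3])
-```
-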
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.one_hot.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.one_hot.md
deleted file mode 100644
index 7fe09d8cd0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.one_hot.md
+++ /dev/null
@@ -1,131 +0,0 @@
-### `tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)` {#one_hot}
-
-Returns a one-hot tensor.
-
-The locations represented by indices in `indices` take value `on_value`,
-while all other locations take value `off_value`.
-
-`on_value` and `off_value` must have matching data types. If `dtype` is also
-provided, they must be the same data type as specified by `dtype`.
-
-If `on_value` is not provided, it will default to the value `1` with type
-`dtype`.
-
-If `off_value` is not provided, it will default to the value `0` with type
-`dtype`.
-
-If the input `indices` is rank `N`, the output will have rank `N+1`. The
-new axis is created at dimension `axis` (default: the new axis is appended
-at the end).
-
-If `indices` is a scalar, the output shape will be a vector of length `depth`.
-
-If `indices` is a vector of length `features`, the output shape will be:
-
-```
- features x depth if axis == -1
- depth x features if axis == 0
-```
-
-If `indices` is a matrix (batch) with shape `[batch, features]`, the output
-shape will be:
-
-```
- batch x features x depth if axis == -1
- batch x depth x features if axis == 1
- depth x batch x features if axis == 0
-```
-
-If `dtype` is not provided, it will attempt to assume the data type of
-`on_value` or `off_value`, if one or both are passed in. If none of
-`on_value`, `off_value`, or `dtype` are provided, `dtype` will default to the
-value `tf.float32`.
-
-Note: If a non-numeric data type output is desired (`tf.string`, `tf.bool`,
-etc.), both `on_value` and `off_value` _must_ be provided to `one_hot`.
-
-Examples
-=========
-
-Suppose that
-
-```python
- indices = [0, 2, -1, 1]
- depth = 3
- on_value = 5.0
- off_value = 0.0
- axis = -1
-```
-
-Then the output is `[4 x 3]`:
-
-```python
- output =
- [5.0 0.0 0.0] // one_hot(0)
- [0.0 0.0 5.0] // one_hot(2)
- [0.0 0.0 0.0] // one_hot(-1)
- [0.0 5.0 0.0] // one_hot(1)
-```
-
-Suppose that
-
-```python
- indices = [[0, 2], [1, -1]]
- depth = 3
- on_value = 1.0
- off_value = 0.0
- axis = -1
-```
-
-Then the output is `[2 x 2 x 3]`:
-
-```python
- output =
- [
- [1.0, 0.0, 0.0] // one_hot(0)
- [0.0, 0.0, 1.0] // one_hot(2)
- ][
- [0.0, 1.0, 0.0] // one_hot(1)
- [0.0, 0.0, 0.0] // one_hot(-1)
- ]
-```
-
-Using default values for `on_value` and `off_value`:
-
-```python
- indices = [0, 1, 2]
- depth = 3
-```
-
-The output will be
-
-```python
- output =
- [[1., 0., 0.],
- [0., 1., 0.],
- [0., 0., 1.]]
-```
-
-##### Args:
-
-
-* <b>`indices`</b>: A `Tensor` of indices.
-* <b>`depth`</b>: A scalar defining the depth of the one hot dimension.
-* <b>`on_value`</b>: A scalar defining the value to fill in output when `indices[j]
- = i`. (default: 1)
-* <b>`off_value`</b>: A scalar defining the value to fill in output when `indices[j]
- != i`. (default: 0)
-* <b>`axis`</b>: The axis to fill (default: -1, a new inner-most axis).
-* <b>`dtype`</b>: The data type of the output tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`output`</b>: The one-hot tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the dtype of either `on_value` or `off_value` doesn't match `dtype`
-* <b>`TypeError`</b>: If the dtypes of `on_value` and `off_value` don't match one another
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.op_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.op_scope.md
deleted file mode 100644
index 0aaac5e657..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.op_scope.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.op_scope(values, name, default_name=None)` {#op_scope}
-
-DEPRECATED. Same as name_scope above, just different argument order.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.parse_example.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.parse_example.md
deleted file mode 100644
index 3616124c18..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.parse_example.md
+++ /dev/null
@@ -1,197 +0,0 @@
-### `tf.parse_example(serialized, features, name=None, example_names=None)` {#parse_example}
-
-Parses `Example` protos into a `dict` of tensors.
-
-Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
-protos given in `serialized`.
-
-`example_names` may contain descriptive names for the corresponding serialized
-protos. These may be useful for debugging purposes, but they have no effect on
-the output. If not `None`, `example_names` must be the same length as
-`serialized`.
-
-This op parses serialized examples into a dictionary mapping keys to `Tensor`
-and `SparseTensor` objects. `features` is a dict from keys to `VarLenFeature`,
-`SparseFeature`, and `FixedLenFeature` objects. Each `VarLenFeature`
-and `SparseFeature` is mapped to a `SparseTensor`, and each
-`FixedLenFeature` is mapped to a `Tensor`.
-
-Each `VarLenFeature` maps to a `SparseTensor` of the specified type
-representing a ragged matrix. Its indices are `[batch, index]` where `batch`
-is the batch entry the value is from in `serialized`, and `index` is the
-value's index in the list of values associated with that feature and example.
-
-Each `SparseFeature` maps to a `SparseTensor` of the specified type
-representing a sparse matrix of shape
-`(serialized.size(), SparseFeature.size)`. Its indices are `[batch, index]`
-where `batch` is the batch entry the value is from in `serialized`, and
-`index` is the value's index is given by the values in the
-`SparseFeature.index_key` feature column.
-
-Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or
-`tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`.
-
-`FixedLenFeature` entries with a `default_value` are optional. With no default
-value, we will fail if that `Feature` is missing from any example in
-`serialized`.
-
-Examples:
-
-For example, if one expects a `tf.float32` sparse feature `ft` and three
-serialized `Example`s are provided:
-
-```
-serialized = [
- features
- { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
- features
-    { feature {} },
-  features
-    { feature { key: "ft" value { float_list { value: [3.0] } } } }
-]
-```
-
-then the output will look like:
-
-```
-{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
- values=[1.0, 2.0, 3.0],
- dense_shape=(3, 2)) }
-```
-
-Given two `Example` input protos in `serialized`:
-
-```
-[
- features {
- feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
- feature { key: "gps" value { float_list { value: [] } } }
- },
- features {
- feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
- feature { key: "dank" value { int64_list { value: [ 42 ] } } }
- feature { key: "gps" value { } }
- }
-]
-```
-
-And arguments
-
-```
-example_names: ["input0", "input1"],
-features: {
- "kw": VarLenFeature(tf.string),
- "dank": VarLenFeature(tf.int64),
- "gps": VarLenFeature(tf.float32),
-}
-```
-
-Then the output is a dictionary:
-
-```python
-{
- "kw": SparseTensor(
- indices=[[0, 0], [0, 1], [1, 0]],
- values=["knit", "big", "emmy"]
- dense_shape=[2, 2]),
- "dank": SparseTensor(
- indices=[[1, 0]],
- values=[42],
- dense_shape=[2, 1]),
- "gps": SparseTensor(
- indices=[],
- values=[],
- dense_shape=[2, 0]),
-}
-```
-
-For dense results in two serialized `Example`s:
-
-```
-[
- features {
- feature { key: "age" value { int64_list { value: [ 0 ] } } }
- feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
- },
- features {
- feature { key: "age" value { int64_list { value: [] } } }
- feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
- }
-]
-```
-
-We can use arguments:
-
-```
-example_names: ["input0", "input1"],
-features: {
- "age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
- "gender": FixedLenFeature([], dtype=tf.string),
-}
-```
-
-And the expected output is:
-
-```python
-{
- "age": [[0], [-1]],
- "gender": [["f"], ["f"]],
-}
-```
-
-Given two `Example` input protos in `serialized`:
-
-```
-[
- features {
- feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
- feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } }
- },
- features {
- feature { key: "val" value { float_list { value: [ 0.0 ] } } }
- feature { key: "ix" value { int64_list { value: [ 42 ] } } }
- }
-]
-```
-
-And arguments
-
-```
-example_names: ["input0", "input1"],
-features: {
- "sparse": SparseFeature(
- index_key="ix", value_key="val", dtype=tf.float32, size=100),
-}
-```
-
-Then the output is a dictionary:
-
-```python
-{
- "sparse": SparseTensor(
- indices=[[0, 3], [0, 20], [1, 42]],
- values=[0.5, -1.0, 0.0]
- dense_shape=[2, 100]),
-}
-```
-
-##### Args:
-
-
-* <b>`serialized`</b>: A vector (1-D Tensor) of strings, a batch of binary
- serialized `Example` protos.
-* <b>`features`</b>: A `dict` mapping feature keys to `FixedLenFeature`,
- `VarLenFeature`, and `SparseFeature` values.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`example_names`</b>: A vector (1-D Tensor) of strings (optional), the names of
- the serialized protos in the batch.
-
-##### Returns:
-
- A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any feature is invalid.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.pow.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.pow.md
deleted file mode 100644
index fbb53fc9a1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.pow.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.pow(x, y, name=None)` {#pow}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.python_io.tf_record_iterator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.python_io.tf_record_iterator.md
deleted file mode 100644
index 92550fe57a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.python_io.tf_record_iterator.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.python_io.tf_record_iterator(path, options=None)` {#tf_record_iterator}
-
-An iterator that reads the records from a TFRecords file.
-
-##### Args:
-
-
-* <b>`path`</b>: The path to the TFRecords file.
-* <b>`options`</b>: (optional) A TFRecordOptions object.
-
-##### Yields:
-
- Strings.
-
-##### Raises:
-
-
-* <b>`IOError`</b>: If `path` cannot be opened for reading.
-
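-A minimal sketch that walks a file of serialized `tf.train.Example` protos
-(the file name is hypothetical):
-
-```python
-import tensorflow as tf
-
-for serialized in tf.python_io.tf_record_iterator("data.tfrecords"):
-  example = tf.train.Example()
-  example.ParseFromString(serialized)
-  print(example.features.feature.keys())
-```
-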
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_crop.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_crop.md
deleted file mode 100644
index d389872919..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_crop.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.random_crop(value, size, seed=None, name=None)` {#random_crop}
-
-Randomly crops a tensor to a given size.
-
-Slices a shape `size` portion out of `value` at a uniformly chosen offset.
-Requires `value.shape >= size`.
-
-If a dimension should not be cropped, pass the full size of that dimension.
-For example, RGB images can be cropped with
-`size = [crop_height, crop_width, 3]`.
-
-##### Args:
-
-
-* <b>`value`</b>: Input tensor to crop.
-* <b>`size`</b>: 1-D tensor with size the rank of `value`.
-* <b>`seed`</b>: Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A cropped tensor of the same rank as `value` and shape `size`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_normal_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_normal_initializer.md
deleted file mode 100644
index f8932bee2e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.random_normal_initializer.md
+++ /dev/null
@@ -1,25 +0,0 @@
-Initializer that generates tensors with a normal distribution.
-
-Args:
- mean: a python scalar or a scalar tensor. Mean of the random values
- to generate.
- stddev: a python scalar or a scalar tensor. Standard deviation of the
- random values to generate.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
- dtype: The data type. Only floating point types are supported.
-- - -
-
-#### `tf.random_normal_initializer.__call__(shape, dtype=None, partition_info=None)` {#random_normal_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.random_normal_initializer.__init__(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)` {#random_normal_initializer.__init__}
-
-
-
-
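-A minimal usage sketch with `tf.get_variable`:
-
-```python
-import tensorflow as tf
-
-init = tf.random_normal_initializer(mean=0.0, stddev=0.02, seed=42)
-weights = tf.get_variable("weights", shape=[784, 256], initializer=init)
-```
-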
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.rank.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.rank.md
deleted file mode 100644
index 32f62a93a0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.rank.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.rank(input, name=None)` {#rank}
-
-Returns the rank of a tensor.
-
-This operation returns an integer representing the rank of `input`.
-
-For example:
-
-```python
-# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
-# shape of tensor 't' is [2, 2, 3]
-rank(t) ==> 3
-```
-
-**Note**: The rank of a tensor is not the same as the rank of a matrix. The
-rank of a tensor is the number of indices required to uniquely select each
-element of the tensor. Rank is also known as "order", "degree", or "ndims."
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int32`.
-
-@compatibility(numpy)
-Equivalent to np.ndim
-@end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reciprocal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reciprocal.md
deleted file mode 100644
index d340aa5178..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.reciprocal.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.reciprocal(x, name=None)` {#reciprocal}
-
-Computes the reciprocal of x element-wise.
-
-I.e., \\(y = 1 / x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.self_adjoint_eig.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.self_adjoint_eig.md
deleted file mode 100644
index 08d5903aa9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.self_adjoint_eig.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.self_adjoint_eig(tensor, name=None)` {#self_adjoint_eig}
-
-Computes the eigen decomposition of a batch of self-adjoint matrices.
-
-Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices
-in `tensor` such that
-`tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1.
-
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of
-  each inner matrix is referenced.
-* <b>`name`</b>: string, optional name of the operation.
-
-##### Returns:
-
-
-* <b>`e`</b>: Eigenvalues. Shape is `[..., N]`.
-* <b>`v`</b>: Eigenvectors. Shape is `[..., N, N]`. The columns of the innermost
-  matrices contain eigenvectors of the corresponding matrices in `tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sigmoid.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sigmoid.md
deleted file mode 100644
index 8ee71e1370..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sigmoid.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.sigmoid(x, name=None)` {#sigmoid}
-
-Computes sigmoid of `x` element-wise.
-
-Specifically, `y = 1 / (1 + exp(-x))`.
-
-##### Args:
-
-
-* <b>`x`</b>: A Tensor with type `float32`, `float64`, `int32`, `complex64`, `int64`,
- or `qint32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A Tensor with the same type as `x` if `x.dtype != qint32`
- otherwise the return type is `quint8`.
-
-@compatibility(numpy)
-Equivalent to scipy.special.expit
-@end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.slice.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.slice.md
deleted file mode 100644
index aaaf6208e3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.slice.md
+++ /dev/null
@@ -1,47 +0,0 @@
-### `tf.slice(input_, begin, size, name=None)` {#slice}
-
-Extracts a slice from a tensor.
-
-This operation extracts a slice of size `size` from a tensor `input` starting
-at the location specified by `begin`. The slice `size` is represented as a
-tensor shape, where `size[i]` is the number of elements of the 'i'th dimension
-of `input` that you want to slice. The starting location (`begin`) for the
-slice is represented as an offset in each dimension of `input`. In other
-words, `begin[i]` is the offset into the 'i'th dimension of `input` that you
-want to slice from.
-
-`begin` is zero-based; `size` is one-based. If `size[i]` is -1,
-all remaining elements in dimension i are included in the
-slice. In other words, this is equivalent to setting:
-
-`size[i] = input.dim_size(i) - begin[i]`
-
-This operation requires that:
-
-`0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]`
-
-For example:
-
-```python
-# 'input' is [[[1, 1, 1], [2, 2, 2]],
-# [[3, 3, 3], [4, 4, 4]],
-# [[5, 5, 5], [6, 6, 6]]]
-tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
-tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
- [4, 4, 4]]]
-tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
- [[5, 5, 5]]]
-```
-
-##### Args:
-
-
-* <b>`input_`</b>: A `Tensor`.
-* <b>`begin`</b>: An `int32` or `int64` `Tensor`.
-* <b>`size`</b>: An `int32` or `int64` `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.space_to_depth.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.space_to_depth.md
deleted file mode 100644
index afffa0d148..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.space_to_depth.md
+++ /dev/null
@@ -1,87 +0,0 @@
-### `tf.space_to_depth(input, block_size, name=None)` {#space_to_depth}
-
-SpaceToDepth for tensors of type T.
-
-Rearranges blocks of spatial data into depth. More specifically,
-this op outputs a copy of the input tensor where values from the `height`
-and `width` dimensions are moved to the `depth` dimension.
-The attr `block_size` indicates the input block size and how the data is moved.
-
-  * Non-overlapping blocks of size `block_size x block_size` are rearranged
- into depth at each location.
- * The depth of the output tensor is `input_depth * block_size * block_size`.
- * The input tensor's height and width must be divisible by block_size.
-
-That is, assuming the input is in the shape:
-`[batch, height, width, depth]`,
-the shape of the output will be:
-`[batch, height/block_size, width/block_size, depth*block_size*block_size]`
-
-This operation requires that the input tensor be of rank 4, and that
-`block_size` be >=1 and a divisor of both the input `height` and `width`.
-
-This operation is useful for resizing the activations between convolutions
-(but keeping all data), e.g. instead of pooling. It is also useful for training
-purely convolutional models.
-
-For example, given this input of shape `[1, 2, 2, 1]`, and block_size of 2:
-
-```prettyprint
-x = [[[[1], [2]],
- [[3], [4]]]]
-```
-
-This operation will output a tensor of shape `[1, 1, 1, 4]`:
-
-```prettyprint
-[[[[1, 2, 3, 4]]]]
-```
-
-Here, the input has a batch of 1 and each batch element has shape `[2, 2, 1]`;
-the corresponding output will have a single element (i.e. width and height are
-both 1) and a depth of 4 channels (1 * block_size * block_size). The output
-element shape is `[1, 1, 4]`.
-
-For an input tensor with larger depth, e.g. of shape `[1, 2, 2, 3]`:
-
-```prettyprint
-x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
-```
-
-This operation, for a block_size of 2, will return the following tensor of
-shape `[1, 1, 1, 12]`:
-
-```prettyprint
-[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
-```
-
-Similarly, for the following input of shape `[1, 4, 4, 1]`, and a block size of 2:
-
-```prettyprint
-x = [[[[1], [2], [5], [6]],
- [[3], [4], [7], [8]],
- [[9], [10], [13], [14]],
- [[11], [12], [15], [16]]]]
-```
-
-the operator will return the following tensor of shape `[1, 2, 2, 4]`:
-
-```prettyprint
-[[[[1, 2, 3, 4],
-   [5, 6, 7, 8]],
-  [[9, 10, 11, 12],
-   [13, 14, 15, 16]]]]
-```
-
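-A minimal runnable sketch of the first example above (a hypothetical snippet,
-not part of the original docs):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[[[1], [2]],
-                  [[3], [4]]]])          # shape [1, 2, 2, 1]
-y = tf.space_to_depth(x, block_size=2)   # shape [1, 1, 1, 4]
-with tf.Session() as sess:
-    print(sess.run(y))  # [[[[1 2 3 4]]]]
-```
-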
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`block_size`</b>: An `int` that is `>= 2`. The size of the spatial block.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reduce_sum_sparse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reduce_sum_sparse.md
deleted file mode 100644
index 96a53cc87a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reduce_sum_sparse.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.sparse_reduce_sum_sparse(sp_input, axis=None, keep_dims=False, reduction_axes=None)` {#sparse_reduce_sum_sparse}
-
-Computes the sum of elements across dimensions of a SparseTensor.
-
-This Op takes a SparseTensor and is the sparse counterpart to
-`tf.reduce_sum()`. In contrast to SparseReduceSum, this Op returns a
-SparseTensor.
-
-Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless
-`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in
-`reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained
-with length 1.
-
-If `reduction_axes` has no entries, all dimensions are reduced, and a tensor
-with a single element is returned. Additionally, the axes can be negative,
-in which case they are interpreted according to Python's negative-indexing rules.
-
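-A minimal sketch of typical usage (hypothetical values, not part of the
-original docs):
-
-```python
-import tensorflow as tf
-
-# A [2, 3] SparseTensor with ones at (0, 0), (0, 2) and (1, 1).
-sp = tf.SparseTensor(indices=[[0, 0], [0, 2], [1, 1]],
-                     values=[1, 1, 1], dense_shape=[2, 3])
-# Sum over the columns; the result is still a SparseTensor, of shape [2].
-reduced = tf.sparse_reduce_sum_sparse(sp, axis=1)
-with tf.Session() as sess:
-    print(sess.run(reduced.values))  # [2 1]
-```
-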
-##### Args:
-
-
-* <b>`sp_input`</b>: The SparseTensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce; list or scalar. If `None` (the
- default), reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retain reduced dimensions with length 1.
-* <b>`reduction_axes`</b>: Deprecated name for `axis`.
-
-##### Returns:
-
- The reduced SparseTensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reset_shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reset_shape.md
deleted file mode 100644
index 363b4cc9e3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_reset_shape.md
+++ /dev/null
@@ -1,60 +0,0 @@
-### `tf.sparse_reset_shape(sp_input, new_shape=None)` {#sparse_reset_shape}
-
-Resets the shape of a `SparseTensor` with indices and values unchanged.
-
-If `new_shape` is None, returns a copy of `sp_input` with its shape reset
-to the tight bounding box of `sp_input`.
-
-If `new_shape` is provided, then it must be larger or equal in all dimensions
-compared to the shape of `sp_input`. When this condition is met, the returned
-SparseTensor will have its shape reset to `new_shape` and its indices and
-values unchanged from those of `sp_input`.
-
-For example:
-
- Consider a `sp_input` with shape [2, 3, 5]:
-
- [0, 0, 1]: a
- [0, 1, 0]: b
- [0, 2, 2]: c
- [1, 0, 3]: d
-
- - It is an error to set `new_shape` as [3, 7] since this represents a
- rank-2 tensor while `sp_input` is rank-3. This is either a ValueError
- during graph construction (if both shapes are known) or an OpError during
- run time.
-
- - Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or
- equal in every dimension compared to the original shape [2, 3, 5].
-
- - On the other hand, setting `new_shape` as [2, 3, 4] is also an error: The
- third dimension is smaller than the original shape [2, 3, 5] (and an
- `InvalidArgumentError` will be raised).
-
- - If `new_shape` is None, the returned SparseTensor will have a shape
- [2, 3, 4], which is the tight bounding box of `sp_input`.
-
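-A minimal sketch of the cases above (hypothetical values, not part of the
-original docs):
-
-```python
-import tensorflow as tf
-
-sp = tf.SparseTensor(indices=[[0, 0, 1], [0, 1, 0], [0, 2, 2], [1, 0, 3]],
-                     values=[b'a', b'b', b'c', b'd'], dense_shape=[2, 3, 5])
-tight = tf.sparse_reset_shape(sp)             # dense_shape becomes [2, 3, 4]
-grown = tf.sparse_reset_shape(sp, [2, 3, 6])  # dense_shape becomes [2, 3, 6]
-```
-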
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`new_shape`</b>: None or a vector representing the new shape for the returned
- `SparseTensor`.
-
-##### Returns:
-
- A `SparseTensor` with indices and values unchanged from `sp_input`. Its
- shape is `new_shape` if that is set. Otherwise it is the tight bounding box
- of `sp_input`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-* <b>`ValueError`</b>: If `new_shape` represents a tensor with a different rank from
- that of `sp_input` (if shapes are known when graph is constructed).
-* <b>`OpError`</b>:
- - If `new_shape` has dimension sizes that are too small.
- - If shapes are not known during graph construction time, and during run
- time it is found out that the ranks do not match.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_mean.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_mean.md
deleted file mode 100644
index af7affaa9f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_mean.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.sparse_segment_mean(data, indices, segment_ids, name=None)` {#sparse_segment_mean}
-
-Computes the mean along sparse segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first
-dimension, selecting a subset of dimension 0, specified by `indices`.
-
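-A minimal sketch of typical usage (hypothetical values, not part of the
-original docs):
-
-```python
-import tensorflow as tf
-
-c = tf.constant([[1.0, 2.0, 3.0, 4.0],
-                 [-1.0, -2.0, -3.0, -4.0]])
-# Average rows 0 and 1 into a single segment 0.
-m = tf.sparse_segment_mean(c, tf.constant([0, 1]), tf.constant([0, 0]))
-with tf.Session() as sess:
-    print(sess.run(m))  # [[0. 0. 0. 0.]]
-```
-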
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor. Has same rank as `segment_ids`.
-* <b>`segment_ids`</b>: A `Tensor` of type `int32`.
- A 1-D tensor. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has the same shape as `data`, except for dimension 0, which
- has size `k`, the number of segments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_sum.md
deleted file mode 100644
index e48ae891c3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_segment_sum.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.sparse_segment_sum(data, indices, segment_ids, name=None)` {#sparse_segment_sum}
-
-Computes the sum along sparse segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first
-dimension, selecting a subset of dimension 0, specified by `indices`.
-
-For example:
-
-```prettyprint
-c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
-
-# Select two rows, one segment.
-tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
- ==> [[0 0 0 0]]
-
-# Select two rows, two segments.
-tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
- ==> [[ 1 2 3 4]
- [-1 -2 -3 -4]]
-
-# Select all rows, two segments.
-tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
- ==> [[0 0 0 0]
- [5 6 7 8]]
-
-# Which is equivalent to:
-tf.segment_sum(c, tf.constant([0, 0, 1]))
-```
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor. Has same rank as `segment_ids`.
-* <b>`segment_ids`</b>: A `Tensor` of type `int32`.
- A 1-D tensor. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has the same shape as `data`, except for dimension 0, which
- has size `k`, the number of segments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_transpose.md
deleted file mode 100644
index fa4176a764..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.sparse_transpose.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.sparse_transpose(sp_input, perm=None, name=None)` {#sparse_transpose}
-
-Transposes a `SparseTensor`.
-
-The returned tensor's dimension i will correspond to the input dimension
-`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is
-the rank of the input tensor. Hence by default, this operation performs a
-regular matrix transpose on 2-D input Tensors.
-
-For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:
-
- [0, 3]: b
- [0, 1]: a
- [3, 1]: d
- [2, 0]: c
-
-then the output will be a `SparseTensor` of shape `[5, 4]` and
-`indices` / `values`:
-
- [0, 2]: c
- [1, 0]: a
- [1, 3]: d
- [3, 0]: b
-
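-A minimal sketch of the example above (hypothetical values, not part of the
-original docs):
-
-```python
-import tensorflow as tf
-
-sp = tf.SparseTensor(indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
-                     values=[b'a', b'b', b'c', b'd'], dense_shape=[4, 5])
-# With perm=None this reverses the dimensions: shape becomes [5, 4].
-transposed = tf.sparse_transpose(sp)
-```
-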
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`perm`</b>: A permutation of the dimensions of `sp_input`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A transposed `SparseTensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.string_join.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.string_join.md
deleted file mode 100644
index b81537a70c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.string_join.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.string_join(inputs, separator=None, name=None)` {#string_join}
-
-Joins the strings in the given list of string tensors into one tensor,
-using the given separator (the default is an empty separator).
-
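-A minimal sketch of typical usage (hypothetical values, not part of the
-original docs):
-
-```python
-import tensorflow as tf
-
-joined = tf.string_join(["hello", "world"], separator=" ")
-with tf.Session() as sess:
-    print(sess.run(joined))  # b'hello world'
-```
-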
-##### Args:
-
-
-* <b>`inputs`</b>: A list of at least 1 `Tensor` objects of type `string`.
- A list of string tensors. The tensors must all have the same shape,
- or be scalars. Scalars may be mixed in; these will be broadcast to the shape
- of non-scalar inputs.
-* <b>`separator`</b>: An optional `string`. Defaults to `""`.
- string, an optional join separator.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.TaggedRunMetadata.RegisterExtension.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.TaggedRunMetadata.RegisterExtension.md
deleted file mode 100644
index f2d0c042d7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.TaggedRunMetadata.RegisterExtension.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.get_summary_description.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.get_summary_description.md
deleted file mode 100644
index 2e0189dfa8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.get_summary_description.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.summary.get_summary_description(node_def)` {#get_summary_description}
-
-Given a TensorSummary node_def, retrieve its SummaryDescription.
-
-When a Summary op is instantiated, a SummaryDescription of associated
-metadata is stored in its NodeDef. This method retrieves the description.
-
-##### Args:
-
-
-* <b>`node_def`</b>: the node_def_pb2.NodeDef of a TensorSummary op
-
-##### Returns:
-
- a summary_pb2.SummaryDescription
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the node is not a summary op.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.scalar.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.scalar.md
deleted file mode 100644
index 3ae39cb1d9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.summary.scalar.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.summary.scalar(name, tensor, collections=None)` {#scalar}
-
-Outputs a `Summary` protocol buffer containing a single scalar value.
-
-The generated Summary has a Tensor.proto containing the input Tensor.
-
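-A minimal sketch of typical usage (the loss tensor here is hypothetical):
-
-```python
-import tensorflow as tf
-
-loss = tf.reduce_mean(tf.constant([1.0, 2.0, 3.0]))
-# 'loss' will also be the series name shown in TensorBoard.
-loss_summary = tf.summary.scalar('loss', loss)
-merged = tf.summary.merge_all()
-```
-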
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as the series name in
- TensorBoard.
-* <b>`tensor`</b>: A real numeric Tensor containing a single value.
-* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
- added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
-
-##### Returns:
-
- A scalar `Tensor` of type `string`, which contains a `Summary` protobuf.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If tensor has the wrong shape or type.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.to_float.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.to_float.md
deleted file mode 100644
index b45b49b982..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.to_float.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.to_float(x, name='ToFloat')` {#to_float}
-
-Casts a tensor to type `float32`.
-
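-A minimal sketch (hypothetical values, not part of the original docs):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1, 2, 3])  # dtype int32
-y = tf.to_float(x)          # dtype float32
-with tf.Session() as sess:
-    print(sess.run(y))  # [1. 2. 3.]
-```
-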
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `float32`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `float32`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md
deleted file mode 100644
index 04d2ec6d0b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.AdamOptimizer.md
+++ /dev/null
@@ -1,206 +0,0 @@
-Optimizer that implements the Adam algorithm.
-
-See [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)
-([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
-- - -
-
-#### `tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')` {#AdamOptimizer.__init__}
-
-Construct a new Adam optimizer.
-
-Initialization:
-
-```
-m_0 <- 0 (Initialize initial 1st moment vector)
-v_0 <- 0 (Initialize initial 2nd moment vector)
-t <- 0 (Initialize timestep)
-```
-
-The update rule for `variable` with gradient `g` uses an optimization
-described at the end of section 2 of the paper:
-
-```
-t <- t + 1
-lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
-
-m_t <- beta1 * m_{t-1} + (1 - beta1) * g
-v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g
-variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
-```
-
-The default value of 1e-8 for epsilon might not be a good default in
-general. For example, when training an Inception network on ImageNet, a
-current good choice is 1.0 or 0.1.
-
-Note that in the dense implementation of this algorithm, m_t, v_t, and the
-variable will be updated even if g is zero; in the sparse implementation,
-m_t, v_t, and the variable are not updated in iterations where g is zero.
-
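-A minimal sketch of typical usage (the toy loss is hypothetical):
-
-```python
-import tensorflow as tf
-
-w = tf.Variable(1.0)
-loss = tf.square(w - 3.0)
-train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
-with tf.Session() as sess:
-    sess.run(tf.global_variables_initializer())
-    for _ in range(100):
-        sess.run(train_op)
-```
-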
-##### Args:
-
-
-* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning rate.
-* <b>`beta1`</b>: A float value or a constant float tensor.
- The exponential decay rate for the 1st moment estimates.
-* <b>`beta2`</b>: A float value or a constant float tensor.
- The exponential decay rate for the 2nd moment estimates.
-* <b>`epsilon`</b>: A small constant for numerical stability.
-* <b>`use_locking`</b>: If True use locks for update operations.
-* <b>`name`</b>: Optional name for the operations created when applying gradients.
- Defaults to "Adam".
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdamOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Defaults to the
- name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdamOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything else than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.get_name()` {#AdamOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.get_slot(var, name)` {#AdamOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.get_slot_names()` {#AdamOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdamOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function, as in the sketch below.
-
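-A minimal sketch of the explicit two-step form, clipping gradients before
-applying them (the toy loss and clip norm are hypothetical):
-
-```python
-import tensorflow as tf
-
-w = tf.Variable(1.0)
-loss = tf.square(w - 3.0)
-opt = tf.train.AdamOptimizer(0.001)
-grads_and_vars = opt.compute_gradients(loss)
-# Clip each gradient before handing the pairs to apply_gradients().
-clipped = [(tf.clip_by_norm(g, 5.0), v)
-           for g, v in grads_and_vars if g is not None]
-train_op = opt.apply_gradients(clipped)
-```
-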
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md
deleted file mode 100644
index 25a4025fc9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Coordinator.md
+++ /dev/null
@@ -1,266 +0,0 @@
-A coordinator for threads.
-
-This class implements a simple mechanism to coordinate the termination of a
-set of threads.
-
-#### Usage:
-
-```python
-# Create a coordinator.
-coord = Coordinator()
-# Start a number of threads, passing the coordinator to each of them.
-...start thread 1...(coord, ...)
-...start thread N...(coord, ...)
-# Wait for all the threads to terminate.
-coord.join(threads)
-```
-
-Any of the threads can call `coord.request_stop()` to ask for all the threads
-to stop. To cooperate with the requests, each thread must check for
-`coord.should_stop()` on a regular basis. `coord.should_stop()` returns
-`True` as soon as `coord.request_stop()` has been called.
-
-A typical thread running with a coordinator will do something like:
-
-```python
-while not coord.should_stop():
- ...do some work...
-```
-
-#### Exception handling:
-
-A thread can report an exception to the coordinator as part of the
-`request_stop()` call. The exception will be re-raised from the
-`coord.join()` call.
-
-Thread code:
-
-```python
-try:
- while not coord.should_stop():
- ...do some work...
-except Exception as e:
- coord.request_stop(e)
-```
-
-Main code:
-
-```python
-try:
- ...
- coord = Coordinator()
- # Start a number of threads, passing the coordinator to each of them.
- ...start thread 1...(coord, ...)
- ...start thread N...(coord, ...)
- # Wait for all the threads to terminate.
- coord.join(threads)
-except Exception as e:
- ...exception that was passed to coord.request_stop()
-```
-
-To simplify the thread implementation, the Coordinator provides a
-context handler `stop_on_exception()` that automatically requests a stop if
-an exception is raised. Using the context handler the thread code above
-can be written as:
-
-```python
-with coord.stop_on_exception():
- while not coord.should_stop():
- ...do some work...
-```
-
-#### Grace period for stopping:
-
-After a thread has called `coord.request_stop()`, the other threads have a
-fixed time to stop; this is called the 'stop grace period' and defaults to 2
-minutes. If any of the threads is still alive after the grace period expires,
-`coord.join()` raises a `RuntimeError` reporting the laggards.
-
-```python
-try:
- ...
- coord = Coordinator()
- # Start a number of threads, passing the coordinator to each of them.
- ...start thread 1...(coord, ...)
- ...start thread N...(coord, ...)
- # Wait for all the threads to terminate, give them 10s grace period
- coord.join(threads, stop_grace_period_secs=10)
-except RuntimeError:
- ...one of the threads took more than 10s to stop after request_stop()
- ...was called.
-except Exception:
- ...exception that was passed to coord.request_stop()
-```
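-
-A minimal runnable sketch with real threads (the worker body is
-hypothetical):
-
-```python
-import threading
-import tensorflow as tf
-
-def worker(coord):
-    # Loop until any thread (or the main program) requests a stop.
-    while not coord.should_stop():
-        pass  # ...do some work...
-
-coord = tf.train.Coordinator()
-threads = [threading.Thread(target=worker, args=(coord,)) for _ in range(4)]
-for t in threads:
-    t.start()
-coord.request_stop()  # ask all workers to finish
-coord.join(threads)   # wait for them, re-raising any reported exception
-```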
-- - -
-
-#### `tf.train.Coordinator.__init__(clean_stop_exception_types=None)` {#Coordinator.__init__}
-
-Create a new Coordinator.
-
-##### Args:
-
-
-* <b>`clean_stop_exception_types`</b>: Optional tuple of Exception types that should
- cause a clean stop of the coordinator. If an exception of one of these
- types is reported to `request_stop(ex)` the coordinator will behave as
- if `request_stop(None)` was called. Defaults to
- `(tf.errors.OutOfRangeError,)` which is used by input queues to signal
- the end of input. When feeding training data from a Python iterator it
- is common to add `StopIteration` to this list.
-
-
-- - -
-
-#### `tf.train.Coordinator.clear_stop()` {#Coordinator.clear_stop}
-
-Clears the stop flag.
-
-After this is called, calls to `should_stop()` will return `False`.
-
-
-- - -
-
-#### `tf.train.Coordinator.join(threads=None, stop_grace_period_secs=120, ignore_live_threads=False)` {#Coordinator.join}
-
-Wait for threads to terminate.
-
-This call blocks until a set of threads have terminated. The set of threads
-is the union of the threads passed in the `threads` argument and the list
-of threads that registered with the coordinator by calling
-`Coordinator.register_thread()`.
-
-After the threads stop, if an `exc_info` was passed to `request_stop`, that
-exception is re-raised.
-
-Grace period handling: When `request_stop()` is called, threads are given
-'stop_grace_period_secs' seconds to terminate. If any of them is still
-alive after that period expires, a `RuntimeError` is raised. Note that if
-an `exc_info` was passed to `request_stop()` then it is raised instead of
-that `RuntimeError`.
-
-##### Args:
-
-
-* <b>`threads`</b>: List of `threading.Threads`. The started threads to join in
- addition to the registered threads.
-* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
- `request_stop()` has been called.
-* <b>`ignore_live_threads`</b>: If `False`, raises an error if any of the threads are
- still alive after `stop_grace_period_secs`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If any thread is still alive after `request_stop()`
- is called and the grace period expires.
-
-
-- - -
-
-#### `tf.train.Coordinator.joined` {#Coordinator.joined}
-
-
-
-
-- - -
-
-#### `tf.train.Coordinator.raise_requested_exception()` {#Coordinator.raise_requested_exception}
-
-If an exception has been passed to `request_stop`, this raises it.
-
-
-- - -
-
-#### `tf.train.Coordinator.register_thread(thread)` {#Coordinator.register_thread}
-
-Register a thread to join.
-
-##### Args:
-
-
-* <b>`thread`</b>: A Python thread to join.
-
-
-- - -
-
-#### `tf.train.Coordinator.request_stop(ex=None)` {#Coordinator.request_stop}
-
-Request that the threads stop.
-
-After this is called, calls to `should_stop()` will return `True`.
-
-Note: If an exception is being passed in, it must be in the context of
-handling the exception (i.e. `try: ... except Exception as ex: ...`) and not
-a newly created one.
-
-##### Args:
-
-
-* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
- `sys.exc_info()`. If this is the first call to `request_stop()` the
- corresponding exception is recorded and re-raised from `join()`.
-
-
-- - -
-
-#### `tf.train.Coordinator.should_stop()` {#Coordinator.should_stop}
-
-Check if stop was requested.
-
-##### Returns:
-
- True if a stop was requested.
-
-
-- - -
-
-#### `tf.train.Coordinator.stop_on_exception()` {#Coordinator.stop_on_exception}
-
-Context manager to request stop when an Exception is raised.
-
-Code that uses a coordinator must catch exceptions and pass
-them to the `request_stop()` method to stop the other threads
-managed by the coordinator.
-
-This context handler simplifies the exception handling.
-Use it as follows:
-
-```python
-with coord.stop_on_exception():
- # Any exception raised in the body of the with
- # clause is reported to the coordinator before terminating
- # the execution of the body.
- ...body...
-```
-
-This is completely equivalent to the slightly longer code:
-
-```python
-try:
- ...body...
-except Exception as ex:
- coord.request_stop(ex)
-```
-
-##### Yields:
-
- nothing.
-
-
-- - -
-
-#### `tf.train.Coordinator.wait_for_stop(timeout=None)` {#Coordinator.wait_for_stop}
-
-Wait until the Coordinator is told to stop.
-
-##### Args:
-
-
-* <b>`timeout`</b>: Float. Sleep for up to that many seconds waiting for
- should_stop() to become True.
-
-##### Returns:
-
- True if the Coordinator is told to stop, False if the timeout expired.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.QueueRunner.from_proto.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.QueueRunner.from_proto.md
deleted file mode 100644
index e7b5bc70e3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.QueueRunner.from_proto.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.train.QueueRunner.from_proto(queue_runner_def, import_scope=None)` {#QueueRunner.from_proto}
-
-Returns a `QueueRunner` object created from `queue_runner_def`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.RMSPropOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.RMSPropOptimizer.md
deleted file mode 100644
index 499b65cc84..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.RMSPropOptimizer.md
+++ /dev/null
@@ -1,30 +0,0 @@
-Optimizer that implements the RMSProp algorithm.
-
-See the [paper](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf).
-
-- - -
-
-#### `tf.train.RMSPropOptimizer.__init__(learning_rate, decay=0.9, momentum=0.0, epsilon=1e-10, use_locking=False, centered=False, name='RMSProp')` {#RMSPropOptimizer.__init__}
-
-Construct a new RMSProp optimizer.
-
-Note that in the dense implementation of this algorithm, m_t and v_t will
-be updated even if g is zero; in the sparse implementation, m_t and v_t
-are not updated in iterations where g is zero.
-
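-A minimal sketch of typical usage (the toy loss is hypothetical):
-
-```python
-import tensorflow as tf
-
-w = tf.Variable(1.0)
-loss = tf.square(w)
-train_op = tf.train.RMSPropOptimizer(learning_rate=0.001,
-                                     momentum=0.9).minimize(loss)
-```
-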
-##### Args:
-
-
-* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning rate.
-* <b>`decay`</b>: Discounting factor for the history/coming gradient
-* <b>`momentum`</b>: A scalar tensor.
-* <b>`epsilon`</b>: Small value to avoid zero denominator.
-* <b>`use_locking`</b>: If True use locks for update operation.
-* <b>`centered`</b>: If True, gradients are normalized by the estimated variance of
- the gradient; if False, by the uncentered second moment. Setting this to
- True may help with training, but is slightly more expensive in terms of
- computation and memory. Defaults to False.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "RMSProp".
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Scaffold.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Scaffold.md
deleted file mode 100644
index 8882f4710d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Scaffold.md
+++ /dev/null
@@ -1,144 +0,0 @@
-Structure to create or gather pieces commonly needed to train a model.
-
-When you build a model for training you usually need ops to initialize
-variables, a `Saver` to checkpoint them, an op to collect summaries for
-the visualizer, and so on.
-
-Various libraries built on top of the core TensorFlow library take care of
-creating some or all of these pieces and storing them in well known
-collections in the graph. The `Scaffold` class helps pick these pieces from
-the graph collections, creating and adding them to the collections if needed.
-
-If you call the scaffold constructor without any arguments, it will pick
-pieces from the collections, creating default ones if needed when
-`scaffold.finalize()` is called. You can pass arguments to the constructor to
-provide your own pieces. Pieces that you pass to the constructor are not
-added to the graph collections.
-
-The following pieces are directly accessible as attributes of the `Scaffold`
-object:
-
-* `saver`: A `tf.Saver` object taking care of saving the variables. Picked
- from and stored into the `SAVERS` collection in the graph by default.
-* `init_op`: An op to run to initialize the variables. Picked from and
- stored into the `INIT_OP` collection in the graph by default.
-* `ready_op`: An op to verify that the variables are initialized. Picked
- from and stored into the `READY_OP` collection in the graph by default.
-* `ready_for_local_init_op`: An op to verify that global state has been
- initialized and it is alright to run `local_init_op`. Picked from and
- stored into the `READY_FOR_LOCAL_INIT_OP` collection in the graph by
- default. This is needed when the initialization of local variables depends
- on the values of global variables.
-* `local_init_op`: An op to initialize the local variables. Picked
- from and stored into the `LOCAL_INIT_OP` collection in the graph by default.
-* `summary_op`: An op to run and merge the summaries in the graph. Picked
- from and stored into the `SUMMARY_OP` collection in the graph by default.
-* `global_step`: A tensor containing the global step counter. Picked
- from and stored into the `GLOBAL_STEP` collection in the graph by default.
-
-You can also pass the following additional pieces to the constructor:
-
-* `init_feed_dict`: A session feed dictionary that should be used when
- running the init op.
-* `init_fn`: A callable to run after the init op to perform additional
- initializations. The callable will be called as
- `init_fn(scaffold, session)`.
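-
-A minimal sketch of typical usage (the toy train op is hypothetical):
-
-```python
-import tensorflow as tf
-
-w = tf.Variable(0.0)
-train_op = w.assign_add(1.0)
-# With no arguments, the scaffold picks default pieces (init op, saver, ...)
-# from the graph collections when it is finalized.
-scaffold = tf.train.Scaffold()
-with tf.train.MonitoredTrainingSession(scaffold=scaffold) as sess:
-    sess.run(train_op)
-```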
-- - -
-
-#### `tf.train.Scaffold.__init__(init_op=None, init_feed_dict=None, init_fn=None, ready_op=None, ready_for_local_init_op=None, local_init_op=None, summary_op=None, saver=None)` {#Scaffold.__init__}
-
-Create a scaffold.
-
-##### Args:
-
-
-* <b>`init_op`</b>: Optional op for initializing variables.
-* <b>`init_feed_dict`</b>: Optional session feed dictionary to use when running the
- init_op.
-* <b>`init_fn`</b>: Optional function to use to initialize the model after running
- the init_op. Will be called as `init_fn(scaffold, session)`.
-* <b>`ready_op`</b>: Optional op to verify that the variables are initialized. Must
- return an empty 1D string tensor when the variables are initialized, or
- a non-empty 1D string tensor listing the names of the non-initialized
- variables.
-* <b>`ready_for_local_init_op`</b>: Optional op to verify that the global variables
- are initialized and `local_init_op` can be run. Must return an empty
- 1D string tensor when the global variables are initialized, or a
- non-empty 1D string tensor listing the names of the non-initialized
- global variables.
-* <b>`local_init_op`</b>: Optional op to initialize local variables.
-* <b>`summary_op`</b>: Optional op to gather all summaries. Must return a scalar
- string tensor containing a serialized `Summary` proto.
-* <b>`saver`</b>: Optional `tf.Saver` object to use to save and restore variables.
-
-
-- - -
-
-#### `tf.train.Scaffold.finalize()` {#Scaffold.finalize}
-
-Creates operations if needed and finalizes the graph.
-
-
-- - -
-
-#### `tf.train.Scaffold.get_or_default(arg_name, collection_key, default_constructor)` {#Scaffold.get_or_default}
-
-Get from cache or create a default operation.
-
-
-- - -
-
-#### `tf.train.Scaffold.init_feed_dict` {#Scaffold.init_feed_dict}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.init_fn` {#Scaffold.init_fn}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.init_op` {#Scaffold.init_op}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.local_init_op` {#Scaffold.local_init_op}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.ready_for_local_init_op` {#Scaffold.ready_for_local_init_op}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.ready_op` {#Scaffold.ready_op}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.saver` {#Scaffold.saver}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.summary_op` {#Scaffold.summary_op}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.SessionRunContext.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.SessionRunContext.md
deleted file mode 100644
index ce3e764795..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.SessionRunContext.md
+++ /dev/null
@@ -1,57 +0,0 @@
-Provides information about the `session.run()` call being made.
-
-Provides information about the original request to the `Session.run()`
-function. SessionRunHook objects can stop the loop by calling
-`request_stop()` of `run_context`. In the future we may use this object to
-add more information about the run without changing the Hook API.
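-
-A minimal sketch of a hook that uses the run context to stop training (the
-`loss` tensor and `StopOnNanHook` name are hypothetical):
-
-```python
-import numpy as np
-import tensorflow as tf
-
-loss = tf.constant(1.0)  # stand-in for a real training loss
-
-class StopOnNanHook(tf.train.SessionRunHook):
-    def before_run(self, run_context):
-        # Ask the session to also fetch the loss tensor.
-        return tf.train.SessionRunArgs(loss)
-
-    def after_run(self, run_context, run_values):
-        # Stop the monitored loop if the loss is NaN or infinite.
-        if not np.isfinite(run_values.results):
-            run_context.request_stop()
-```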
-- - -
-
-#### `tf.train.SessionRunContext.__init__(original_args, session)` {#SessionRunContext.__init__}
-
-Initializes SessionRunContext.
-
-
-- - -
-
-#### `tf.train.SessionRunContext.original_args` {#SessionRunContext.original_args}
-
-A `SessionRunArgs` object holding the original arguments of `run()`.
-
-If the user called `MonitoredSession.run(fetches=a, feed_dict=b)`, then this
-field is equal to SessionRunArgs(a, b).
-
-##### Returns:
-
- A `SessionRunArgs` object
-
-
-- - -
-
-#### `tf.train.SessionRunContext.request_stop()` {#SessionRunContext.request_stop}
-
-Sets the stop-requested field.
-
-Hooks can use this function to request that iterations stop.
-`MonitoredSession` checks whether this has been called or not.
-
-
-- - -
-
-#### `tf.train.SessionRunContext.session` {#SessionRunContext.session}
-
-A TensorFlow session object which will execute the `run`.
-
-
-- - -
-
-#### `tf.train.SessionRunContext.stop_requested` {#SessionRunContext.stop_requested}
-
-Returns whether a stop is requested or not.
-
-If true, `MonitoredSession` stops iterations.
-
-##### Returns:
-
- A `bool`
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.StopAtStepHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.StopAtStepHook.md
deleted file mode 100644
index e599bfc21a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.StopAtStepHook.md
+++ /dev/null
@@ -1,85 +0,0 @@
-Hook that requests stop at a specified step.
-- - -
-
-#### `tf.train.StopAtStepHook.__init__(num_steps=None, last_step=None)` {#StopAtStepHook.__init__}
-
-Create a StopAtStep Hook.
-
-This hook requests stop after either a number of steps have been
-executed or a last step has been reached. Only one of the two options can be
-specified.
-
-If `num_steps` is specified, it indicates the number of steps to execute
-after `begin()` is called. If instead `last_step` is specified, it
-indicates the last step we want to execute, as passed to the `after_run()`
-call.
-
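-A minimal sketch of typical usage (the train op is hypothetical; the hook
-needs a global step tensor in the graph, created here with the contrib
-helper available at the time of these docs):
-
-```python
-import tensorflow as tf
-
-global_step = tf.contrib.framework.get_or_create_global_step()
-train_op = tf.assign_add(global_step, 1)  # stand-in for a real train op
-hooks = [tf.train.StopAtStepHook(last_step=1000)]
-with tf.train.MonitoredTrainingSession(hooks=hooks) as sess:
-    while not sess.should_stop():
-        sess.run(train_op)
-```
-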
-##### Args:
-
-
-* <b>`num_steps`</b>: Number of steps to execute.
-* <b>`last_step`</b>: Step after which to stop.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.after_create_session(session, coord)` {#StopAtStepHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.after_run(run_context, run_values)` {#StopAtStepHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.before_run(run_context)` {#StopAtStepHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.begin()` {#StopAtStepHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.end(session)` {#StopAtStepHook.end}
-
-Called at the end of session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Supervisor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Supervisor.md
deleted file mode 100644
index d6c6693a5a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Supervisor.md
+++ /dev/null
@@ -1,859 +0,0 @@
-A training helper that checkpoints models and computes summaries.
-
-The Supervisor is a small wrapper around a `Coordinator`, a `Saver`,
-and a `SessionManager` that takes care of common needs of TensorFlow
-training programs.
-
-#### Use for a single program
-
-```python
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a Supervisor that will checkpoint the model in '/tmp/mydir'.
- sv = Supervisor(logdir='/tmp/mydir')
- # Get a TensorFlow session managed by the supervisor.
- with sv.managed_session(FLAGS.master) as sess:
- # Use the session to train the graph.
- while not sv.should_stop():
- sess.run(<my_train_op>)
-```
-
-Within the `with sv.managed_session()` block all variables in the graph have
-been initialized. In addition, a few services have been started to
-checkpoint the model and add summaries to the event log.
-
-If the program crashes and is restarted, the managed session automatically
-reinitializes variables from the most recent checkpoint.
-
-The supervisor is notified of any exception raised by one of the services.
-After an exception is raised, `should_stop()` returns `True`. In that case
-the training loop should also stop. This is why the training loop has to
-check for `sv.should_stop()`.
-
-Exceptions that indicate that the training inputs have been exhausted,
-`tf.errors.OutOfRangeError`, also cause `sv.should_stop()` to return `True`
-but are not re-raised from the `with` block: they indicate a normal
-termination.
-
-#### Use for multiple replicas
-
-To train with replicas you deploy the same program in a `Cluster`.
-One of the tasks must be identified as the *chief*: the task that handles
-initialization, checkpoints, summaries, and recovery. The other tasks
-depend on the *chief* for these services.
-
-The only change you have to make to the single-program code is to indicate
-if the program is running as the *chief*.
-
-```python
-# Choose a task as the chief. This could be based on server_def.task_index,
-# or job_def.name, or job_def.tasks. It's entirely up to the end user.
-# But there can be only one *chief*.
-is_chief = (server_def.task_index == 0)
-server = tf.train.Server(server_def)
-
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a Supervisor that uses log directory on a shared file system.
- # Indicate if you are the 'chief'
- sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)
- # Get a Session in a TensorFlow server on the cluster.
- with sv.managed_session(server.target) as sess:
- # Use the session to train the graph.
- while not sv.should_stop():
- sess.run(<my_train_op>)
-```
-
-In the *chief* task, the `Supervisor` works exactly as in the first example
-above. In the other tasks `sv.managed_session()` waits for the model to have
-been initialized before returning a session to the training code. The
-non-chief tasks depend on the chief task for initializing the model.
-
-If one of the tasks crashes and restarts, `managed_session()`
-checks if the model is initialized. If yes, it just creates a session and
-returns it to the training code that proceeds normally. If the model needs
-to be initialized, the chief task takes care of reinitializing it; the other
-tasks just wait for the model to have been initialized.
-
-NOTE: This modified program still works fine as a single program.
-The single program marks itself as the chief.
-
-#### What `master` string to use
-
-Whether you are running on your machine or in the cluster you can use the
-following values for the --master flag:
-
-* Specifying `''` requests an in-process session that does not use RPC.
-
-* Specifying `'local'` requests a session that uses the RPC-based
- "Master interface" to run TensorFlow programs. See
- [`tf.train.Server.create_local_server()`](#Server.create_local_server) for
- details.
-
-* Specifying `'grpc://hostname:port'` requests a session that uses
- the RPC interface to a specific host, and also allows the in-process
- master to access remote tensorflow workers. Often, it is
- appropriate to pass `server.target` (for some `tf.train.Server`
- named `server`).
-
-#### Advanced use
-
-##### Launching additional services
-
-`managed_session()` launches the Checkpoint and Summary services (threads).
-If you need more services to run you can simply launch them in the block
-controlled by `managed_session()`.
-
-Example: Start a thread to print losses. We want this thread to run
-every 60 seconds, so we launch it with `sv.loop()`.
-
- ```python
- ...
- sv = Supervisor(logdir='/tmp/mydir')
- with sv.managed_session(FLAGS.master) as sess:
- sv.loop(60, print_loss, (sess, ))
- while not sv.should_stop():
- sess.run(my_train_op)
- ```
-
-##### Launching fewer services
-
-`managed_session()` launches the "summary" and "checkpoint" threads which use
-either the optional `summary_op` and `saver` passed to the constructor, or
-default ones created automatically by the supervisor. If you want to run
-your own summary and checkpointing logic, disable these services by passing
-`None` to the `summary_op` and `saver` parameters.
-
-Example: Create summaries manually every 100 steps in the chief.
-
- ```python
- # Create a Supervisor with no automatic summaries.
- sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None)
- # As summary_op was None, managed_session() does not start the
- # summary thread.
- with sv.managed_session(FLAGS.master) as sess:
- for step in xrange(1000000):
- if sv.should_stop():
- break
- if is_chief and step % 100 == 0:
- # Create the summary every 100 chief steps.
- sv.summary_computed(sess, sess.run(my_summary_op))
- else:
- # Train normally
- sess.run(my_train_op)
- ```
-
-##### Custom model initialization
-
-`managed_session()` only supports initializing the model by running an
-`init_op` or restoring from the latest checkpoint. If you have special
-initialization needs, see how to specify a `local_init_op` when creating the
-supervisor. You can also use the `SessionManager` directly to create a
-session and check if it could be initialized automatically.
-
-- - -
-
-#### `tf.train.Supervisor.__init__(graph=None, ready_op=0, ready_for_local_init_op=0, is_chief=True, init_op=0, init_feed_dict=None, local_init_op=0, logdir=None, summary_op=0, saver=0, global_step=0, save_summaries_secs=120, save_model_secs=600, recovery_wait_secs=30, stop_grace_secs=120, checkpoint_basename='model.ckpt', session_manager=None, summary_writer=0, init_fn=None)` {#Supervisor.__init__}
-
-Create a `Supervisor`.
-
-##### Args:
-
-
-* <b>`graph`</b>: A `Graph`. The graph that the model will use. Defaults to the
- default `Graph`. The supervisor may add operations to the graph before
- creating a session, but the graph should not be modified by the caller
- after passing it to the supervisor.
-* <b>`ready_op`</b>: 1-D string `Tensor`. This tensor is evaluated by supervisors in
- `prepare_or_wait_for_session()` to check if the model is ready to use.
- The model is considered ready if it returns an empty array. Defaults to
- the tensor returned from `tf.report_uninitialized_variables()`. If
- `None`, the model is not checked for readiness.
-* <b>`ready_for_local_init_op`</b>: 1-D string `Tensor`. This tensor is evaluated by
- supervisors in `prepare_or_wait_for_session()` to check if the model is
- ready to run the local_init_op.
- The model is considered ready if it returns an empty array. Defaults to
- the tensor returned from
- `tf.report_uninitialized_variables(tf.global_variables())`. If `None`,
- the model is not checked for readiness before running local_init_op.
-* <b>`is_chief`</b>: If True, create a chief supervisor in charge of initializing
- and restoring the model. If False, create a supervisor that relies
- on a chief supervisor for inits and restore.
-* <b>`init_op`</b>: `Operation`. Used by chief supervisors to initialize the model
- when it can not be recovered. Defaults to an `Operation` that
- initializes all variables. If `None`, no initialization is done
- automatically unless you pass a value for `init_fn`, see below.
-* <b>`init_feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- This feed dictionary will be used when `init_op` is evaluated.
-* <b>`local_init_op`</b>: `Operation`. Used by all supervisors to run initializations
- that should run for every new supervisor instance. By default these
- are table initializers and initializers for local variables.
- If `None`, no further per supervisor-instance initialization is
- done automatically.
-* <b>`logdir`</b>: A string. Optional path to a directory where to checkpoint the
- model and log events for the visualizer. Used by chief supervisors.
- The directory will be created if it does not exist.
-* <b>`summary_op`</b>: An `Operation` that returns a Summary for the event logs.
- Used by chief supervisors if a `logdir` was specified. Defaults to the
- operation returned from summary.merge_all(). If `None`, summaries are
- not computed automatically.
-* <b>`saver`</b>: A Saver object. Used by chief supervisors if a `logdir` was
- specified. Defaults to the saver returned by `Saver()`.
- If `None`, the model is not saved automatically.
-* <b>`global_step`</b>: An integer Tensor of size 1 that counts steps. The value
- from 'global_step' is used in summaries and checkpoint filenames.
- Defaults to the op named 'global_step' in the graph if it exists, is of
- rank 1, size 1, and of type tf.int32 or tf.int64. If `None` the global
- step is not recorded in summaries and checkpoint files. Used by chief
- supervisors if a `logdir` was specified.
-* <b>`save_summaries_secs`</b>: Number of seconds between the computation of
- summaries for the event log. Defaults to 120 seconds. Pass 0 to
- disable summaries.
-* <b>`save_model_secs`</b>: Number of seconds between the creation of model
- checkpoints. Defaults to 600 seconds. Pass 0 to disable checkpoints.
-* <b>`recovery_wait_secs`</b>: Number of seconds between checks that the model
- is ready. Used by supervisors when waiting for a chief supervisor
- to initialize or restore the model. Defaults to 30 seconds.
-* <b>`stop_grace_secs`</b>: Grace period, in seconds, given to running threads to
- stop when `stop()` is called. Defaults to 120 seconds.
-* <b>`checkpoint_basename`</b>: The basename for checkpoint saving.
-* <b>`session_manager`</b>: `SessionManager`, which manages Session creation and
- recovery. If it is `None`, a default `SessionManager` will be created
- with the set of arguments passed in for backwards compatibility.
-* <b>`summary_writer`</b>: `SummaryWriter` to use or `USE_DEFAULT`. Can be `None`
- to indicate that no summaries should be written.
-* <b>`init_fn`</b>: Optional callable used to initialize the model. Called
- after the optional `init_op` is called. The callable must accept one
- argument, the session being initialized.
-
-##### Returns:
-
- A `Supervisor`.
-
-
-- - -
-
-#### `tf.train.Supervisor.managed_session(master='', config=None, start_standard_services=True, close_summary_writer=True)` {#Supervisor.managed_session}
-
-Returns a context manager for a managed session.
-
-This context manager creates and automatically recovers a session. It
-optionally starts the standard services that handle checkpoints and
-summaries. It monitors exceptions raised from the `with` block or from the
-services and stops the supervisor as needed.
-
-The context manager is typically used as follows:
-
-```python
-def train():
- sv = tf.train.Supervisor(...)
- with sv.managed_session(<master>) as sess:
- for step in xrange(..):
- if sv.should_stop():
- break
- sess.run(<my training op>)
- ...do other things needed at each training step...
-```
-
-An exception raised from the `with` block or one of the service threads is
-raised again when the block exits. This is done after stopping all threads
-and closing the session. For example, an `AbortedError` exception, raised
-in case of preemption of one of the workers in a distributed model, is
-raised again when the block exits.
-
-If you want to retry the training loop in case of preemption you can do it
-as follows:
-
-```python
-def main(...):
- while True:
- try:
- train()
- except tf.errors.AbortedError:
- pass
-```
-
-As a special case, exceptions used for control flow, such as
-`OutOfRangeError` which reports that input queues are exhausted, are not
-raised again from the `with` block: they indicate a clean termination of
-the training loop and are considered normal termination.
-
-##### Args:
-
-
-* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
- constructor for how this is interpreted.
-* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
- Passed as-is to create the session.
-* <b>`start_standard_services`</b>: Whether to start the standard services,
- such as checkpoint, summary and step counter.
-* <b>`close_summary_writer`</b>: Whether to close the summary writer when
- closing the session. Defaults to True.
-
-##### Returns:
-
- A context manager that yields a `Session` restored from the latest
- checkpoint or initialized from scratch if no checkpoint exists. The
- session is closed when the `with` block exits.
-
-
-- - -
-
-#### `tf.train.Supervisor.prepare_or_wait_for_session(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.prepare_or_wait_for_session}
-
-Make sure the model is ready to be used.
-
-Create a session on 'master', recovering or initializing the model as
-needed, or wait for a session to be ready. If running as the chief
-and `start_standard_services` is set to True, also call the session
-manager to start the standard services.
-
-##### Args:
-
-
-* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
- constructor for how this is interpreted.
-* <b>`config`</b>: Optional ConfigProto proto used to configure the session,
- which is passed as-is to create the session.
-* <b>`wait_for_checkpoint`</b>: Whether we should wait for the availability of a
- checkpoint before creating Session. Defaults to False.
-* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
-* <b>`start_standard_services`</b>: Whether to start the standard services and the
- queue runners.
-
-##### Returns:
-
- A Session object that can be used to drive the model.
-
-
-- - -
-
-#### `tf.train.Supervisor.start_standard_services(sess)` {#Supervisor.start_standard_services}
-
-Start the standard services for 'sess'.
-
-This starts services in the background. The services started depend
-on the parameters to the constructor and may include:
-
- - A Summary thread computing summaries every save_summaries_secs.
- - A Checkpoint thread saving the model every save_model_secs.
- - A StepCounter thread measuring step time.
-
-##### Args:
-
-
-* <b>`sess`</b>: A Session.
-
-##### Returns:
-
- A list of threads that are running the standard services. You can use
- the Supervisor's Coordinator to join these threads with:
- sv.coord.join(<list of threads>)
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If called with a non-chief Supervisor.
-* <b>`ValueError`</b>: If no `logdir` was passed to the constructor, as the
- services need a log directory.
-
-
-- - -
-
-#### `tf.train.Supervisor.start_queue_runners(sess, queue_runners=None)` {#Supervisor.start_queue_runners}
-
-Start threads for `QueueRunners`.
-
-Note that the queue runners collected in the graph key `QUEUE_RUNNERS`
-are already started automatically when you create a session with the
-supervisor, so unless you have non-collected queue runners to start
-you do not need to call this explicitly.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session`.
-* <b>`queue_runners`</b>: A list of `QueueRunners`. If not specified, we'll use the
- list of queue runners gathered in the graph under the key
- `GraphKeys.QUEUE_RUNNERS`.
-
-##### Returns:
-
- The list of threads started for the `QueueRunners`.
-
-
-- - -
-
-#### `tf.train.Supervisor.summary_computed(sess, summary, global_step=None)` {#Supervisor.summary_computed}
-
-Indicate that a summary was computed.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session` object.
-* <b>`summary`</b>: A Summary proto, or a string holding a serialized summary proto.
-* <b>`global_step`</b>: Int. The global step this summary is associated with. If `None`,
- it will try to fetch the current step.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if 'summary' is not a Summary proto or a string.
-* <b>`RuntimeError`</b>: if the Supervisor was created without a `logdir`.
-
-
-
-- - -
-
-#### `tf.train.Supervisor.stop(threads=None, close_summary_writer=True)` {#Supervisor.stop}
-
-Stop the services and the coordinator.
-
-This does not close the session.
-
-##### Args:
-
-
-* <b>`threads`</b>: Optional list of threads to join with the coordinator. If
- `None`, defaults to the threads running the standard services, the
- threads started for `QueueRunners`, and the threads started by the
- `loop()` method. To wait on additional threads, pass the
- list in this parameter.
-* <b>`close_summary_writer`</b>: Whether to close the `summary_writer`. Defaults to
- `True` if the summary writer was created by the supervisor, `False`
- otherwise.
-
-
-- - -
-
-#### `tf.train.Supervisor.request_stop(ex=None)` {#Supervisor.request_stop}
-
-Request that the coordinator stop the threads.
-
-See `Coordinator.request_stop()`.
-
-##### Args:
-
-
-* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
- `sys.exc_info()`. If this is the first call to `request_stop()` the
- corresponding exception is recorded and re-raised from `join()`.
-
-
-- - -
-
-#### `tf.train.Supervisor.should_stop()` {#Supervisor.should_stop}
-
-Check if the coordinator was told to stop.
-
-See `Coordinator.should_stop()`.
-
-##### Returns:
-
- True if the coordinator was told to stop, False otherwise.
-
-
-- - -
-
-#### `tf.train.Supervisor.stop_on_exception()` {#Supervisor.stop_on_exception}
-
-Context handler to stop the supervisor when an exception is raised.
-
-See `Coordinator.stop_on_exception()`.
-
-##### Returns:
-
- A context handler.
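-
-A minimal usage sketch (assuming `sess` and `train_op` already exist):
-
-```python
-with sv.stop_on_exception():
-  while not sv.should_stop():
-    sess.run(train_op)
-```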
-
-
-- - -
-
-#### `tf.train.Supervisor.wait_for_stop()` {#Supervisor.wait_for_stop}
-
-Block waiting for the coordinator to stop.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.Supervisor.Loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.Loop}
-
-Start a LooperThread that calls a function periodically.
-
-If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)`
-repeatedly. Otherwise it calls it every `timer_interval_secs`
-seconds. The thread terminates when a stop is requested.
-
-The started thread is added to the list of threads managed by the supervisor
-so it does not need to be passed to the `stop()` method.
-
-##### Args:
-
-
-* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
-* <b>`target`</b>: A callable object.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Returns:
-
- The started thread.
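-
-For example, a hypothetical sketch that reports progress every 60 seconds
-(`sess` is assumed to be an existing session):
-
-```python
-def print_status():
-  print('global step: %s' % sess.run(sv.global_step))
-
-status_thread = sv.Loop(60, print_status)
-```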
-
-
-- - -
-
-#### `tf.train.Supervisor.PrepareSession(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.PrepareSession}
-
-Make sure the model is ready to be used.
-
-Create a session on 'master', recovering or initializing the model as
-needed, or wait for a session to be ready. If running as the chief
-and `start_standard_services` is set to `True`, also call the session
-manager to start the standard services.
-
-##### Args:
-
-
-* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
- constructor for how this is interpreted.
-* <b>`config`</b>: Optional ConfigProto proto used to configure the session,
- which is passed as-is to create the session.
-* <b>`wait_for_checkpoint`</b>: Whether we should wait for the availability of a
- checkpoint before creating a `Session`. Defaults to False.
-* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
-* <b>`start_standard_services`</b>: Whether to start the standard services and the
- queue runners.
-
-##### Returns:
-
- A Session object that can be used to drive the model.
-
-
-- - -
-
-#### `tf.train.Supervisor.RequestStop(ex=None)` {#Supervisor.RequestStop}
-
-Request that the coordinator stop the threads.
-
-See `Coordinator.request_stop()`.
-
-##### Args:
-
-
-* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
- `sys.exc_info()`. If this is the first call to `request_stop()` the
- corresponding exception is recorded and re-raised from `join()`.
-
-
-- - -
-
-#### `tf.train.Supervisor.ShouldStop()` {#Supervisor.ShouldStop}
-
-Check if the coordinator was told to stop.
-
-See `Coordinator.should_stop()`.
-
-##### Returns:
-
- True if the coordinator was told to stop, False otherwise.
-
-
-- - -
-
-#### `tf.train.Supervisor.StartQueueRunners(sess, queue_runners=None)` {#Supervisor.StartQueueRunners}
-
-Start threads for `QueueRunners`.
-
-Note that the queue runners collected in the graph key `QUEUE_RUNNERS`
-are already started automatically when you create a session with the
-supervisor, so unless you have non-collected queue runners to start
-you do not need to call this explicitly.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session`.
-* <b>`queue_runners`</b>: A list of `QueueRunners`. If not specified, we'll use the
- list of queue runners gathered in the graph under the key
- `GraphKeys.QUEUE_RUNNERS`.
-
-##### Returns:
-
- The list of threads started for the `QueueRunners`.
-
-
-- - -
-
-#### `tf.train.Supervisor.StartStandardServices(sess)` {#Supervisor.StartStandardServices}
-
-Start the standard services for 'sess'.
-
-This starts services in the background. The services started depend
-on the parameters to the constructor and may include:
-
- - A Summary thread computing summaries every save_summaries_secs.
- - A Checkpoint thread saving the model every save_model_secs.
- - A StepCounter thread measuring step time.
-
-##### Args:
-
-
-* <b>`sess`</b>: A Session.
-
-##### Returns:
-
- A list of threads that are running the standard services. You can use
- the Supervisor's Coordinator to join these threads with:
- sv.coord.join(<list of threads>)
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If called with a non-chief Supervisor.
-* <b>`ValueError`</b>: If no `logdir` was passed to the constructor, as the
- services need a log directory.
-
-
-- - -
-
-#### `tf.train.Supervisor.Stop(threads=None, close_summary_writer=True)` {#Supervisor.Stop}
-
-Stop the services and the coordinator.
-
-This does not close the session.
-
-##### Args:
-
-
-* <b>`threads`</b>: Optional list of threads to join with the coordinator. If
- `None`, defaults to the threads running the standard services, the
- threads started for `QueueRunners`, and the threads started by the
- `loop()` method. To wait on additional threads, pass the
- list in this parameter.
-* <b>`close_summary_writer`</b>: Whether to close the `summary_writer`. Defaults to
- `True` if the summary writer was created by the supervisor, `False`
- otherwise.
-
-
-- - -
-
-#### `tf.train.Supervisor.StopOnException()` {#Supervisor.StopOnException}
-
-Context handler to stop the supervisor when an exception is raised.
-
-See `Coordinator.stop_on_exception()`.
-
-##### Returns:
-
- A context handler.
-
-
-- - -
-
-#### `tf.train.Supervisor.SummaryComputed(sess, summary, global_step=None)` {#Supervisor.SummaryComputed}
-
-Indicate that a summary was computed.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session` object.
-* <b>`summary`</b>: A Summary proto, or a string holding a serialized summary proto.
-* <b>`global_step`</b>: Int. The global step this summary is associated with. If `None`,
- it will try to fetch the current step.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if 'summary' is not a Summary proto or a string.
-* <b>`RuntimeError`</b>: if the Supervisor was created without a `logdir`.
-
-
-- - -
-
-#### `tf.train.Supervisor.WaitForStop()` {#Supervisor.WaitForStop}
-
-Block waiting for the coordinator to stop.
-
-
-- - -
-
-#### `tf.train.Supervisor.coord` {#Supervisor.coord}
-
-Return the Coordinator used by the Supervisor.
-
-The Coordinator can be useful if you want to run multiple threads
-during your training.
-
-##### Returns:
-
- A Coordinator object.
-
-
-- - -
-
-#### `tf.train.Supervisor.global_step` {#Supervisor.global_step}
-
-Return the global_step Tensor used by the supervisor.
-
-##### Returns:
-
- An integer Tensor for the global_step.
-
-
-- - -
-
-#### `tf.train.Supervisor.init_feed_dict` {#Supervisor.init_feed_dict}
-
-Return the feed dictionary used when evaluating the `init_op`.
-
-##### Returns:
-
- A feed dictionary or `None`.
-
-
-- - -
-
-#### `tf.train.Supervisor.init_op` {#Supervisor.init_op}
-
-Return the Init Op used by the supervisor.
-
-##### Returns:
-
- An Op or `None`.
-
-
-- - -
-
-#### `tf.train.Supervisor.is_chief` {#Supervisor.is_chief}
-
-Return True if this is a chief supervisor.
-
-##### Returns:
-
- A bool.
-
-
-- - -
-
-#### `tf.train.Supervisor.loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.loop}
-
-Start a LooperThread that calls a function periodically.
-
-If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)`
-repeatedly. Otherwise it calls it every `timer_interval_secs`
-seconds. The thread terminates when a stop is requested.
-
-The started thread is added to the list of threads managed by the supervisor
-so it does not need to be passed to the `stop()` method.
-
-##### Args:
-
-
-* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
-* <b>`target`</b>: A callable object.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Returns:
-
- The started thread.
-
-
-- - -
-
-#### `tf.train.Supervisor.ready_for_local_init_op` {#Supervisor.ready_for_local_init_op}
-
-
-
-
-- - -
-
-#### `tf.train.Supervisor.ready_op` {#Supervisor.ready_op}
-
-Return the Ready Op used by the supervisor.
-
-##### Returns:
-
- An Op or `None`.
-
-
-- - -
-
-#### `tf.train.Supervisor.save_model_secs` {#Supervisor.save_model_secs}
-
-Return the delay between checkpoints.
-
-##### Returns:
-
- The number of seconds between checkpoints.
-
-
-- - -
-
-#### `tf.train.Supervisor.save_path` {#Supervisor.save_path}
-
-Return the save path used by the supervisor.
-
-##### Returns:
-
- A string.
-
-
-- - -
-
-#### `tf.train.Supervisor.save_summaries_secs` {#Supervisor.save_summaries_secs}
-
-Return the delay between summary computations.
-
-##### Returns:
-
- The number of seconds between summary computations.
-
-
-- - -
-
-#### `tf.train.Supervisor.saver` {#Supervisor.saver}
-
-Return the Saver used by the supervisor.
-
-##### Returns:
-
- A Saver object.
-
-
-- - -
-
-#### `tf.train.Supervisor.session_manager` {#Supervisor.session_manager}
-
-Return the SessionManager used by the Supervisor.
-
-##### Returns:
-
- A SessionManager object.
-
-
-- - -
-
-#### `tf.train.Supervisor.summary_op` {#Supervisor.summary_op}
-
-Return the Summary Tensor used by the chief supervisor.
-
-##### Returns:
-
- A string Tensor for the summary or `None`.
-
-
-- - -
-
-#### `tf.train.Supervisor.summary_writer` {#Supervisor.summary_writer}
-
-Return the SummaryWriter used by the chief supervisor.
-
-##### Returns:
-
- A SummaryWriter.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.exponential_decay.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.exponential_decay.md
deleted file mode 100644
index 4fb1a2b575..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.exponential_decay.md
+++ /dev/null
@@ -1,60 +0,0 @@
-### `tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#exponential_decay}
-
-Applies exponential decay to the learning rate.
-
-When training a model, it is often recommended to lower the learning rate as
-the training progresses. This function applies an exponential decay function
-to a provided initial learning rate. It requires a `global_step` value to
-compute the decayed learning rate. You can just pass a TensorFlow variable
-that you increment at each training step.
-
-The function returns the decayed learning rate. It is computed as:
-
-```python
-decayed_learning_rate = learning_rate * \
-                        decay_rate ** (global_step / decay_steps)
-```
-
-If the argument `staircase` is `True`, then `global_step / decay_steps` is an
-integer division and the decayed learning rate follows a staircase function.
-
-Example: decay every 100000 steps with a base of 0.96:
-
-```python
-...
-global_step = tf.Variable(0, trainable=False)
-starter_learning_rate = 0.1
-learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
- 100000, 0.96, staircase=True)
-# Passing global_step to minimize() will increment it at each step.
-learning_step = (
- tf.train.GradientDescentOptimizer(learning_rate)
- .minimize(...my loss..., global_step=global_step)
-)
-```
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The initial learning rate.
-* <b>`global_step`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
- Global step to use for the decay computation. Must not be negative.
-* <b>`decay_steps`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
- Must be positive. See the decay computation above.
-* <b>`decay_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The decay rate.
-* <b>`staircase`</b>: Boolean. If `True`, decay the learning rate at discrete intervals.
-* <b>`name`</b>: String. Optional name of the operation. Defaults to
- 'ExponentialDecay'.
-
-##### Returns:
-
- A scalar `Tensor` of the same type as `learning_rate`. The decayed
- learning rate.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `global_step` is not supplied.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.slice_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.slice_input_producer.md
deleted file mode 100644
index da888d0fc2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.slice_input_producer.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#slice_input_producer}
-
-Produces a slice of each `Tensor` in `tensor_list`.
-
-Implemented using a Queue -- a `QueueRunner` for the Queue
-is added to the current `Graph`'s `QUEUE_RUNNERS` collection.
-
-##### Args:
-
-
-* <b>`tensor_list`</b>: A list of `Tensor` objects. Every `Tensor` in
- `tensor_list` must have the same size in the first dimension.
-* <b>`num_epochs`</b>: An integer (optional). If specified, `slice_input_producer`
- produces each slice `num_epochs` times before generating
- an `OutOfRange` error. If not specified, `slice_input_producer` can cycle
- through the slices an unlimited number of times.
-* <b>`shuffle`</b>: Boolean. If true, the slices are randomly shuffled within
- each epoch.
-* <b>`seed`</b>: An integer (optional). Seed used if shuffle == True.
-* <b>`capacity`</b>: An integer. Sets the queue capacity.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: A name for the operations (optional).
-
-##### Returns:
-
- A list of tensors, one for each element of `tensor_list`. If the tensor
- in `tensor_list` has shape `[N, a, b, .., z]`, then the corresponding output
- tensor will have shape `[a, b, ..., z]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `slice_input_producer` produces nothing from `tensor_list`.
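-
-A hypothetical usage sketch (`images` and `labels` are placeholder tensors
-whose first dimensions match):
-
-```python
-image, label = tf.train.slice_input_producer(
-    [images, labels], num_epochs=10, shuffle=True)
-# Each evaluation dequeues one matched (image, label) slice.
-```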
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.trainable_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.trainable_variables.md
deleted file mode 100644
index 894d64a2b4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.trainable_variables.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.trainable_variables()` {#trainable_variables}
-
-Returns all variables created with `trainable=True`.
-
-When passed `trainable=True`, the `Variable()` constructor automatically
-adds new variables to the graph collection
-`GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the
-contents of that collection.
-
-##### Returns:
-
- A list of Variable objects.
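-
-A small sketch of the behavior:
-
-```python
-w = tf.Variable(tf.zeros([10]), name="w")         # trainable by default
-b = tf.Variable(tf.zeros([10]), trainable=False)  # excluded
-assert w in tf.trainable_variables()
-assert b not in tf.trainable_variables()
-```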
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.truncated_normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.truncated_normal.md
deleted file mode 100644
index 9ae13882d3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.truncated_normal.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)` {#truncated_normal}
-
-Outputs random values from a truncated normal distribution.
-
-The generated values follow a normal distribution with specified mean and
-standard deviation, except that values whose magnitude is more than 2 standard
-deviations from the mean are dropped and re-picked.
-
-##### Args:
-
-
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
-* <b>`mean`</b>: A 0-D Tensor or Python value of type `dtype`. The mean of the
- truncated normal distribution.
-* <b>`stddev`</b>: A 0-D Tensor or Python value of type `dtype`. The standard deviation
- of the truncated normal distribution.
-* <b>`dtype`</b>: The type of the output.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tensor of the specified shape filled with random truncated normal values.
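-
-For example, a common weight-initialization sketch (the shape and stddev
-values are placeholders):
-
-```python
-weights = tf.Variable(
-    tf.truncated_normal([784, 256], mean=0.0, stddev=0.1, seed=42))
-```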
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf_debug.watch_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf_debug.watch_graph.md
deleted file mode 100644
index 5f206435bd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf_debug.watch_graph.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf_debug.watch_graph(run_options, graph, debug_ops='DebugIdentity', debug_urls=None, node_name_regex_whitelist=None, op_type_regex_whitelist=None, global_step=-1)` {#watch_graph}
-
-Add debug watches to `RunOptions` for a TensorFlow graph.
-
-To watch all `Tensor`s on the graph, let both `node_name_regex_whitelist`
-and `op_type_regex_whitelist` be the default (`None`).
-
-N.B.: Under certain circumstances, not all specified `Tensor`s will
- actually be watched (e.g., nodes that are constant-folded at runtime will
- not be watched).
-
-##### Args:
-
-
-* <b>`run_options`</b>: An instance of `config_pb2.RunOptions` to be modified.
-* <b>`graph`</b>: An instance of `ops.Graph`.
-* <b>`debug_ops`</b>: (`str` or `list` of `str`) name(s) of the debug op(s) to use.
-* <b>`debug_urls`</b>: URLs to send debug values to. Can be a list of strings,
- a single string, or None. The case of a single string is equivalent to
- a list consisting of a single string, e.g., `file:///tmp/tfdbg_dump_1`,
- `grpc://localhost:12345`.
-* <b>`node_name_regex_whitelist`</b>: Regular-expression whitelist for node_name,
- e.g., `"(weight_[0-9]+|bias_.*)"`
-* <b>`op_type_regex_whitelist`</b>: Regular-expression whitelist for the op type of
- nodes, e.g., `"(Variable|Add)"`.
- If both `node_name_regex_whitelist` and `op_type_regex_whitelist`
- are set, the two filtering operations will occur in a logical `AND`
- relation. In other words, a node will be included if and only if it
- hits both whitelists.
-* <b>`global_step`</b>: (`int`) Optional global_step count for this debug tensor
- watch.
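-
-A hypothetical sketch of wiring the watches into a `Session.run()` call
-(`sess` and `fetches` are placeholders):
-
-```python
-run_options = tf.RunOptions()
-tf_debug.watch_graph(
-    run_options, sess.graph,
-    debug_urls="file:///tmp/tfdbg_dump_1",
-    node_name_regex_whitelist="(weight_[0-9]+|bias_.*)")
-sess.run(fetches, options=run_options)
-```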
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.DeviceSpec.from_string.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.DeviceSpec.from_string.md
deleted file mode 100644
index 5cbba0ada6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.DeviceSpec.from_string.md
+++ /dev/null
@@ -1,18 +0,0 @@
-#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string}
-
-Construct a `DeviceSpec` from a string.
-
-##### Args:
-
-
-* <b>`spec`</b>: a string of the form
- /job:<name>/replica:<id>/task:<id>/device:CPU:<id>
- or
- /job:<name>/replica:<id>/task:<id>/device:GPU:<id>
- as CPU and GPU are mutually exclusive.
- All entries are optional.
-
-##### Returns:
-
- A DeviceSpec.
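-
-For example, a minimal sketch:
-
-```python
-spec = tf.DeviceSpec.from_string("/job:worker/replica:0/task:1/device:GPU:0")
-with tf.device(spec.to_string()):
-  v = tf.Variable(tf.zeros([1]))
-```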
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.FixedLenFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.FixedLenFeature.md
deleted file mode 100644
index 55a007852a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.FixedLenFeature.md
+++ /dev/null
@@ -1,59 +0,0 @@
-Configuration for parsing a fixed-length input feature.
-
-To treat sparse input as dense, provide a `default_value`; otherwise,
-the parse functions will fail on any examples missing this feature.
-
-Fields:
- shape: Shape of input data.
- dtype: Data type of input.
- default_value: Value to be used if an example is missing this feature. It
- must be compatible with `dtype`.
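-
-For example, a parsing sketch (`serialized` is a placeholder batch of
-serialized `Example` protos):
-
-```python
-features = tf.parse_example(serialized, {
-    "age": tf.FixedLenFeature([], dtype=tf.int64, default_value=-1),
-})
-```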
-- - -
-
-#### `tf.FixedLenFeature.__getnewargs__()` {#FixedLenFeature.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.FixedLenFeature.__getstate__()` {#FixedLenFeature.__getstate__}
-
-Exclude the OrderedDict from pickling.
-
-
-- - -
-
-#### `tf.FixedLenFeature.__new__(_cls, shape, dtype, default_value=None)` {#FixedLenFeature.__new__}
-
-Create new instance of FixedLenFeature(shape, dtype, default_value)
-
-
-- - -
-
-#### `tf.FixedLenFeature.__repr__()` {#FixedLenFeature.__repr__}
-
-Return a nicely formatted representation string.
-
-
-- - -
-
-#### `tf.FixedLenFeature.default_value` {#FixedLenFeature.default_value}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.FixedLenFeature.dtype` {#FixedLenFeature.dtype}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.FixedLenFeature.shape` {#FixedLenFeature.shape}
-
-Alias for field number 0
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md
deleted file mode 100644
index 08323b592f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.Operation.md
+++ /dev/null
@@ -1,234 +0,0 @@
-Represents a graph node that performs computation on tensors.
-
-An `Operation` is a node in a TensorFlow `Graph` that takes zero or
-more `Tensor` objects as input, and produces zero or more `Tensor`
-objects as output. Objects of type `Operation` are created by
-calling a Python op constructor (such as
-[`tf.matmul()`](../../api_docs/python/math_ops.md#matmul))
-or [`Graph.create_op()`](../../api_docs/python/framework.md#Graph.create_op).
-
-For example `c = tf.matmul(a, b)` creates an `Operation` of type
-"MatMul" that takes tensors `a` and `b` as input, and produces `c`
-as output.
-
-After the graph has been launched in a session, an `Operation` can
-be executed by passing it to
-[`Session.run()`](../../api_docs/python/client.md#Session.run).
-`op.run()` is a shortcut for calling `tf.get_default_session().run(op)`.
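-
-For example, a minimal sketch:
-
-```python
-a = tf.constant([[1.0, 2.0]])
-b = tf.constant([[3.0], [4.0]])
-c = tf.matmul(a, b)
-print(c.op.type)  # "MatMul"
-print(c.op.name)  # e.g. "MatMul"
-```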
-- - -
-
-#### `tf.Operation.__init__(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)` {#Operation.__init__}
-
-Creates an `Operation`.
-
-NOTE: This constructor validates the name of the `Operation` (passed
-as `node_def.name`). Valid `Operation` names match the following
-regular expression:
-
- [A-Za-z0-9.][A-Za-z0-9_.\\-/]*
-
-##### Args:
-
-
-* <b>`node_def`</b>: `node_def_pb2.NodeDef`. `NodeDef` for the `Operation`.
- Used for attributes of `node_def_pb2.NodeDef`, typically `name`,
- `op`, and `device`. The `input` attribute is irrelevant here
- as it will be computed when generating the model.
-* <b>`g`</b>: `Graph`. The parent graph.
-* <b>`inputs`</b>: list of `Tensor` objects. The inputs to this `Operation`.
-* <b>`output_types`</b>: list of `DType` objects. List of the types of the
- `Tensors` computed by this operation. The length of this list indicates
- the number of output endpoints of the `Operation`.
-* <b>`control_inputs`</b>: list of operations or tensors from which to have a
- control dependency.
-* <b>`input_types`</b>: List of `DType` objects representing the
- types of the tensors accepted by the `Operation`. By default
- uses `[x.dtype.base_dtype for x in inputs]`. Operations that expect
- reference-typed inputs must specify these explicitly.
-* <b>`original_op`</b>: Optional. Used to associate the new `Operation` with an
- existing `Operation` (for example, a replica with the op that was
- replicated).
-* <b>`op_def`</b>: Optional. The `op_def_pb2.OpDef` proto that describes the
- op type that this `Operation` represents.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if control inputs are not Operations or Tensors,
- or if `node_def` is not a `NodeDef`,
- or if `g` is not a `Graph`,
- or if `inputs` are not tensors,
- or if `inputs` and `input_types` are incompatible.
-* <b>`ValueError`</b>: if the `node_def` name is not valid.
-
-
-- - -
-
-#### `tf.Operation.__repr__()` {#Operation.__repr__}
-
-
-
-
-- - -
-
-#### `tf.Operation.__str__()` {#Operation.__str__}
-
-
-
-
-- - -
-
-#### `tf.Operation.colocation_groups()` {#Operation.colocation_groups}
-
-Returns the list of colocation groups of the op.
-
-
-- - -
-
-#### `tf.Operation.control_inputs` {#Operation.control_inputs}
-
-The `Operation` objects on which this op has a control dependency.
-
-Before this op is executed, TensorFlow will ensure that the
-operations in `self.control_inputs` have finished executing. This
-mechanism can be used to run ops sequentially for performance
-reasons, or to ensure that the side effects of an op are observed
-in the correct order.
-
-##### Returns:
-
- A list of `Operation` objects.
-
-
-- - -
-
-#### `tf.Operation.device` {#Operation.device}
-
-The name of the device to which this op has been assigned, if any.
-
-##### Returns:
-
- The string name of the device to which this op has been
- assigned, or an empty string if it has not been assigned to a
- device.
-
-
-- - -
-
-#### `tf.Operation.get_attr(name)` {#Operation.get_attr}
-
-Returns the value of the attr of this op with the given `name`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the attr to fetch.
-
-##### Returns:
-
- The value of the attr, as a Python object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If this op does not have an attr with the given `name`.
-
-
-- - -
-
-#### `tf.Operation.graph` {#Operation.graph}
-
-The `Graph` that contains this operation.
-
-
-- - -
-
-#### `tf.Operation.inputs` {#Operation.inputs}
-
-The list of `Tensor` objects representing the data inputs of this op.
-
-
-- - -
-
-#### `tf.Operation.name` {#Operation.name}
-
-The full name of this operation.
-
-
-- - -
-
-#### `tf.Operation.node_def` {#Operation.node_def}
-
-Returns a serialized `NodeDef` representation of this operation.
-
-##### Returns:
-
- A
- [`NodeDef`](https://www.tensorflow.org/code/tensorflow/core/framework/node_def.proto)
- protocol buffer.
-
-
-- - -
-
-#### `tf.Operation.op_def` {#Operation.op_def}
-
-Returns the `OpDef` proto that represents the type of this op.
-
-##### Returns:
-
- An
- [`OpDef`](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto)
- protocol buffer.
-
-
-- - -
-
-#### `tf.Operation.outputs` {#Operation.outputs}
-
-The list of `Tensor` objects representing the outputs of this op.
-
-
-- - -
-
-#### `tf.Operation.run(feed_dict=None, session=None)` {#Operation.run}
-
-Runs this operation in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for this operation.
-
-*N.B.* Before invoking `Operation.run()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run)
- for a description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to run this operation. If
- `None`, the default session will be used.
-
-
-- - -
-
-#### `tf.Operation.traceback` {#Operation.traceback}
-
-Returns the call stack from when this operation was constructed.
-
-
-- - -
-
-#### `tf.Operation.type` {#Operation.type}
-
-The type of the op (e.g. `"MatMul"`).
-
-
-- - -
-
-#### `tf.Operation.values()` {#Operation.values}
-
-DEPRECATED: Use outputs.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.PaddingFIFOQueue.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.PaddingFIFOQueue.from_list.md
deleted file mode 100644
index 105b0fd4c6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.PaddingFIFOQueue.from_list.md
+++ /dev/null
@@ -1,21 +0,0 @@
-#### `tf.PaddingFIFOQueue.from_list(index, queues)` {#PaddingFIFOQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.QueueBase.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.QueueBase.from_list.md
deleted file mode 100644
index d9a2e7c71f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.QueueBase.from_list.md
+++ /dev/null
@@ -1,21 +0,0 @@
-#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
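-
-A hypothetical sketch that routes dequeues between two queues using a
-placeholder selector tensor `which`:
-
-```python
-q0 = tf.FIFOQueue(10, tf.float32)
-q1 = tf.FIFOQueue(10, tf.float32)
-q = tf.QueueBase.from_list(which, [q0, q1])
-value = q.dequeue()
-```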
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.as_dtype.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.as_dtype.md
deleted file mode 100644
index 50a048aacb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.as_dtype.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.as_dtype(type_value)` {#as_dtype}
-
-Converts the given `type_value` to a `DType`.
-
-##### Args:
-
-
-* <b>`type_value`</b>: A value that can be converted to a `tf.DType`
- object. This may currently be a `tf.DType` object, a
- [`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto),
- a string type name, or a `numpy.dtype`.
-
-##### Returns:
-
- A `DType` corresponding to `type_value`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `type_value` cannot be converted to a `DType`.
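-
-For example, a minimal sketch:
-
-```python
-import numpy as np
-
-assert tf.as_dtype("float32") == tf.float32
-assert tf.as_dtype(np.int64) == tf.int64
-```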
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_equal.md
deleted file mode 100644
index b50abb29dd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_equal.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.assert_equal(x, y, data=None, summarize=None, message=None, name=None)` {#assert_equal}
-
-Assert the condition `x == y` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_equal(x, y)]):
- output = tf.reduce_sum(x)
-```
-
-This condition holds if for every pair of (possibly broadcast) elements
-`x[i]`, `y[i]`, we have `x[i] == y[i]`.
-If both `x` and `y` are empty, this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`y`</b>: Numeric `Tensor`, same dtype as and broadcastable to `x`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`, `y`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_equal".
-
-##### Returns:
-
- Op that raises `InvalidArgumentError` if `x == y` is False.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_variables_initialized.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_variables_initialized.md
deleted file mode 100644
index ac8604579d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.assert_variables_initialized.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.assert_variables_initialized(var_list=None)` {#assert_variables_initialized}
-
-Returns an Op to check if variables are initialized.
-
-NOTE: This function is obsolete and will be removed in 6 months. Please
-change your implementation to use `report_uninitialized_variables()`.
-
-When run, the returned Op will raise the exception `FailedPreconditionError`
-if any of the variables has not yet been initialized.
-
-Note: This function is implemented by trying to fetch the values of the
-variables. If one of the variables is not initialized a message may be
-logged by the C++ runtime. This is expected.
-
-##### Args:
-
-
-* <b>`var_list`</b>: List of `Variable` objects to check. Defaults to the
- value of `global_variables()`.
-
-##### Returns:
-
- An Op, or None if there are no variables.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.bayesflow.entropy.renyi_ratio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.bayesflow.entropy.renyi_ratio.md
deleted file mode 100644
index 5b801848fc..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.bayesflow.entropy.renyi_ratio.md
+++ /dev/null
@@ -1,103 +0,0 @@
-### `tf.contrib.bayesflow.entropy.renyi_ratio(log_p, q, alpha, z=None, n=None, seed=None, name='renyi_ratio')` {#renyi_ratio}
-
-Monte Carlo estimate of the ratio appearing in Renyi divergence.
-
-This can be used to compute the Renyi (alpha) divergence, or a log evidence
-approximation based on Renyi divergence.
-
-#### Definition
-
-With `z_i` iid samples from `q`, and `exp{log_p(z)} = p(z)`, this `Op` returns
-the (biased for finite `n`) estimate:
-
-```
-(1 - alpha)^{-1} Log[ n^{-1} sum_{i=1}^n ( p(z_i) / q(z_i) )^{1 - alpha} ]
-  \approx (1 - alpha)^{-1} Log[ E_q[ (p(Z) / q(Z))^{1 - alpha} ] ]
-```
-
-This ratio appears in different contexts:
-
-#### Renyi divergence
-
-If `log_p(z) = Log[p(z)]` is the log prob of a distribution, and
-`alpha > 0`, `alpha != 1`, this `Op` approximates `-1` times Renyi divergence:
-
-```
-# Choose reasonably high n to limit bias, see below.
-renyi_ratio(log_p, q, alpha, n=100)
- \approx -1 * D_alpha[q || p], where
-D_alpha[q || p] := (1 - alpha)^{-1} Log E_q[(p(Z) / q(Z))^{1 - alpha}]
-```
-
-The Renyi (or "alpha") divergence is non-negative and equal to zero iff
-`q = p`. Various limits of `alpha` lead to different special case results:
-
-```
-alpha      D_alpha[q || p]
------      ---------------
---> 0      Log[ int_{q > 0} p(z) dz ]
-= 0.5      -2 Log[1 - Hel^2[q || p]],    (\propto squared Hellinger distance)
---> 1      KL[q || p]
-= 2        Log[ 1 + chi^2[q || p] ],     (\propto squared Chi-2 divergence)
---> infty  Log[ max_z{q(z) / p(z)} ],    (min description length principle).
-```
-
-See "Renyi Divergence Variational Inference", by Li and Turner.
-
-#### Log evidence approximation
-
-If `log_p(z) = Log[p(z, x)]` is the log of the joint distribution `p`, this is
-an alternative to the ELBO common in variational inference.
-
-```
-L_alpha(q, p) = Log[p(x)] - D_alpha[q || p]
-```
-
-If `q` and `p` have the same support, and `0 < a <= b < 1`, one can show
-`ELBO <= D_b <= D_a <= Log[p(x)]`. Thus, this `Op` allows a smooth
-interpolation between the ELBO and the true evidence.
-
-#### Stability notes
-
-Note that when `1 - alpha` is not small, the ratio `(p(z) / q(z))^{1 - alpha}`
-is subject to underflow/overflow issues. For that reason, it is evaluated in
-log-space after centering. Nonetheless, infinite/NaN results may occur. For
-that reason, one may wish to shrink `alpha` gradually. See the `Op`
-`renyi_alpha`. Using `float64` will also help.
-
-
-#### Bias for finite sample size
-
-Due to nonlinearity of the logarithm, for random variables `{X_1,...,X_n}`,
-`E[ Log[sum_{i=1}^n X_i] ] != Log[ E[sum_{i=1}^n X_i] ]`. As a result, this
-estimate is biased for finite `n`. For `alpha < 1`, it is non-decreasing
-with `n` (in expectation). For example, if `n = 1`, this estimator yields the
-same result as `elbo_ratio`, and as `n` increases the expected value
-of the estimator increases.
-
-#### Call signature
-
-User supplies either a `Tensor` of samples `z`, or a number of samples to draw, `n`.
-
-##### Args:
-
-
-* <b>`log_p`</b>: Callable mapping samples from `q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_p` works "just like" `q.log_prob`.
-* <b>`q`</b>: `tf.contrib.distributions.Distribution`.
- `float64` `dtype` recommended.
- `log_p` and `q` should be supported on the same set.
-* <b>`alpha`</b>: `Tensor` with shape `q.batch_shape` and values not equal to 1.
-* <b>`z`</b>: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. The number of samples to use if `z` is not provided.
- Note that this can be highly biased for small `n`, see docstring.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
-
-* <b>`renyi_result`</b>: The scaled log of the sample mean. `Tensor` with `shape`
- equal to the batch shape of `q`, and `dtype` = `q.dtype`.
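-
-A hypothetical sketch (`log_p` is a placeholder callable and `mu`, `sigma`
-are placeholder parameters for the variational distribution):
-
-```python
-q = tf.contrib.distributions.Normal(loc=mu, scale=sigma)
-renyi_estimate = tf.contrib.bayesflow.entropy.renyi_ratio(
-    log_p=log_p, q=q, alpha=0.5, n=100)
-```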
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ConditionalTransformedDistribution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ConditionalTransformedDistribution.md
deleted file mode 100644
index 6607f5a275..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ConditionalTransformedDistribution.md
+++ /dev/null
@@ -1,489 +0,0 @@
-A TransformedDistribution that allows intrinsic conditioning.
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.__init__(distribution, bijector=None, batch_shape=None, event_shape=None, validate_args=False, name=None)` {#ConditionalTransformedDistribution.__init__}
-
-Construct a Transformed Distribution.
-
-##### Args:
-
-
-* <b>`distribution`</b>: The base distribution instance to transform. Typically an
- instance of `Distribution`.
-* <b>`bijector`</b>: The object responsible for calculating the transformation.
- Typically an instance of `Bijector`. `None` means `Identity()`.
-* <b>`batch_shape`</b>: `integer` vector `Tensor` which overrides `distribution`
- `batch_shape`; valid only if `distribution.is_scalar_batch()`.
-* <b>`event_shape`</b>: `integer` vector `Tensor` which overrides `distribution`
- `event_shape`; valid only if `distribution.is_scalar_event()`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class. Default:
- `bijector.name + distribution.name`.
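-
-A hypothetical construction sketch (a log-normal built from a `Normal` base
-and an `Exp` bijector; the bijector module path is an assumption about this
-contrib API):
-
-```python
-ds = tf.contrib.distributions
-log_normal = ds.ConditionalTransformedDistribution(
-    distribution=ds.Normal(loc=0., scale=1.),
-    bijector=ds.bijector.Exp(),
-    name="LogNormalTransformed")
-```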
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.allow_nan_stats` {#ConditionalTransformedDistribution.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.batch_shape` {#ConditionalTransformedDistribution.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.batch_shape_tensor(name='batch_shape_tensor')` {#ConditionalTransformedDistribution.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.bijector` {#ConditionalTransformedDistribution.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.cdf(*args, **kwargs)` {#ConditionalTransformedDistribution.cdf}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.copy(**override_parameters_kwargs)` {#ConditionalTransformedDistribution.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.covariance(name='covariance')` {#ConditionalTransformedDistribution.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.distribution` {#ConditionalTransformedDistribution.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.dtype` {#ConditionalTransformedDistribution.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.entropy(name='entropy')` {#ConditionalTransformedDistribution.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.event_shape` {#ConditionalTransformedDistribution.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.event_shape_tensor(name='event_shape_tensor')` {#ConditionalTransformedDistribution.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.is_continuous` {#ConditionalTransformedDistribution.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.is_scalar_batch(name='is_scalar_batch')` {#ConditionalTransformedDistribution.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.is_scalar_event(name='is_scalar_event')` {#ConditionalTransformedDistribution.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.log_cdf(*args, **kwargs)` {#ConditionalTransformedDistribution.log_cdf}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.log_prob(*args, **kwargs)` {#ConditionalTransformedDistribution.log_prob}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.log_survival_function(*args, **kwargs)` {#ConditionalTransformedDistribution.log_survival_function}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.mean(name='mean')` {#ConditionalTransformedDistribution.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.mode(name='mode')` {#ConditionalTransformedDistribution.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.name` {#ConditionalTransformedDistribution.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#ConditionalTransformedDistribution.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.param_static_shapes(cls, sample_shape)` {#ConditionalTransformedDistribution.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.parameters` {#ConditionalTransformedDistribution.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.prob(*args, **kwargs)` {#ConditionalTransformedDistribution.prob}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.reparameterization_type` {#ConditionalTransformedDistribution.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.sample(*args, **kwargs)` {#ConditionalTransformedDistribution.sample}
-
-##### `kwargs`:
-
-* `**condition_kwargs`: Named arguments forwarded to subclass implementation.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.stddev(name='stddev')` {#ConditionalTransformedDistribution.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.survival_function(*args, **kwargs)` {#ConditionalTransformedDistribution.survival_function}
-
-Additional documentation from `ConditionalTransformedDistribution`:
-
-##### `kwargs`:
-
-* `bijector_kwargs`: Python dictionary of arg names/values forwarded to the bijector.
-* `distribution_kwargs`: Python dictionary of arg names/values forwarded to the distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.validate_args` {#ConditionalTransformedDistribution.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.ConditionalTransformedDistribution.variance(name='variance')` {#ConditionalTransformedDistribution.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ExponentialWithSoftplusRate.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ExponentialWithSoftplusRate.md
deleted file mode 100644
index d5ccf96744..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.ExponentialWithSoftplusRate.md
+++ /dev/null
@@ -1,565 +0,0 @@
-Exponential with softplus transform on `rate`.
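-
-A minimal sketch (the `rate` values are placeholders); the softplus allows
-`rate` to be parameterized by unconstrained values:
-
-```python
-ds = tf.contrib.distributions
-dist = ds.ExponentialWithSoftplusRate(rate=[-2.0, 0.5, 3.0])
-samples = dist.sample(5)
-```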
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.__init__(rate, validate_args=False, allow_nan_stats=True, name='ExponentialWithSoftplusRate')` {#ExponentialWithSoftplusRate.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.allow_nan_stats` {#ExponentialWithSoftplusRate.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.batch_shape` {#ExponentialWithSoftplusRate.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.batch_shape_tensor(name='batch_shape_tensor')` {#ExponentialWithSoftplusRate.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.cdf(value, name='cdf')` {#ExponentialWithSoftplusRate.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.concentration` {#ExponentialWithSoftplusRate.concentration}
-
-Concentration parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.copy(**override_parameters_kwargs)` {#ExponentialWithSoftplusRate.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
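-
-A short usage sketch (parameter values are illustrative only):
-
-```python
-ds = tf.contrib.distributions
-
-dist = ds.ExponentialWithSoftplusRate(rate=[1., 2.])
-# Same `rate` as `dist`, but with argument validation switched on.
-dist2 = dist.copy(validate_args=True)
-```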
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.covariance(name='covariance')` {#ExponentialWithSoftplusRate.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.dtype` {#ExponentialWithSoftplusRate.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.entropy(name='entropy')` {#ExponentialWithSoftplusRate.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.event_shape` {#ExponentialWithSoftplusRate.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.event_shape_tensor(name='event_shape_tensor')` {#ExponentialWithSoftplusRate.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.is_continuous` {#ExponentialWithSoftplusRate.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.is_scalar_batch(name='is_scalar_batch')` {#ExponentialWithSoftplusRate.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.is_scalar_event(name='is_scalar_event')` {#ExponentialWithSoftplusRate.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.log_cdf(value, name='log_cdf')` {#ExponentialWithSoftplusRate.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.log_prob(value, name='log_prob')` {#ExponentialWithSoftplusRate.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.log_survival_function(value, name='log_survival_function')` {#ExponentialWithSoftplusRate.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.mean(name='mean')` {#ExponentialWithSoftplusRate.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.mode(name='mode')` {#ExponentialWithSoftplusRate.mode}
-
-Mode.
-
-Additional documentation from `Gamma`:
-
-The mode of a gamma distribution is `(shape - 1) / rate` when
-`shape > 1`, and `NaN` otherwise. If `self.allow_nan_stats` is `False`,
-an exception will be raised rather than returning `NaN`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.name` {#ExponentialWithSoftplusRate.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#ExponentialWithSoftplusRate.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.param_static_shapes(cls, sample_shape)` {#ExponentialWithSoftplusRate.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.parameters` {#ExponentialWithSoftplusRate.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.prob(value, name='prob')` {#ExponentialWithSoftplusRate.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.rate` {#ExponentialWithSoftplusRate.rate}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.reparameterization_type` {#ExponentialWithSoftplusRate.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.sample(sample_shape=(), seed=None, name='sample')` {#ExponentialWithSoftplusRate.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.stddev(name='stddev')` {#ExponentialWithSoftplusRate.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.survival_function(value, name='survival_function')` {#ExponentialWithSoftplusRate.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.validate_args` {#ExponentialWithSoftplusRate.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.ExponentialWithSoftplusRate.variance(name='variance')` {#ExponentialWithSoftplusRate.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.md
deleted file mode 100644
index c5650c8055..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.md
+++ /dev/null
@@ -1,792 +0,0 @@
-The multivariate normal distribution on `R^k`.
-
-The Multivariate Normal distribution is defined over `R^k` and parameterized
-by a (batch of) length-`k` `loc` vector (aka "mu") and a (batch of) `k x k`
-`scale` matrix; `covariance = scale @ scale.T` where `@` denotes
-matrix-multiplication.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; loc, scale) = exp(-0.5 ||y||**2) / Z,
-y = inv(scale) @ (x - loc),
-Z = (2 pi)**(0.5 k) |det(scale)|,
-```
-
-where:
-
-* `loc` is a vector in `R^k`,
-* `scale` is a linear operator in `R^{k x k}`, `cov = scale @ scale.T`,
-* `Z` denotes the normalization constant, and,
-* `||y||**2` denotes the squared Euclidean norm of `y`.
-
-A (non-batch) `scale` matrix is:
-
-```none
-scale = diag(scale_diag + scale_identity_multiplier * ones(k)) +
- scale_perturb_factor @ diag(scale_perturb_diag) @ scale_perturb_factor.T
-```
-
-where:
-
-* `scale_diag.shape = [k]`,
-* `scale_identity_multiplier.shape = []`,
-* `scale_perturb_factor.shape = [k, r]`, typically `k >> r`, and,
-* `scale_perturb_diag.shape = [r]`.
-
-Additional leading dimensions (if any) will index batches.
-
-If both `scale_diag` and `scale_identity_multiplier` are `None`, then
-`scale` is the Identity matrix.
-
-The MultivariateNormal distribution is a member of the [location-scale
-family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ MultivariateNormal(loc=0, scale=1) # Identity scale, zero shift.
-Y = scale @ X + loc
-```
-
-#### Examples
-
-```python
-ds = tf.contrib.distributions
-
-# Initialize a single 3-variate Gaussian with covariance `cov = S @ S.T`,
-# `S = diag(d) + U @ diag(m) @ U.T`. The perturbation, `U @ diag(m) @ U.T`, is
-# a rank-2 update.
-mu = [-0.5, 0, 0.5]  # shape: [3]
-d = [1.5, 0.5, 2] # shape: [3]
-U = [[1., 2],
- [-1, 1],
- [2, -0.5]] # shape: [3, 2]
-m = [4., 5] # shape: [2]
-mvn = ds.MultivariateNormalDiagPlusLowRank(
- loc=mu,
- scale_diag=d,
- scale_perturb_factor=U,
- scale_perturb_diag=m)
-
-# Evaluate this on an observation in `R^3`, returning a scalar.
-mvn.prob([-1., 0, 1]).eval()  # shape: []
-
-# Initialize a 2-batch of 3-variate Gaussians; `S = I + U @ diag(m) @ U.T`.
-mu = [[1., 2, 3],
- [11, 22, 33]] # shape: [b, k] = [2, 3]
-U = [[[1., 2],
- [3, 4],
- [5, 6]],
- [[0.5, 0.75],
- [1.0, 0.25],
- [1.5, 1.25]]] # shape: [b, k, r] = [2, 3, 2]
-m = [[0.1, 0.2],
- [0.4, 0.5]] # shape: [b, r] = [2, 2]
-
-mvn = ds.MultivariateNormalDiagPlusLowRank(
- loc=mu,
- scale_perturb_factor=U,
- scale_perturb_diag=m)
-
-mvn.covariance().eval() # shape: [2, 3, 3]
-# ==> [[[ 15.63 31.57 48.51]
-# [ 31.57 69.31 105.05]
-# [ 48.51 105.05 162.59]]
-#
-# [[ 2.59 1.41 3.35]
-# [ 1.41 2.71 3.34]
-# [ 3.35 3.34 8.35]]]
-
-# Compute the pdf of two `R^3` observations (one from each batch);
-# return a length-2 vector.
-x = [[-0.9, 0, 0.1],
- [-10, 0, 9]] # shape: [2, 3]
-mvn.prob(x).eval() # shape: [2]
-```
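-
-The covariance printed above can be reproduced by hand from the `scale`
-construction; a NumPy cross-check of the second example (a sketch,
-independent of TensorFlow):
-
-```python
-import numpy as np
-
-U = np.array([[[1., 2.], [3., 4.], [5., 6.]],
-              [[0.5, 0.75], [1.0, 0.25], [1.5, 1.25]]])  # [2, 3, 2]
-m = np.array([[0.1, 0.2], [0.4, 0.5]])                   # [2, 2]
-
-for b in range(2):
-  # No `scale_diag` or `scale_identity_multiplier` was given, so the
-  # base matrix is the identity: scale = I + U @ diag(m) @ U.T.
-  scale = np.eye(3) + U[b] @ np.diag(m[b]) @ U[b].T
-  print(scale @ scale.T)  # matches mvn.covariance() for batch member b
-```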
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.__init__(loc=None, scale_diag=None, scale_identity_multiplier=None, scale_perturb_factor=None, scale_perturb_diag=None, validate_args=False, allow_nan_stats=True, name='MultivariateNormalDiagPlusLowRank')` {#MultivariateNormalDiagPlusLowRank.__init__}
-
-Construct Multivariate Normal distribution on `R^k`.
-
-The `batch_shape` is the broadcast shape between `loc` and `scale`
-arguments.
-
-The `event_shape` is given by the last dimension of `loc` or the last
-dimension of the matrix implied by `scale`.
-
-Recall that `covariance = scale @ scale.T`. A (non-batch) `scale` matrix is:
-
-```none
-scale = diag(scale_diag + scale_identity_multiplier * ones(k)) +
- scale_perturb_factor @ diag(scale_perturb_diag) @ scale_perturb_factor.T
-```
-
-where:
-
-* `scale_diag.shape = [k]`,
-* `scale_identity_multiplier.shape = []`,
-* `scale_perturb_factor.shape = [k, r]`, typically `k >> r`, and,
-* `scale_perturb_diag.shape = [r]`.
-
-Additional leading dimensions (if any) will index batches.
-
-If both `scale_diag` and `scale_identity_multiplier` are `None`, then
-`scale` is the Identity matrix.
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating-point `Tensor`. If this is set to `None`, `loc` is
- implicitly `0`. When specified, may have shape `[B1, ..., Bb, k]` where
- `b >= 0` and `k` is the event size.
-* <b>`scale_diag`</b>: Non-zero, floating-point `Tensor` representing a diagonal
- matrix added to `scale`. May have shape `[B1, ..., Bb, k]`, `b >= 0`,
- and characterizes `b`-batches of `k x k` diagonal matrices added to
- `scale`. When both `scale_identity_multiplier` and `scale_diag` are
- `None` then `scale` is the `Identity`.
-* <b>`scale_identity_multiplier`</b>: Non-zero, floating-point `Tensor` representing
- a scaled-identity-matrix added to `scale`. May have shape
- `[B1, ..., Bb]`, `b >= 0`, and characterizes `b`-batches of scaled
- `k x k` identity matrices added to `scale`. When both
- `scale_identity_multiplier` and `scale_diag` are `None` then `scale` is
- the `Identity`.
-* <b>`scale_perturb_factor`</b>: Floating-point `Tensor` representing a rank-`r`
- perturbation added to `scale`. May have shape `[B1, ..., Bb, k, r]`,
- `b >= 0`, and characterizes `b`-batches of rank-`r` updates to `scale`.
- When `None`, no rank-`r` update is added to `scale`.
-* <b>`scale_perturb_diag`</b>: Floating-point `Tensor` representing a diagonal matrix
- inside the rank-`r` perturbation added to `scale`. May have shape
- `[B1, ..., Bb, r]`, `b >= 0`, and characterizes `b`-batches of `r x r`
- diagonal matrices inside the perturbation added to `scale`. When
- `None`, an identity matrix is used inside the perturbation. Can only be
- specified if `scale_perturb_factor` is also specified.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if at most `scale_identity_multiplier` is specified.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.allow_nan_stats` {#MultivariateNormalDiagPlusLowRank.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined; e.g., if a distribution's pdf does not achieve a maximum within
-its support, the mode is undefined. If the mean is undefined, then by
-definition the variance is undefined. E.g., the mean of Student's T for
-`df = 1` is undefined (there is no clear way to say it is either + or -
-infinity), so the variance `E[(X - mean)**2]` is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.batch_shape` {#MultivariateNormalDiagPlusLowRank.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.batch_shape_tensor(name='batch_shape_tensor')` {#MultivariateNormalDiagPlusLowRank.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.bijector` {#MultivariateNormalDiagPlusLowRank.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.cdf(value, name='cdf')` {#MultivariateNormalDiagPlusLowRank.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.copy(**override_parameters_kwargs)` {#MultivariateNormalDiagPlusLowRank.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.covariance(name='covariance')` {#MultivariateNormalDiagPlusLowRank.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.det_covariance(name='det_covariance')` {#MultivariateNormalDiagPlusLowRank.det_covariance}
-
-Determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.distribution` {#MultivariateNormalDiagPlusLowRank.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.dtype` {#MultivariateNormalDiagPlusLowRank.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.entropy(name='entropy')` {#MultivariateNormalDiagPlusLowRank.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.event_shape` {#MultivariateNormalDiagPlusLowRank.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.event_shape_tensor(name='event_shape_tensor')` {#MultivariateNormalDiagPlusLowRank.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.is_continuous` {#MultivariateNormalDiagPlusLowRank.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.is_scalar_batch(name='is_scalar_batch')` {#MultivariateNormalDiagPlusLowRank.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.is_scalar_event(name='is_scalar_event')` {#MultivariateNormalDiagPlusLowRank.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.loc` {#MultivariateNormalDiagPlusLowRank.loc}
-
-The `loc` `Tensor` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.log_cdf(value, name='log_cdf')` {#MultivariateNormalDiagPlusLowRank.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.log_det_covariance(name='log_det_covariance')` {#MultivariateNormalDiagPlusLowRank.log_det_covariance}
-
-Log of determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.log_prob(value, name='log_prob')` {#MultivariateNormalDiagPlusLowRank.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if it is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
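-
-Concretely, for an `mvn` with `batch_shape = [2]` and `event_shape = [3]`
-as in the class examples above (a sketch):
-
-```python
-mvn.log_prob(tf.zeros([2, 3]))     # one event per batch member; shape [2]
-mvn.log_prob(tf.zeros([7, 2, 3]))  # seven draws per member; shape [7, 2]
-```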
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.log_survival_function(value, name='log_survival_function')` {#MultivariateNormalDiagPlusLowRank.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.mean(name='mean')` {#MultivariateNormalDiagPlusLowRank.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.mode(name='mode')` {#MultivariateNormalDiagPlusLowRank.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.name` {#MultivariateNormalDiagPlusLowRank.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#MultivariateNormalDiagPlusLowRank.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.param_static_shapes(cls, sample_shape)` {#MultivariateNormalDiagPlusLowRank.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.parameters` {#MultivariateNormalDiagPlusLowRank.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.prob(value, name='prob')` {#MultivariateNormalDiagPlusLowRank.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if it is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.reparameterization_type` {#MultivariateNormalDiagPlusLowRank.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.sample(sample_shape=(), seed=None, name='sample')` {#MultivariateNormalDiagPlusLowRank.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.scale` {#MultivariateNormalDiagPlusLowRank.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.stddev(name='stddev')` {#MultivariateNormalDiagPlusLowRank.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.survival_function(value, name='survival_function')` {#MultivariateNormalDiagPlusLowRank.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.validate_args` {#MultivariateNormalDiagPlusLowRank.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalDiagPlusLowRank.variance(name='variance')` {#MultivariateNormalDiagPlusLowRank.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.Normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.Normal.md
deleted file mode 100644
index 5454ae907f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.Normal.md
+++ /dev/null
@@ -1,639 +0,0 @@
-The Normal distribution with location `loc` and `scale` parameters.
-
-#### Mathematical details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; mu, sigma) = exp(-0.5 (x - mu)**2 / sigma**2) / Z
-Z = (2 pi sigma**2)**0.5
-```
-
-where `loc = mu` is the mean, `scale = sigma` is the std. deviation, and `Z`
-is the normalization constant.
-
-The Normal distribution is a member of the [location-scale family](
-https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ Normal(loc=0, scale=1)
-Y = loc + scale * X
-```
-
-#### Examples
-
-Examples of initialization of one or a batch of distributions.
-
-```python
-# Define a single scalar Normal distribution.
-dist = tf.contrib.distributions.Normal(loc=0., scale=3.)
-
-# Evaluate the cdf at 1, returning a scalar.
-dist.cdf(1.)
-
-# Define a batch of two scalar valued Normals.
-# The first has mean 1 and standard deviation 11, the second 2 and 22.
-dist = tf.contrib.distributions.Normal(loc=[1, 2.], scale=[11, 22.])
-
-# Evaluate the pdf of the first distribution on 0, and the second on 1.5,
-# returning a length two tensor.
-dist.prob([0, 1.5])
-
-# Get 3 samples, returning a 3 x 2 tensor.
-dist.sample([3])
-```
-
-Arguments are broadcast when possible.
-
-```python
-# Define a batch of two scalar valued Normals.
-# Both have mean 1, but different standard deviations.
-dist = tf.contrib.distributions.Normal(loc=1., scale=[11, 22.])
-
-# Evaluate the pdf of both distributions on the same point, 3.0,
-# returning a length 2 tensor.
-dist.prob(3.0)
-```
-- - -
-
-#### `tf.contrib.distributions.Normal.__init__(loc, scale, validate_args=False, allow_nan_stats=True, name='Normal')` {#Normal.__init__}
-
-Construct Normal distributions with mean and stddev `loc` and `scale`.
-
-The parameters `loc` and `scale` must be shaped in a way that supports
-broadcasting (e.g. `loc + scale` is a valid operation).
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating point tensor; the means of the distribution(s).
-* <b>`scale`</b>: Floating point tensor; the stddevs of the distribution(s).
- Must contain only positive values.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `loc` and `scale` have different `dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.allow_nan_stats` {#Normal.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined; e.g., if a distribution's pdf does not achieve a maximum within
-its support, the mode is undefined. If the mean is undefined, then by
-definition the variance is undefined. E.g., the mean of Student's T for
-`df = 1` is undefined (there is no clear way to say it is either + or -
-infinity), so the variance `E[(X - mean)**2]` is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.batch_shape` {#Normal.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
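-
-For example, a batch of three scalar Normals (a sketch):
-
-```python
-dist = tf.contrib.distributions.Normal(loc=[1., 2., 3.], scale=1.)
-dist.batch_shape  # TensorShape([3]): three independent parameterizations
-dist.event_shape  # TensorShape([]): each single sample is a scalar
-```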
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.batch_shape_tensor(name='batch_shape_tensor')` {#Normal.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.cdf(value, name='cdf')` {#Normal.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.copy(**override_parameters_kwargs)` {#Normal.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.covariance(name='covariance')` {#Normal.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.dtype` {#Normal.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.entropy(name='entropy')` {#Normal.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.event_shape` {#Normal.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.event_shape_tensor(name='event_shape_tensor')` {#Normal.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.is_continuous` {#Normal.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.is_scalar_batch(name='is_scalar_batch')` {#Normal.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.is_scalar_event(name='is_scalar_event')` {#Normal.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.loc` {#Normal.loc}
-
-Distribution parameter for the mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.log_cdf(value, name='log_cdf')` {#Normal.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.log_prob(value, name='log_prob')` {#Normal.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.log_survival_function(value, name='log_survival_function')` {#Normal.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
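-
-The difference is easy to see far in the tail; a sketch (in float32,
-`1 - cdf(x)` underflows to `0` long before the log-survival value does):
-
-```python
-dist = tf.contrib.distributions.Normal(loc=0., scale=1.)
-finite = dist.log_survival_function(10.)  # large negative, but finite
-naive = tf.log(1. - dist.cdf(10.))        # -inf: 1 - cdf(10.) rounds to 0
-```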
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.mean(name='mean')` {#Normal.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.mode(name='mode')` {#Normal.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.name` {#Normal.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Normal.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
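-
-For a scalar distribution such as `Normal`, each parameter simply takes
-the requested sample shape; a sketch:
-
-```python
-shapes = tf.contrib.distributions.Normal.param_shapes([100, 2])
-# ==> {'loc': <[100, 2] Tensor>, 'scale': <[100, 2] Tensor>}
-loc = tf.zeros(shapes['loc'])     # build parameters with those shapes
-scale = tf.ones(shapes['scale'])
-dist = tf.contrib.distributions.Normal(loc=loc, scale=scale)
-# dist.sample() now draws a [100, 2] batch of scalars.
-```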
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.param_static_shapes(cls, sample_shape)` {#Normal.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.parameters` {#Normal.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.prob(value, name='prob')` {#Normal.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.reparameterization_type` {#Normal.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
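-
-A common use is to check whether gradients can flow through samples; a
-sketch:
-
-```python
-ds = tf.contrib.distributions
-
-dist = ds.Normal(loc=0., scale=1.)
-if dist.reparameterization_type == ds.FULLY_REPARAMETERIZED:
-  # Pathwise derivatives of samples w.r.t. loc/scale are well-defined,
-  # so dist.sample() can sit inside a differentiable loss.
-  loss = tf.reduce_mean(dist.sample([10]) ** 2)
-```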
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.sample(sample_shape=(), seed=None, name='sample')` {#Normal.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
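-
-The returned shape composes as `sample_shape + batch_shape + event_shape`;
-a sketch with a two-member batch:
-
-```python
-dist = tf.contrib.distributions.Normal(loc=[1., 2.], scale=[0.5, 1.])
-s = dist.sample([4, 5])  # shape: [4, 5, 2] = sample_shape + batch_shape
-single = dist.sample()   # shape: [2], one draw per batch member
-```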
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.scale` {#Normal.scale}
-
-Distribution parameter for standard deviation.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.stddev(name='stddev')` {#Normal.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.survival_function(value, name='survival_function')` {#Normal.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.validate_args` {#Normal.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Normal.variance(name='variance')` {#Normal.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.RelaxedBernoulli.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.RelaxedBernoulli.md
deleted file mode 100644
index f7af72d0f2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.RelaxedBernoulli.md
+++ /dev/null
@@ -1,706 +0,0 @@
-RelaxedBernoulli distribution with temperature and logits parameters.
-
-The RelaxedBernoulli is a distribution over the unit interval (0,1), which
-continuously approximates a Bernoulli. The degree of approximation is
-controlled by a temperature: as the temperature goes to 0 the
-RelaxedBernoulli becomes discrete with a distribution described by the
-`logits` or `probs` parameters; as the temperature goes to infinity the
-RelaxedBernoulli becomes the constant distribution that is identically 0.5.
-
-The RelaxedBernoulli distribution is a reparameterized continuous
-distribution that is the binary special case of the RelaxedOneHotCategorical
-distribution (Maddison et al., 2016; Jang et al., 2016). For details on the
-binary special case see the appendix of Maddison et al. (2016) where it is
-referred to as BinConcrete. If you use this distribution, please cite both
-papers.
-
-Some care needs to be taken for loss functions that depend on the
-log-probability of RelaxedBernoullis, because computing log-probabilities of
-the RelaxedBernoulli can suffer from underflow issues. In many cases such
-loss functions are invariant under invertible transformations of the random
-variables. The KL divergence, found in the variational autoencoder loss, is
-an example. Because RelaxedBernoullis are sampled by passing a Logistic
-random variable through a `tf.sigmoid` op, one solution is to treat the
-Logistic as the random variable and `tf.sigmoid` as downstream. The KL
-divergence of two Logistics, each always followed by a `tf.sigmoid` op, is
-equivalent to the KL divergence of the corresponding RelaxedBernoullis. See
-Maddison et al. (2016), where this distribution is called BinConcrete, for
-more details.
-
-An alternative approach is to evaluate Bernoulli log probability or KL
-directly on relaxed samples, as done in Jang et al., 2016. In this case,
-guarantees on the loss are usually violated. For instance, using a Bernoulli
-KL in a relaxed ELBO is no longer a lower bound on the log marginal
-probability of the observation. Thus care and early stopping are important.
-
-#### Examples
-
-Creates three continuous distributions, which approximate 3 Bernoullis with
-probabilities (0.1, 0.5, 0.4). Samples from these distributions will be in
-the unit interval (0,1).
-
-```python
-temperature = 0.5
-p = [0.1, 0.5, 0.4]
-dist = RelaxedBernoulli(temperature, probs=p)
-```
-
-Creates three continuous distributions, which approximate 3 Bernoullis with
-logits (-2, 2, 0). Samples from these distributions will be in
-the unit interval (0,1).
-
-```python
-temperature = 0.5
-logits = [-2., 2., 0.]
-dist = RelaxedBernoulli(temperature, logits=logits)
-```
-
-Creates three continuous distributions, whose sigmoids approximate 3 Bernoullis
-with logits (-2, 2, 0).
-
-```python
-temperature = 0.5
-logits = tf.constant([-2., 2., 0.])  # float tensor, so the division works
-dist = Logistic(logits / temperature, 1. / temperature)
-samples = dist.sample()
-sigmoid_samples = tf.sigmoid(samples)
-# sigmoid_samples has the same distribution as samples from
-# RelaxedBernoulli(temperature, logits=logits)
-```
-
-Creates three continuous distributions, which approximate 3 Bernoullis with
-logits (-2, 2, 0). Samples from these distributions will be in
-the unit interval (0,1). Because the temperature is very low, samples from
-these distributions are almost discrete, usually taking values very close to 0
-or 1.
-
-```python
-temperature = 1e-5
-logits = [-2., 2., 0.]
-dist = RelaxedBernoulli(temperature, logits=logits)
-```
-
-Creates three continuous distributions, which approximate 3 Bernoullis with
-logits (-2, 2, 0). Samples from these distributions will be in
-the unit interval (0,1). Because the temperature is very high, samples from
-these distributions are usually close to the (0.5, 0.5, 0.5) vector.
-
-```python
-temperature = 100
-logits = [-2., 2., 0.]
-dist = RelaxedBernoulli(temperature, logits=logits)
-```
-
-Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution:
-A Continuous Relaxation of Discrete Random Variables. 2016.
-
-Eric Jang, Shixiang Gu, and Ben Poole. Categorical Reparameterization with
-Gumbel-Softmax. 2016.
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.__init__(temperature, logits=None, probs=None, validate_args=False, allow_nan_stats=True, name='RelaxedBernoulli')` {#RelaxedBernoulli.__init__}
-
-Construct RelaxedBernoulli distributions.
-
-##### Args:
-
-
-* <b>`temperature`</b>: A 0-D `Tensor`, representing the temperature
- of a set of RelaxedBernoulli distributions. The temperature should be
- positive.
-* <b>`logits`</b>: An N-D `Tensor` representing the log-odds
- of a positive event. Each entry in the `Tensor` parametrizes
- an independent RelaxedBernoulli distribution where the probability of an
- event is sigmoid(logits). Only one of `logits` or `probs` should be
- passed in.
-* <b>`probs`</b>: An N-D `Tensor` representing the probability of a positive event.
- Each entry in the `Tensor` parameterizes an independent Bernoulli
- distribution. Only one of `logits` or `probs` should be passed in.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `probs` and `logits` are passed, or if neither.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.allow_nan_stats` {#RelaxedBernoulli.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.batch_shape` {#RelaxedBernoulli.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.batch_shape_tensor(name='batch_shape_tensor')` {#RelaxedBernoulli.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.bijector` {#RelaxedBernoulli.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.cdf(value, name='cdf')` {#RelaxedBernoulli.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.copy(**override_parameters_kwargs)` {#RelaxedBernoulli.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
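-For example, a copy that overrides only the temperature might look like the
-following sketch (the variable names here are illustrative):
-
-```python
-dist = RelaxedBernoulli(temperature=0.5, logits=[-2., 2., 0.])
-# Reuses the original logits; only the temperature is overridden.
-sharper = dist.copy(temperature=1e-2)
-```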
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.covariance(name='covariance')` {#RelaxedBernoulli.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.distribution` {#RelaxedBernoulli.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.dtype` {#RelaxedBernoulli.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.entropy(name='entropy')` {#RelaxedBernoulli.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.event_shape` {#RelaxedBernoulli.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.event_shape_tensor(name='event_shape_tensor')` {#RelaxedBernoulli.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.is_continuous` {#RelaxedBernoulli.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.is_scalar_batch(name='is_scalar_batch')` {#RelaxedBernoulli.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.is_scalar_event(name='is_scalar_event')` {#RelaxedBernoulli.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.log_cdf(value, name='log_cdf')` {#RelaxedBernoulli.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the log cumulative distribution function is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.log_prob(value, name='log_prob')` {#RelaxedBernoulli.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.log_survival_function(value, name='log_survival_function')` {#RelaxedBernoulli.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.logits` {#RelaxedBernoulli.logits}
-
-Log-odds of `1`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.mean(name='mean')` {#RelaxedBernoulli.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.mode(name='mode')` {#RelaxedBernoulli.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.name` {#RelaxedBernoulli.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#RelaxedBernoulli.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.param_static_shapes(cls, sample_shape)` {#RelaxedBernoulli.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.parameters` {#RelaxedBernoulli.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.prob(value, name='prob')` {#RelaxedBernoulli.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.probs` {#RelaxedBernoulli.probs}
-
-Probability of `1`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.reparameterization_type` {#RelaxedBernoulli.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.sample(sample_shape=(), seed=None, name='sample')` {#RelaxedBernoulli.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
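-As a minimal sketch of the shape semantics (using the three-logit batch from
-the class examples above):
-
-```python
-dist = RelaxedBernoulli(temperature=0.5, logits=[-2., 2., 0.])
-x = dist.sample()     # shape [3]: one draw per batch member.
-y = dist.sample([4])  # shape [4, 3]: sample_shape is prepended.
-```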
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.stddev(name='stddev')` {#RelaxedBernoulli.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.survival_function(value, name='survival_function')` {#RelaxedBernoulli.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.temperature` {#RelaxedBernoulli.temperature}
-
-Distribution parameter for the temperature.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.validate_args` {#RelaxedBernoulli.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.RelaxedBernoulli.variance(name='variance')` {#RelaxedBernoulli.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.bijector.Inline.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.bijector.Inline.md
deleted file mode 100644
index c941b5ef65..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.bijector.Inline.md
+++ /dev/null
@@ -1,308 +0,0 @@
-Bijector constructed from custom callables.
-
-Example Use:
-
-```python
-exp = Inline(
- forward_fn=tf.exp,
- inverse_fn=tf.log,
- inverse_log_det_jacobian_fn=(
- lambda y: -tf.reduce_sum(tf.log(y), axis=-1)),
- name="exp")
-```
-
-The above example is equivalent to the `Bijector` `Exp(event_ndims=1)`.
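-
-A minimal sketch of using the `exp` instance above once constructed:
-
-```python
-x = tf.constant([[1., 2.]])
-y = exp.forward(x)                      # tf.exp(x)
-x_back = exp.inverse(y)                 # tf.log(y); recovers x
-ildj = exp.inverse_log_det_jacobian(y)  # -reduce_sum(log(y), axis=-1)
-```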
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.__init__(forward_fn=None, inverse_fn=None, inverse_log_det_jacobian_fn=None, forward_log_det_jacobian_fn=None, forward_event_shape_fn=None, forward_event_shape_tensor_fn=None, inverse_event_shape_fn=None, inverse_event_shape_tensor_fn=None, is_constant_jacobian=False, validate_args=False, name='inline')` {#Inline.__init__}
-
-Creates a `Bijector` from callables.
-
-##### Args:
-
-
-* <b>`forward_fn`</b>: Python callable implementing the forward transformation.
-* <b>`inverse_fn`</b>: Python callable implementing the inverse transformation.
-* <b>`inverse_log_det_jacobian_fn`</b>: Python callable implementing the
- log o det o jacobian of the inverse transformation.
-* <b>`forward_log_det_jacobian_fn`</b>: Python callable implementing the
- log o det o jacobian of the forward transformation.
-* <b>`forward_event_shape_fn`</b>: Python callable implementing non-identical
- static event shape changes. Default: shape is assumed unchanged.
-* <b>`forward_event_shape_tensor_fn`</b>: Python callable implementing non-identical
- event shape changes. Default: shape is assumed unchanged.
-* <b>`inverse_event_shape_fn`</b>: Python callable implementing non-identical
- static event shape changes. Default: shape is assumed unchanged.
-* <b>`inverse_event_shape_tensor_fn`</b>: Python callable implementing non-identical
- event shape changes. Default: shape is assumed unchanged.
-* <b>`is_constant_jacobian`</b>: Python `bool` indicating that the Jacobian is
- constant for all input arguments.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str`, name given to ops managed by this object.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.dtype` {#Inline.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.event_ndims` {#Inline.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.forward(x, name='forward')` {#Inline.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.forward_event_shape(input_shape)` {#Inline.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#Inline.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#Inline.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.graph_parents` {#Inline.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse(y, name='inverse')` {#Inline.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#Inline.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse_event_shape(output_shape)` {#Inline.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#Inline.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#Inline.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.is_constant_jacobian` {#Inline.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.name` {#Inline.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.Inline.validate_args` {#Inline.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.bijector.PowerTransform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.bijector.PowerTransform.md
deleted file mode 100644
index d95946499f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.bijector.PowerTransform.md
+++ /dev/null
@@ -1,301 +0,0 @@
-Compute `Y = g(X) = (1 + X * c)**(1 / c), X >= -1 / c`.
-
-The [power transform](https://en.wikipedia.org/wiki/Power_transform) maps
-inputs from `[0, inf]` to `[-1/c, inf]`; that mapping is the `inverse` of
-this bijector, whose `forward` maps `[-1/c, inf]` back to `[0, inf]`.
-
-This bijector is equivalent to the `Exp` bijector when `c=0`.
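-
-A minimal numeric sketch (the commented values follow from the formula above
-with `c = 2`):
-
-```python
-bijector = tf.contrib.distributions.bijector.PowerTransform(power=2.)
-y = bijector.forward([0., 1.5])  # (1 + 2*x)**0.5 -> [1., 2.]
-x = bijector.inverse([1., 2.])   # (y**2 - 1) / 2  -> [0., 1.5]
-```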
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.__init__(power=0.0, event_ndims=0, validate_args=False, name='power_transform')` {#PowerTransform.__init__}
-
-Instantiates the `PowerTransform` bijector.
-
-##### Args:
-
-
-* <b>`power`</b>: Python `float` scalar indicating the transform power, i.e.,
- `Y = g(X) = (1 + X * c)**(1 / c)` where `c` is the `power`.
-* <b>`event_ndims`</b>: Python scalar indicating the number of dimensions associated
- with a particular draw from the distribution.
-* <b>`validate_args`</b>: Python `bool` indicating whether arguments should be
- checked for correctness.
-* <b>`name`</b>: Python `str` name given to ops managed by this object.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `power < 0` or is not known statically.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.dtype` {#PowerTransform.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.event_ndims` {#PowerTransform.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.forward(x, name='forward')` {#PowerTransform.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.forward_event_shape(input_shape)` {#PowerTransform.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#PowerTransform.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#PowerTransform.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.graph_parents` {#PowerTransform.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse(y, name='inverse')` {#PowerTransform.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#PowerTransform.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse_event_shape(output_shape)` {#PowerTransform.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#PowerTransform.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#PowerTransform.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.is_constant_jacobian` {#PowerTransform.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.name` {#PowerTransform.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.power` {#PowerTransform.power}
-
-The `c` in: `Y = g(X) = (1 + X * c)**(1 / c)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.PowerTransform.validate_args` {#PowerTransform.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.kl.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.kl.md
deleted file mode 100644
index 59a41b2dd4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.distributions.kl.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None)` {#kl}
-
-Get the KL-divergence KL(dist_a || dist_b).
-
-If there is no KL method registered specifically for `type(dist_a)` and
-`type(dist_b)`, then the class hierarchies of these types are searched.
-
-If one KL method is registered between any pairs of classes in these two
-parent hierarchies, it is used.
-
-If more than one such registered method exists, the method whose registered
-classes have the shortest sum of MRO paths to the input types is used.
-
-If more than one such shortest path exists, the first method
-identified in the search is used (favoring a shorter MRO distance to
-`type(dist_a)`).
-
-##### Args:
-
-
-* <b>`dist_a`</b>: The first distribution.
-* <b>`dist_b`</b>: The second distribution.
-* <b>`allow_nan`</b>: If `False` (default), a runtime error is raised
- if the KL returns NaN values for any batch entry of the given
- distributions. If `True`, the KL may return a NaN for the given entry.
-* <b>`name`</b>: (optional) Name scope to use for created operations.
-
-##### Returns:
-
- A Tensor with the batchwise KL-divergence between dist_a and dist_b.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If no KL method is defined for distribution types
- of dist_a and dist_b.
-
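-A minimal sketch of typical use (assuming two `Bernoulli` distributions that
-accept a `probs` argument, a pair for which a KL method is registered):
-
-```python
-p = tf.contrib.distributions.Bernoulli(probs=0.3)
-q = tf.contrib.distributions.Bernoulli(probs=0.7)
-kl_pq = tf.contrib.distributions.kl(p, q)  # scalar Tensor: KL(p || q)
-```
-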
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.assert_or_get_global_step.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.assert_or_get_global_step.md
deleted file mode 100644
index 0e67be8589..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.assert_or_get_global_step.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.framework.assert_or_get_global_step(graph=None, global_step_tensor=None)` {#assert_or_get_global_step}
-
-Verifies that a global step tensor is valid or gets one if None is given.
-
-If `global_step_tensor` is not None, check that it is a valid global step
-tensor (using `assert_global_step`). Otherwise find a global step tensor using
-`get_global_step` and return it.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph to find the global step tensor for.
-* <b>`global_step_tensor`</b>: The tensor to check for suitability as a global step.
- If None is given (the default), find a global step tensor.
-
-##### Returns:
-
- A tensor suitable as a global step, or `None` if none was provided and none
- was found.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.assign_from_values_fn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.assign_from_values_fn.md
deleted file mode 100644
index 9a5a82c8c4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.assign_from_values_fn.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.contrib.framework.assign_from_values_fn(var_names_to_values)` {#assign_from_values_fn}
-
-Returns a function that assigns specific variables from the given values.
-
-This function provides a mechanism for performing assignment of variables
-to values in a way that does not fill the graph with large assignment values.
-
-##### Args:
-
-
-* <b>`var_names_to_values`</b>: A map from variable names to values.
-
-##### Returns:
-
- A function that takes a single argument, a `tf.Session`, that applies the
- assignment operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any of the given variable names were not found.
-
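-A minimal sketch of typical use (assuming a variable named `'weights'` with
-shape `[784, 10]` already exists in the graph):
-
-```python
-import numpy as np
-
-init_fn = tf.contrib.framework.assign_from_values_fn(
-    {'weights': np.zeros([784, 10], dtype=np.float32)})
-with tf.Session() as sess:
-  init_fn(sess)  # Runs the assignment ops for the listed variables.
-```
-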
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.filter_variables.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.filter_variables.md
deleted file mode 100644
index 1574edb406..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.filter_variables.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.framework.filter_variables(var_list, include_patterns=None, exclude_patterns=None, reg_search=True)` {#filter_variables}
-
-Filter a list of variables using regular expressions.
-
-First includes variables according to the list of include_patterns.
-Afterwards, eliminates variables according to the list of exclude_patterns.
-
-For example, one can obtain a list of variables with the weights of all
-convolutional layers (depending on the network definition) by:
-
-```python
-variables = tf.contrib.framework.get_model_variables()
-conv_weight_variables = tf.contrib.framework.filter_variables(
- variables,
- include_patterns=['Conv'],
- exclude_patterns=['biases', 'Logits'])
-```
-
-##### Args:
-
-
-* <b>`var_list`</b>: list of variables.
-* <b>`include_patterns`</b>: list of regular expressions to include. Defaults to None,
-  which means all variables pass the include filter.
-  A variable is included if it matches any of the include_patterns.
-* <b>`exclude_patterns`</b>: list of regular expressions to exclude. Defaults to None,
-  which means no variables are excluded.
-  A variable is excluded if it matches any of the exclude_patterns.
-* <b>`reg_search`</b>: boolean. If True (default), performs re.search to find matches
- (i.e. pattern can match any substring of the variable name). If False,
- performs re.match (i.e. regexp should match from the beginning of the
- variable name).
-
-##### Returns:
-
- filtered list of variables.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.get_variables_by_suffix.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.get_variables_by_suffix.md
deleted file mode 100644
index a25cf9006e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.get_variables_by_suffix.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.framework.get_variables_by_suffix(suffix, scope=None)` {#get_variables_by_suffix}
-
-Gets the list of variables that end with the given suffix.
-
-##### Args:
-
-
-* <b>`suffix`</b>: suffix for filtering the variables to return.
-* <b>`scope`</b>: an optional scope for filtering the variables to return.
-
-##### Returns:
-
-  a copied list of variables that end with the given suffix.
-
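-A minimal sketch:
-
-```python
-weight_vars = tf.contrib.framework.get_variables_by_suffix('weights')
-```
-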
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.has_arg_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.has_arg_scope.md
deleted file mode 100644
index 92f4a772ed..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.has_arg_scope.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.contrib.framework.has_arg_scope(func)` {#has_arg_scope}
-
-Checks whether a func has been decorated with @add_arg_scope or not.
-
-##### Args:
-
-
-* <b>`func`</b>: function to check.
-
-##### Returns:
-
- a boolean.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.with_shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.with_shape.md
deleted file mode 100644
index 460c9c522c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.framework.with_shape.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.contrib.framework.with_shape(expected_shape, tensor)` {#with_shape}
-
-Asserts that a tensor has the expected shape.
-
-If the tensor's shape and `expected_shape` are fully defined, assert that they
-match. Otherwise, add an assert op that validates the shape when the tensor is
-evaluated, and set the static shape on the tensor.
-
-##### Args:
-
-
-* <b>`expected_shape`</b>: Expected shape to assert, as a 1D array of ints, or tensor
- of same.
-* <b>`tensor`</b>: Tensor whose shape we're validating.
-
-##### Returns:
-
- tensor, perhaps with a dependent assert operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if tensor has an invalid shape.
-
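-A minimal sketch (assuming `logits` is a tensor whose shape is only partially
-known at graph-construction time):
-
-```python
-# Sets the static shape and adds a runtime assert if needed.
-logits = tf.contrib.framework.with_shape([64, 10], logits)
-```
-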
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape.md
deleted file mode 100644
index b504100d0c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape(dtype, shape=None, scope=None)` {#make_placeholder_from_dtype_and_shape}
-
-Create a tf.placeholder for the Graph Editor.
-
-Note that the correct graph scope must be set by the calling function.
-The placeholder is named using the function placeholder_name (with no
-tensor argument).
-
-##### Args:
-
-
-* <b>`dtype`</b>: the tensor type.
-* <b>`shape`</b>: the tensor shape (optional).
-* <b>`scope`</b>: absolute scope within which to create the placeholder. None
- means that the scope of t is preserved. "" means the root scope.
-
-##### Returns:
-
- A newly created tf.placeholder.
-
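-A minimal sketch:
-
-```python
-ph = tf.contrib.graph_editor.make_placeholder_from_dtype_and_shape(
-    dtype=tf.float32, shape=[None, 28, 28])
-```
-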
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.make_view.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.make_view.md
deleted file mode 100644
index b95f857600..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.make_view.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.contrib.graph_editor.make_view(*args, **kwargs)` {#make_view}
-
-Create a SubGraphView from selected operations and passthrough tensors.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not), 2) (arrays of)
-  `tf.Operation`, or 3) (arrays of) `tf.Tensor`. These objects are converted
-  into a list of operations and a list of candidates for passthrough tensors.
-* <b>`**kwargs`</b>: the `graph` keyword argument is used 1) to check that the ops
-  and tensors are from the correct graph and 2) for the regular expression query.
-
-##### Returns:
-
- A subgraph view.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Tensor`
- or an (array of) `tf.Operation` or a string or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected.
-
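-A minimal sketch (assuming `ops` is a list of `tf.Operation` from graph `g`):
-
-```python
-view = tf.contrib.graph_editor.make_view(ops, graph=g)
-```
-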
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.ph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.ph.md
deleted file mode 100644
index c765240585..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.ph.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.graph_editor.ph(dtype, shape=None, scope=None)` {#ph}
-
-Create a tf.placeholder for the Graph Editor.
-
-Note that the correct graph scope must be set by the calling function.
-The placeholder is named using the function placeholder_name (with no
-tensor argument).
-
-##### Args:
-
-
-* <b>`dtype`</b>: the tensor type.
-* <b>`shape`</b>: the tensor shape (optional).
-* <b>`scope`</b>: absolute scope within which to create the placeholder. None
- means that the scope of t is preserved. "" means the root scope.
-
-##### Returns:
-
- A newly created tf.placeholder.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.sgv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.sgv.md
deleted file mode 100644
index 80805e574f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.graph_editor.sgv.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.contrib.graph_editor.sgv(*args, **kwargs)` {#sgv}
-
-Create a SubGraphView from selected operations and passthrough tensors.
-
-##### Args:
-
-
-* <b>`*args`</b>: list of 1) regular expressions (compiled or not), 2) (arrays of)
-  `tf.Operation`, or 3) (arrays of) `tf.Tensor`. These objects are converted
-  into a list of operations and a list of candidates for passthrough tensors.
-* <b>`**kwargs`</b>: the `graph` keyword argument is used 1) to check that the ops
-  and tensors are from the correct graph and 2) for the regular expression query.
-
-##### Returns:
-
- A subgraph view.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if the optional keyword argument graph is not a `tf.Graph`
- or if an argument in args is not an (array of) `tf.Tensor`
- or an (array of) `tf.Operation` or a string or a regular expression.
-* <b>`ValueError`</b>: if one of the keyword arguments is unexpected.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.convolution2d_transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.convolution2d_transpose.md
deleted file mode 100644
index 5a2ea65784..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.convolution2d_transpose.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.contrib.layers.convolution2d_transpose(*args, **kwargs)` {#convolution2d_transpose}
-
-Adds a convolution2d_transpose with an optional batch normalization layer.
-
-The function creates a variable called `weights`, representing the
-kernel, that is convolved with the input. If `normalizer_fn` is `None`, a
-second variable called 'biases' is added to the result of the operation.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D `Tensor` of type `float` and shape
- `[batch, height, width, in_channels]` for `NHWC` data format or
- `[batch, in_channels, height, width]` for `NCHW` data format.
-* <b>`num_outputs`</b>: Integer, the number of output filters.
-* <b>`kernel_size`</b>: A list of length 2 holding the [kernel_height, kernel_width]
-  of the filters. Can be an int if both values are the same.
-* <b>`stride`</b>: A list of length 2: [stride_height, stride_width].
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: One of 'VALID' or 'SAME'.
-* <b>`data_format`</b>: A string. `NHWC` (default) and `NCHW` are supported.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
-  Default is None, meaning no normalizer function is used.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If None, biases are skipped.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
-  able to reuse the layer, `scope` must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collection per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: Whether or not the variables should be trainable.
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A tensor representing the output of the operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If 'kernel_size' is not a list of length 2.
-* <b>`ValueError`</b>: If `data_format` is neither `NHWC` nor `NCHW`.
-* <b>`ValueError`</b>: If `C` dimension of `inputs` is None.
-
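-A minimal sketch of typical use (assuming `inputs` is the NHWC tensor
-described above; the layer sizes are illustrative):
-
-```python
-# Upsamples a [batch, 16, 16, 64] feature map to [batch, 32, 32, 32].
-net = tf.contrib.layers.convolution2d_transpose(
-    inputs, num_outputs=32, kernel_size=[3, 3], stride=2, padding='SAME')
-```
-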
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.input_from_feature_columns.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.input_from_feature_columns.md
deleted file mode 100644
index e0d5391be9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.input_from_feature_columns.md
+++ /dev/null
@@ -1,58 +0,0 @@
-### `tf.contrib.layers.input_from_feature_columns(columns_to_tensors, feature_columns, weight_collections=None, trainable=True, scope=None)` {#input_from_feature_columns}
-
-A tf.contrib.layer style input layer builder based on FeatureColumns.
-
-Generally a single example in training data is described with feature columns.
-At the first layer of the model, this column-oriented data should be converted
-to a single tensor. Each feature column needs a different kind of operation
-during this conversion. For example, sparse features need totally different
-handling than continuous features.
-
-Example:
-
-```python
- # Building model for training
- columns_to_tensor = tf.parse_example(...)
- first_layer = input_from_feature_columns(
- columns_to_tensors=columns_to_tensor,
- feature_columns=feature_columns)
- second_layer = fully_connected(inputs=first_layer, ...)
- ...
-```
-
-where feature_columns can be defined as follows:
-
-```python
- sparse_feature = sparse_column_with_hash_bucket(
- column_name="sparse_col", ...)
- sparse_feature_emb = embedding_column(sparse_id_column=sparse_feature, ...)
- real_valued_feature = real_valued_column(...)
- real_valued_buckets = bucketized_column(
- source_column=real_valued_feature, ...)
-
- feature_columns=[sparse_feature_emb, real_valued_buckets]
-```
-
-##### Args:
-
-
-* <b>`columns_to_tensors`</b>: A mapping from feature columns to tensors. A 'string'
-  key means a base (untransformed) feature. The mapping can also have a
-  FeatureColumn as a key, meaning that FeatureColumn was already transformed
-  by the input pipeline. For example, `inflow` may have handled transformations.
-* <b>`feature_columns`</b>: A set containing all the feature columns. All items in the
-  set should be instances of classes derived from FeatureColumn.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A Tensor which can be consumed by hidden layers in the neural network.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if FeatureColumn cannot be consumed by a neural network.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.layer_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.layer_norm.md
deleted file mode 100644
index 277976dacf..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.layer_norm.md
+++ /dev/null
@@ -1,39 +0,0 @@
-### `tf.contrib.layers.layer_norm(*args, **kwargs)` {#layer_norm}
-
-Adds a Layer Normalization layer from https://arxiv.org/abs/1607.06450.
-
- "Layer Normalization"
-
- Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton
-
-Can be used as a normalizer function for conv2d and fully_connected.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A tensor with 2 or more dimensions. The normalization
- occurs over all but the first dimension.
-* <b>`center`</b>: If True, add offset of `beta` to normalized tensor. If False, `beta`
- is ignored.
-* <b>`scale`</b>: If True, multiply by `gamma`. If False, `gamma` is
- not used. When the next layer is linear (also e.g. `nn.relu`), this can be
- disabled since the scaling can be done by the next layer.
-* <b>`activation_fn`</b>: Activation function, default set to None to skip it and
- maintain a linear activation.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
-  able to reuse the layer, `scope` must be given.
-* <b>`variables_collections`</b>: Optional collections for the variables.
-* <b>`outputs_collections`</b>: Collections to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If rank or last dimension of `inputs` is undefined.
-
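-A minimal sketch of use as a normalizer for a fully connected layer (the
-layer width is illustrative; `inputs` is assumed to be a 2-D tensor):
-
-```python
-net = tf.contrib.layers.fully_connected(
-    inputs, 256, normalizer_fn=tf.contrib.layers.layer_norm)
-```
-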
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.sequence_input_from_feature_columns.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.sequence_input_from_feature_columns.md
deleted file mode 100644
index 937cc2db48..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.sequence_input_from_feature_columns.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.layers.sequence_input_from_feature_columns(*args, **kwargs)` {#sequence_input_from_feature_columns}
-
-Builds inputs for sequence models from `FeatureColumn`s. (experimental)
-
-THIS FUNCTION IS EXPERIMENTAL. It may change or be removed at any time, and without warning.
-
-
-See documentation for `input_from_feature_columns`. The following types of
-`FeatureColumn` are permitted in `feature_columns`: `_OneHotColumn`,
-`_EmbeddingColumn`, `_ScatteredEmbeddingColumn`, `_RealValuedColumn`,
-`_DataFrameColumn`. In addition, columns in `feature_columns` may not be
-constructed using any of the following: `ScatteredEmbeddingColumn`,
-`BucketizedColumn`, `CrossedColumn`.
-
-##### Args:
-
-
-* <b>`columns_to_tensors`</b>: A mapping from feature columns to tensors. A 'string'
-  key means a base (untransformed) feature. The mapping can also have a
-  FeatureColumn as a key, meaning that FeatureColumn was already transformed
-  by the input pipeline. For example, `inflow` may have handled transformations.
-* <b>`feature_columns`</b>: A set containing all the feature columns. All items in the
-  set should be instances of classes derived from FeatureColumn.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A Tensor which can be consumed by hidden layers in the neural network.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if FeatureColumn cannot be consumed by a neural network.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.sparse_column_with_hash_bucket.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.sparse_column_with_hash_bucket.md
deleted file mode 100644
index 0d00e31a68..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.layers.sparse_column_with_hash_bucket.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.contrib.layers.sparse_column_with_hash_bucket(column_name, hash_bucket_size, combiner='sum', dtype=tf.string)` {#sparse_column_with_hash_bucket}
-
-Creates a _SparseColumn with hashed bucket configuration.
-
-Use this when your sparse features are in string or integer format, but you
-don't have a vocab file that maps each value to an integer ID. Each value is
-mapped as `output_id = Hash(input_feature_string) % bucket_size`.
-
-##### Args:
-
-
-* <b>`column_name`</b>: A string defining sparse column name.
-* <b>`hash_bucket_size`</b>: An int that is > 1. The number of buckets.
-* <b>`combiner`</b>: A string specifying how to reduce if the sparse column is
- multivalent. Currently "mean", "sqrtn" and "sum" are supported, with "sum"
- the default. "sqrtn" often achieves good accuracy, in particular with
- bag-of-words columns.
- * "sum": do not normalize features in the column
- * "mean": do l1 normalization on features in the column
- * "sqrtn": do l2 normalization on features in the column
- For more information: `tf.embedding_lookup_sparse`.
-* <b>`dtype`</b>: The type of features. Only string and integer types are supported.
-
-##### Returns:
-
- A _SparseColumn with hashed bucket configuration
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if hash_bucket_size is less than 2.
-* <b>`ValueError`</b>: if dtype is neither string nor integer.
-
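-A minimal sketch (the column name and bucket count are illustrative):
-
-```python
-country = tf.contrib.layers.sparse_column_with_hash_bucket(
-    column_name='country', hash_bucket_size=1000)
-```
-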
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.InputFnOps.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.InputFnOps.md
deleted file mode 100644
index 4d65b070a1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.InputFnOps.md
+++ /dev/null
@@ -1,64 +0,0 @@
-A return type for an input_fn.
-
-This return type is currently only supported for serving input_fn.
-Training and eval input_fn should return a `(features, labels)` tuple.
-
-The expected return values are:
- features: A dict of string to `Tensor` or `SparseTensor`, specifying the
- features to be passed to the model.
- labels: A `Tensor`, `SparseTensor`, or a dict of string to `Tensor` or
- `SparseTensor`, specifying labels for training or eval. For serving, set
- `labels` to `None`.
- default_inputs: a dict of string to `Tensor` or `SparseTensor`, specifying
- the input placeholders (if any) that this input_fn expects to be fed.
- Typically, this is used by a serving input_fn, which expects to be fed
- serialized `tf.Example` protos.
-- - -
-
-#### `tf.contrib.learn.InputFnOps.__getnewargs__()` {#InputFnOps.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.__getstate__()` {#InputFnOps.__getstate__}
-
-Exclude the OrderedDict from pickling.
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.__new__(_cls, features, labels, default_inputs)` {#InputFnOps.__new__}
-
-Create a new instance of InputFnOps(features, labels, default_inputs).
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.__repr__()` {#InputFnOps.__repr__}
-
-Return a nicely formatted representation string.
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.default_inputs` {#InputFnOps.default_inputs}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.features` {#InputFnOps.features}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.contrib.learn.InputFnOps.labels` {#InputFnOps.labels}
-
-Alias for field number 1
-
-
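-As an illustration, a serving input_fn might package its tensors as follows
-(a sketch assuming the TF 1.x contrib API; the feature spec is hypothetical):
-
-```python
-import tensorflow as tf
-
-def serving_input_fn():
-  # Placeholder to be fed serialized tf.Example protos at serving time.
-  examples = tf.placeholder(dtype=tf.string, shape=[None], name="examples")
-  feature_spec = {"age": tf.FixedLenFeature([1], dtype=tf.float32)}
-  features = tf.parse_example(examples, feature_spec)
-  return tf.contrib.learn.InputFnOps(
-      features=features, labels=None, default_inputs={"examples": examples})
-```
-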
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.build_parsing_serving_input_fn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.build_parsing_serving_input_fn.md
deleted file mode 100644
index 41ee38cccc..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.build_parsing_serving_input_fn.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.learn.build_parsing_serving_input_fn(feature_spec, default_batch_size=None)` {#build_parsing_serving_input_fn}
-
-Build an input_fn appropriate for serving, expecting serialized tf.Examples to be fed.
-
-Creates an input_fn that expects a serialized tf.Example fed into a string
-placeholder. The function parses the tf.Example according to the provided
-feature_spec, and returns all parsed Tensors as features. This input_fn is
-for use at serving time, so the labels return value is always None.
-
-##### Args:
-
-
-* <b>`feature_spec`</b>: a dict of string to `VarLenFeature`/`FixedLenFeature`.
-* <b>`default_batch_size`</b>: the number of query examples expected per batch.
- Leave unset for variable batch size (recommended).
-
-##### Returns:
-
- An input_fn suitable for use in serving.
-
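-A minimal usage sketch (the feature spec is hypothetical):
-
-```python
-import tensorflow as tf
-
-feature_spec = {"age": tf.FixedLenFeature([1], dtype=tf.float32)}
-serving_input_fn = tf.contrib.learn.build_parsing_serving_input_fn(feature_spec)
-# Calling serving_input_fn() returns an InputFnOps whose labels are None and
-# whose default_inputs hold the string placeholder for tf.Example protos.
-```
-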
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.monitors.EveryN.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.monitors.EveryN.md
deleted file mode 100644
index 0caa4f902e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.learn.monitors.EveryN.md
+++ /dev/null
@@ -1,232 +0,0 @@
-Base class for monitors that execute callbacks every N steps.
-
-This class adds three new callbacks:
- - every_n_step_begin
- - every_n_step_end
- - every_n_post_step
-
-The callbacks are executed every n steps, or optionally every step for the
-first m steps, where m and n can both be user-specified.
-
-When extending this class, note that if you wish to use any of the
-`BaseMonitor` callbacks, you must call their respective super implementation:
-
- def step_begin(self, step):
- super(ExampleMonitor, self).step_begin(step)
- return []
-
-Failing to call the super implementation will cause unpredictable behavior.
-
-The `every_n_post_step()` callback is also called after the last step if it
-was not already called through the regular conditions. Note that
-`every_n_step_begin()` and `every_n_step_end()` do not receive that special
-treatment.
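-
-For example, a minimal subclass that prints a message on every n'th step might
-look like the following sketch (the class name and message are illustrative):
-
-```python
-import tensorflow as tf
-
-class PrintEveryN(tf.contrib.learn.monitors.EveryN):
-
-  def every_n_step_begin(self, step):
-    super(PrintEveryN, self).every_n_step_begin(step)
-    return []  # No extra tensors need to be evaluated.
-
-  def every_n_step_end(self, step, outputs):
-    super(PrintEveryN, self).every_n_step_end(step, outputs)
-    print("Reached step %d" % step)
-    return False  # Returning True would request early stopping.
-
-monitor = PrintEveryN(every_n_steps=100)
-```
-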
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.__init__(every_n_steps=100, first_n_steps=1)` {#EveryN.__init__}
-
-Initializes an `EveryN` monitor.
-
-##### Args:
-
-
-* <b>`every_n_steps`</b>: `int`, the number of steps to allow between callbacks.
-* <b>`first_n_steps`</b>: `int`, specifying the number of initial steps during
- which the callbacks will always be executed, regardless of the value
-    of `every_n_steps`. Note that this value is relative to the global step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.begin(max_steps=None)` {#EveryN.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.end(session=None)` {#EveryN.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.epoch_begin(epoch)` {#EveryN.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.epoch_end(epoch)` {#EveryN.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.every_n_post_step(step, session)` {#EveryN.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.every_n_step_begin(step)` {#EveryN.every_n_step_begin}
-
-Callback before every n'th step begins.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list` of tensors that will be evaluated at this step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.every_n_step_end(step, outputs)` {#EveryN.every_n_step_end}
-
-Callback after every n'th step finished.
-
-This callback provides access to the tensors/ops evaluated at this step,
-including the additional tensors for which evaluation was requested in
-`step_begin`.
-
-In addition, the callback has the opportunity to stop training by returning
-`True`. This is useful for early stopping, for example.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`outputs`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`. True if training should stop.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.post_step(step, session)` {#EveryN.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.run_on_all_workers` {#EveryN.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.set_estimator(estimator)` {#EveryN.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.step_begin(step)` {#EveryN.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.EveryN.step_end(step, output)` {#EveryN.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
-    the values resulting from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.legacy_seq2seq.model_with_buckets.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.legacy_seq2seq.model_with_buckets.md
deleted file mode 100644
index 5bd387a527..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.legacy_seq2seq.model_with_buckets.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.contrib.legacy_seq2seq.model_with_buckets(encoder_inputs, decoder_inputs, targets, weights, buckets, seq2seq, softmax_loss_function=None, per_example_loss=False, name=None)` {#model_with_buckets}
-
-Create a sequence-to-sequence model with support for bucketing.
-
-The seq2seq argument is a function that defines a sequence-to-sequence model,
-e.g., `seq2seq = lambda x, y: basic_rnn_seq2seq(x, y, core_rnn_cell.GRUCell(24))`.
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of Tensors to feed the encoder; first seq2seq input.
-* <b>`decoder_inputs`</b>: A list of Tensors to feed the decoder; second seq2seq input.
-* <b>`targets`</b>: A list of 1D batch-sized int32 Tensors (desired output sequence).
-* <b>`weights`</b>: List of 1D batch-sized float-Tensors to weight the targets.
-* <b>`buckets`</b>: A list of pairs of (input size, output size) for each bucket.
-* <b>`seq2seq`</b>: A sequence-to-sequence model function; it takes two inputs
-    that agree with encoder_inputs and decoder_inputs, and returns a pair
- consisting of outputs and states (as, e.g., basic_rnn_seq2seq).
-* <b>`softmax_loss_function`</b>: Function (inputs-batch, labels-batch) -> loss-batch
- to be used instead of the standard softmax (the default if this is None).
-* <b>`per_example_loss`</b>: Boolean. If set, the returned loss will be a batch-sized
- tensor of losses for each sequence in the batch. If unset, it will be
- a scalar with the averaged loss from all examples.
-* <b>`name`</b>: Optional name for this operation, defaults to "model_with_buckets".
-
-##### Returns:
-
- A tuple of the form (outputs, losses), where:
-
-* <b>`outputs`</b>: The outputs for each bucket. Its j'th element consists of a list
- of 2D Tensors. The shape of output tensors can be either
- [batch_size x output_size] or [batch_size x num_decoder_symbols]
- depending on the seq2seq model used.
-* <b>`losses`</b>: List of scalar Tensors, representing losses for each bucket, or,
- if per_example_loss is set, a list of 1D batch-sized float Tensors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the length of encoder_inputs, targets, or weights is
-    smaller than the largest (last) bucket.
-
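-A hedged construction sketch (the shapes, the GRU size, and the single bucket
-are illustrative; with the default loss, target ids must be smaller than the
-cell's output size):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib.legacy_seq2seq import basic_rnn_seq2seq, model_with_buckets
-
-buckets = [(5, 10)]  # One bucket: up to 5 encoder steps, 10 decoder steps.
-encoder_inputs = [tf.placeholder(tf.float32, [None, 8]) for _ in range(5)]
-decoder_inputs = [tf.placeholder(tf.float32, [None, 8]) for _ in range(10)]
-targets = [tf.placeholder(tf.int32, [None]) for _ in range(10)]
-weights = [tf.placeholder(tf.float32, [None]) for _ in range(10)]
-
-seq2seq_f = lambda x, y: basic_rnn_seq2seq(x, y, tf.contrib.rnn.GRUCell(24))
-outputs, losses = model_with_buckets(
-    encoder_inputs, decoder_inputs, targets, weights, buckets, seq2seq_f)
-```
-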
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.losses.softmax_cross_entropy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.losses.softmax_cross_entropy.md
deleted file mode 100644
index 2e4c434cd2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.losses.softmax_cross_entropy.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.losses.softmax_cross_entropy(*args, **kwargs)` {#softmax_cross_entropy}
-
-Creates a cross-entropy loss using tf.nn.softmax_cross_entropy_with_logits. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided,
-then the loss is simply scaled by the given value. If `weights` is a
-tensor of size [`batch_size`], then the loss weights apply to each
-corresponding sample.
-
-If `label_smoothing` is nonzero, smooth the labels towards 1/num_classes:
- new_onehot_labels = onehot_labels * (1 - label_smoothing)
- + label_smoothing / num_classes
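-
-For instance, with `num_classes = 4` and `label_smoothing = 0.1`, a one-hot
-row `[1, 0, 0, 0]` becomes `[0.925, 0.025, 0.025, 0.025]`.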
-
-##### Args:
-
-
-* <b>`logits`</b>: [batch_size, num_classes] logits outputs of the network.
-* <b>`onehot_labels`</b>: [batch_size, num_classes] one-hot-encoded labels.
-* <b>`weights`</b>: Coefficients for the loss. The tensor must be a scalar or a tensor
- of shape [batch_size].
-* <b>`label_smoothing`</b>: If greater than 0 then smooth the labels.
-* <b>`scope`</b>: the scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the mean loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `logits` doesn't match that of `onehot_labels`
- or if the shape of `weights` is invalid or if `weights` is None.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_mean_absolute_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_mean_absolute_error.md
deleted file mode 100644
index 7b1f257677..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_mean_absolute_error.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.contrib.metrics.streaming_mean_absolute_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_absolute_error}
-
-Computes the mean absolute error between the labels and predictions.
-
-The `streaming_mean_absolute_error` function creates two local variables,
-`total` and `count` that are used to compute the mean absolute error. This
-average is weighted by `weights`, and it is ultimately returned as
-`mean_absolute_error`: an idempotent operation that simply divides `total` by
-`count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`mean_absolute_error`. Internally, an `absolute_errors` operation computes the
-absolute value of the differences between `predictions` and `labels`. Then
-`update_op` increments `total` with the reduced sum of the product of
-`weights` and `absolute_errors`, and it increments `count` with the reduced
-sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary shape.
-* <b>`labels`</b>: A `Tensor` of the same shape as `predictions`.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that
- `mean_absolute_error` should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_absolute_error`</b>: A `Tensor` representing the current mean, the value of
- `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `mean_absolute_error`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
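-A minimal sketch of the two-op pattern (values are illustrative):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([2.0, 4.0, 6.0])
-labels = tf.constant([1.0, 3.0, 7.0])
-mae, update_op = tf.contrib.metrics.streaming_mean_absolute_error(
-    predictions, labels)
-
-with tf.Session() as sess:
-  # The metric's `total` and `count` are local variables.
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)   # Accumulate one batch.
-  print(sess.run(mae))  # 1.0 == mean(|2-1|, |4-3|, |6-7|)
-```
-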
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_precision.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_precision.md
deleted file mode 100644
index 34a3eb0640..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_precision.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.contrib.metrics.streaming_precision(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_precision}
-
-Computes the precision of the predictions with respect to the labels.
-
-The `streaming_precision` function creates two local variables,
-`true_positives` and `false_positives`, that are used to compute the
-precision. This value is ultimately returned as `precision`, an idempotent
-operation that simply divides `true_positives` by the sum of `true_positives`
-and `false_positives`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision`. `update_op` weights each prediction by the corresponding value in
-`weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `bool` `Tensor` of arbitrary shape.
-* <b>`labels`</b>: The ground truth values, a `bool` `Tensor` whose dimensions must
- match `predictions`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `precision` should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`precision`</b>: Scalar float `Tensor` with the value of `true_positives`
- divided by the sum of `true_positives` and `false_positives`.
-* <b>`update_op`</b>: `Operation` that increments `true_positives` and
- `false_positives` variables appropriately and whose value matches
- `precision`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
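-The same two-op pattern applies here (a sketch with illustrative values):
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([True, True, False, True])
-labels = tf.constant([True, False, False, True])
-precision, update_op = tf.contrib.metrics.streaming_precision(
-    predictions, labels)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())
-  sess.run(update_op)         # true_positives=2, false_positives=1.
-  print(sess.run(precision))  # 2 / (2 + 1) ~= 0.667
-```
-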
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sensitivity_at_specificity.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sensitivity_at_specificity.md
deleted file mode 100644
index 979083617d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_sensitivity_at_specificity.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.contrib.metrics.streaming_sensitivity_at_specificity(predictions, labels, specificity, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sensitivity_at_specificity}
-
-Computes the sensitivity at a given specificity.
-
-The `streaming_sensitivity_at_specificity` function creates four local
-variables, `true_positives`, `true_negatives`, `false_positives` and
-`false_negatives` that are used to compute the sensitivity at the given
-specificity value. The threshold for the given specificity value is computed
-and used to evaluate the corresponding sensitivity.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`sensitivity`. `update_op` increments the `true_positives`, `true_negatives`,
-`false_positives` and `false_negatives` counts with the weight of each case
-found in the `predictions` and `labels`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-For additional information about specificity and sensitivity, see the
-following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`specificity`</b>: A scalar value in range `[0, 1]`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`num_thresholds`</b>: The number of thresholds to use for matching the given
- specificity.
-* <b>`metrics_collections`</b>: An optional list of collections that `sensitivity`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`sensitivity`</b>: A scalar `Tensor` representing the sensitivity at the given
- `specificity` value.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables
- appropriately and whose value matches `sensitivity`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- `specificity` is not between 0 and 1, or if either `metrics_collections`
- or `updates_collections` are not a list or tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_specificity_at_sensitivity.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_specificity_at_sensitivity.md
deleted file mode 100644
index ed12bd4657..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_specificity_at_sensitivity.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.contrib.metrics.streaming_specificity_at_sensitivity(predictions, labels, sensitivity, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None)` {#streaming_specificity_at_sensitivity}
-
-Computes the specificity at a given sensitivity.
-
-The `streaming_specificity_at_sensitivity` function creates four local
-variables, `true_positives`, `true_negatives`, `false_positives` and
-`false_negatives` that are used to compute the specificity at the given
-sensitivity value. The threshold for the given sensitivity value is computed
-and used to evaluate the corresponding specificity.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`specificity`. `update_op` increments the `true_positives`, `true_negatives`,
-`false_positives` and `false_negatives` counts with the weight of each case
-found in the `predictions` and `labels`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-For additional information about specificity and sensitivity, see the
-following: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
-
-##### Args:
-
-
-* <b>`predictions`</b>: A floating point `Tensor` of arbitrary shape and whose values
- are in the range `[0, 1]`.
-* <b>`labels`</b>: A `bool` `Tensor` whose shape matches `predictions`.
-* <b>`sensitivity`</b>: A scalar value in range `[0, 1]`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`num_thresholds`</b>: The number of thresholds to use for matching the given
- sensitivity.
-* <b>`metrics_collections`</b>: An optional list of collections that `specificity`
- should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`specificity`</b>: A scalar `Tensor` representing the specificity at the given
-    `sensitivity` value.
-* <b>`update_op`</b>: An operation that increments the `true_positives`,
- `true_negatives`, `false_positives` and `false_negatives` variables
- appropriately and whose value matches `specificity`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- `sensitivity` is not between 0 and 1, or if either `metrics_collections`
- or `updates_collections` are not a list or tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_true_positives_at_thresholds.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_true_positives_at_thresholds.md
deleted file mode 100644
index 685b0ba5e9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.metrics.streaming_true_positives_at_thresholds.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.metrics.streaming_true_positives_at_thresholds(predictions, labels, thresholds, weights=None)` {#streaming_true_positives_at_thresholds}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.opt.ExternalOptimizerInterface.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.opt.ExternalOptimizerInterface.md
deleted file mode 100644
index 7a9d543863..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.opt.ExternalOptimizerInterface.md
+++ /dev/null
@@ -1,56 +0,0 @@
-Base class for interfaces with external optimization algorithms.
-
-Subclass this and implement `_minimize` in order to wrap a new optimization
-algorithm.
-
-`ExternalOptimizerInterface` should not be instantiated directly; instead use
-e.g. `ScipyOptimizerInterface`.
-
-- - -
-
-#### `tf.contrib.opt.ExternalOptimizerInterface.__init__(loss, var_list=None, equalities=None, inequalities=None, **optimizer_kwargs)` {#ExternalOptimizerInterface.__init__}
-
-Initialize a new interface instance.
-
-##### Args:
-
-
-* <b>`loss`</b>: A scalar `Tensor` to be minimized.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`equalities`</b>: Optional list of equality constraint scalar `Tensor`s to be
- held equal to zero.
-* <b>`inequalities`</b>: Optional list of inequality constraint scalar `Tensor`s
- to be kept nonnegative.
-* <b>`**optimizer_kwargs`</b>: Other subclass-specific keyword arguments.
-
-
-
-- - -
-
-#### `tf.contrib.opt.ExternalOptimizerInterface.minimize(session=None, feed_dict=None, fetches=None, step_callback=None, loss_callback=None)` {#ExternalOptimizerInterface.minimize}
-
-Minimize a scalar `Tensor`.
-
-Variables subject to optimization are updated in-place at the end of
-optimization.
-
-Note that this method does *not* just return a minimization `Op`, unlike
-`Optimizer.minimize()`; instead it actually performs minimization by
-executing commands to control a `Session`.
-
-##### Args:
-
-
-* <b>`session`</b>: A `Session` instance.
-* <b>`feed_dict`</b>: A feed dict to be passed to calls to `session.run`.
-* <b>`fetches`</b>: A list of `Tensor`s to fetch and supply to `loss_callback`
- as positional arguments.
-* <b>`step_callback`</b>: A function to be called at each optimization step;
- arguments are the current values of all optimization variables
- flattened into a single vector.
-* <b>`loss_callback`</b>: A function to be called every time the loss and gradients
- are computed, with evaluated fetches supplied as positional arguments.
-
-
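-For example, with the `ScipyOptimizerInterface` subclass (a sketch; assumes
-SciPy is installed):
-
-```python
-import tensorflow as tf
-
-x = tf.Variable([7.0, -3.0])
-loss = tf.reduce_sum(tf.square(x))  # Minimized at x == [0, 0].
-optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss, method='L-BFGS-B')
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  optimizer.minimize(sess)  # Updates `x` in place.
-  print(sess.run(x))        # Close to [0.0, 0.0].
-```
-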
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.BasicRNNCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.BasicRNNCell.md
deleted file mode 100644
index 9f13497f47..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.BasicRNNCell.md
+++ /dev/null
@@ -1,51 +0,0 @@
-The most basic RNN cell.
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.__call__(inputs, state, scope=None)` {#BasicRNNCell.__call__}
-
-Most basic RNN: output = new_state = act(W * input + U * state + B).
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.__init__(num_units, input_size=None, activation=tanh)` {#BasicRNNCell.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.output_size` {#BasicRNNCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.state_size` {#BasicRNNCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.BasicRNNCell.zero_state(batch_size, dtype)` {#BasicRNNCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
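-A single-step usage sketch (sizes are illustrative):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.BasicRNNCell(num_units=16)
-inputs = tf.placeholder(tf.float32, [32, 8])  # [batch_size, input_size]
-state = cell.zero_state(batch_size=32, dtype=tf.float32)
-output, new_state = cell(inputs, state)       # Both shaped [32, 16].
-```
-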
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.FusedRNNCellAdaptor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.FusedRNNCellAdaptor.md
deleted file mode 100644
index 18ee35ad47..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.FusedRNNCellAdaptor.md
+++ /dev/null
@@ -1,21 +0,0 @@
-This is an adaptor for RNNCell classes to be used with `FusedRNNCell`.
-- - -
-
-#### `tf.contrib.rnn.FusedRNNCellAdaptor.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#FusedRNNCellAdaptor.__call__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.FusedRNNCellAdaptor.__init__(cell, use_dynamic_rnn=False)` {#FusedRNNCellAdaptor.__init__}
-
-Initialize the adaptor.
-
-##### Args:
-
-
-* <b>`cell`</b>: an instance of a subclass of a `rnn_cell.RNNCell`.
-* <b>`use_dynamic_rnn`</b>: whether to use dynamic (or static) RNN.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.LSTMBlockFusedCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.LSTMBlockFusedCell.md
deleted file mode 100644
index ccd8831ec6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.LSTMBlockFusedCell.md
+++ /dev/null
@@ -1,72 +0,0 @@
-FusedRNNCell implementation of LSTM.
-
-This is an extremely efficient LSTM implementation that uses a single TF op
-for the entire LSTM. It should be both faster and more memory-efficient than
-`LSTMBlockCell` defined above.
-
-The implementation is based on: http://arxiv.org/abs/1409.2329.
-
-We add forget_bias (default: 1) to the biases of the forget gate in order to
-reduce the scale of forgetting in the beginning of the training.
-
-The variable naming is consistent with `core_rnn_cell.LSTMCell`.
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockFusedCell.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#LSTMBlockFusedCell.__call__}
-
-Run this LSTM on inputs, starting from the given state.
-
-##### Args:
-
-
-* <b>`inputs`</b>: `3-D` tensor with shape `[time_len, batch_size, input_size]`
- or a list of `time_len` tensors of shape `[batch_size, input_size]`.
-* <b>`initial_state`</b>: a tuple `(initial_cell_state, initial_output)` with tensors
- of shape `[batch_size, self._num_units]`. If this is not provided, the
- cell is expected to create a zero initial state of type `dtype`.
-* <b>`dtype`</b>: The data type for the initial state and expected output. Required
- if `initial_state` is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs. An
-    `int32` or `int64` vector (tensor) of size `[batch_size]`, with values in
-    `[0, time_len)`. Defaults to `time_len` for each element.
-* <b>`scope`</b>: `VariableScope` for the created subgraph; defaults to class name.
-
-##### Returns:
-
- A pair containing:
-
- - Output: A `3-D` tensor of shape `[time_len, batch_size, output_size]`
- or a list of time_len tensors of shape `[batch_size, output_size]`,
- to match the type of the `inputs`.
- - Final state: a tuple `(cell_state, output)` matching `initial_state`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: in case of shape mismatches.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockFusedCell.__init__(num_units, forget_bias=1.0, cell_clip=None, use_peephole=False)` {#LSTMBlockFusedCell.__init__}
-
-Initialize the LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`forget_bias`</b>: float, The bias added to forget gates (see above).
-* <b>`cell_clip`</b>: An optional `float`; if provided, the cell state is clipped
-    to this value. Defaults to `None`, meaning no clipping.
-* <b>`use_peephole`</b>: Whether to use peephole connections or not.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMBlockFusedCell.num_units` {#LSTMBlockFusedCell.num_units}
-
-Number of units in this cell (output dimension).
-
-
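-A construction sketch (the time-major shape is illustrative):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.LSTMBlockFusedCell(num_units=32)
-# Time-major input: [time_len, batch_size, input_size].
-inputs = tf.placeholder(tf.float32, [20, 8, 10])
-outputs, final_state = cell(inputs, dtype=tf.float32)
-```
-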
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.LSTMCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.LSTMCell.md
deleted file mode 100644
index 0d380d1e2e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.LSTMCell.md
+++ /dev/null
@@ -1,124 +0,0 @@
-Long short-term memory unit (LSTM) recurrent network cell.
-
-The default non-peephole implementation is based on:
-
- http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
-
-S. Hochreiter and J. Schmidhuber.
-"Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.
-
-The peephole implementation is based on:
-
- https://research.google.com/pubs/archive/43905.pdf
-
-Hasim Sak, Andrew Senior, and Francoise Beaufays.
-"Long short-term memory recurrent neural network architectures for
- large scale acoustic modeling." INTERSPEECH, 2014.
-
-The class uses optional peephole connections, optional cell clipping, and
-an optional projection layer.
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.__call__(inputs, state, scope=None)` {#LSTMCell.__call__}
-
-Run one step of LSTM.
-
-##### Args:
-
-
-* <b>`inputs`</b>: input Tensor, 2D, `batch x input_size`.
-* <b>`state`</b>: if `state_is_tuple` is False, this must be a state Tensor,
- `2-D, batch x state_size`. If `state_is_tuple` is True, this must be a
- tuple of state Tensors, both `2-D`, with column sizes `c_state` and
- `m_state`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "lstm_cell".
-
-##### Returns:
-
- A tuple containing:
-
-  - A 2-D Tensor of shape `[batch x output_dim]` representing the output of
-    the LSTM after reading `inputs` when the previous state was `state`.
-    Here output_dim is num_proj if num_proj was set, num_units otherwise.
- - Tensor(s) representing the new state of LSTM after reading `inputs` when
- the previous state was `state`. Same type and shape(s) as `state`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input size cannot be inferred from inputs via
- static shape inference.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=None, num_proj_shards=None, forget_bias=1.0, state_is_tuple=True, activation=tanh)` {#LSTMCell.__init__}
-
-Initialize the parameters for an LSTM cell.
-
-##### Args:
-
-
-* <b>`num_units`</b>: int, The number of units in the LSTM cell.
-* <b>`input_size`</b>: Deprecated and unused.
-* <b>`use_peepholes`</b>: bool, set True to enable diagonal/peephole connections.
-* <b>`cell_clip`</b>: (optional) A float value, if provided the cell state is clipped
- by this value prior to the cell output activation.
-* <b>`initializer`</b>: (optional) The initializer to use for the weight and
- projection matrices.
-* <b>`num_proj`</b>: (optional) int, The output dimensionality for the projection
- matrices. If None, no projection is performed.
-* <b>`proj_clip`</b>: (optional) A float value. If `num_proj > 0` and `proj_clip` is
- provided, then the projected values are clipped elementwise to within
- `[-proj_clip, proj_clip]`.
-* <b>`num_unit_shards`</b>: Deprecated, will be removed by Jan. 2017.
- Use a variable_scope partitioner instead.
-* <b>`num_proj_shards`</b>: Deprecated, will be removed by Jan. 2017.
- Use a variable_scope partitioner instead.
-* <b>`forget_bias`</b>: Biases of the forget gate are initialized by default to 1
- in order to reduce the scale of forgetting at the beginning of
- the training.
-* <b>`state_is_tuple`</b>: If True, accepted and returned states are 2-tuples of
- the `c_state` and `m_state`. If False, they are concatenated
- along the column axis. This latter behavior will soon be deprecated.
-* <b>`activation`</b>: Activation function of the inner states.
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.output_size` {#LSTMCell.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.state_size` {#LSTMCell.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.LSTMCell.zero_state(batch_size, dtype)` {#LSTMCell.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
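-A construction sketch showing the projection layer (sizes are illustrative):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.LSTMCell(num_units=64, use_peepholes=True, num_proj=32)
-inputs = tf.placeholder(tf.float32, [16, 100])  # [batch_size, input_size]
-state = cell.zero_state(batch_size=16, dtype=tf.float32)
-output, new_state = cell(inputs, state)  # output is [16, 32]: num_proj wins.
-```
-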
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.ResidualWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.ResidualWrapper.md
deleted file mode 100644
index 7434d63eba..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.rnn.ResidualWrapper.md
+++ /dev/null
@@ -1,73 +0,0 @@
-RNNCell wrapper that ensures cell inputs are added to the outputs.
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.__call__(inputs, state, scope=None)` {#ResidualWrapper.__call__}
-
-Run the cell and add its inputs to its outputs.
-
-##### Args:
-
-
-* <b>`inputs`</b>: cell inputs.
-* <b>`state`</b>: cell state.
-* <b>`scope`</b>: optional cell scope.
-
-##### Returns:
-
- Tuple of cell outputs and new state.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If cell inputs and outputs have different structure (type).
-* <b>`ValueError`</b>: If cell inputs and outputs have different structure (value).
-
-
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.__init__(cell)` {#ResidualWrapper.__init__}
-
-Constructs a `ResidualWrapper` for `cell`.
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of `RNNCell`.
-
-
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.output_size` {#ResidualWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.state_size` {#ResidualWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.ResidualWrapper.zero_state(batch_size, dtype)` {#ResidualWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
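-Because outputs are added to inputs, the wrapped cell's output size must match
-its input size, as in this sketch (sizes are illustrative):
-
-```python
-import tensorflow as tf
-
-base = tf.contrib.rnn.BasicRNNCell(num_units=8)
-cell = tf.contrib.rnn.ResidualWrapper(base)
-inputs = tf.placeholder(tf.float32, [4, 8])  # input size == output size == 8
-state = cell.zero_state(batch_size=4, dtype=tf.float32)
-output, new_state = cell(inputs, state)      # output == inputs + base output
-```
-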
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.util.make_ndarray.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.util.make_ndarray.md
deleted file mode 100644
index 7b2a81d48e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.contrib.util.make_ndarray.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.util.make_ndarray(tensor)` {#make_ndarray}
-
-Create a numpy ndarray from a tensor.
-
-Create a numpy ndarray with the same shape and data as the tensor.
-
-##### Args:
-
-
-* <b>`tensor`</b>: A TensorProto.
-
-##### Returns:
-
- A numpy array with the tensor contents.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if tensor has unsupported type.
-
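-A round-trip sketch using `make_tensor_proto` to build the input:
-
-```python
-import tensorflow as tf
-
-proto = tf.contrib.util.make_tensor_proto([[1, 2], [3, 4]])  # a TensorProto
-array = tf.contrib.util.make_ndarray(proto)
-print(array.shape)  # (2, 2)
-```
-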
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.count_up_to.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.count_up_to.md
deleted file mode 100644
index 97f802372c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.count_up_to.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.count_up_to(ref, limit, name=None)` {#count_up_to}
-
-Increments 'ref' until it reaches 'limit'.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `int32`, `int64`.
- Should be from a scalar `Variable` node.
-* <b>`limit`</b>: An `int`.
- If incrementing ref would bring it above limit, instead generates an
- 'OutOfRange' error.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `ref`.
- A copy of the input before increment. If nothing else modifies the
- input, the values produced will all be distinct.
-
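-A minimal sketch of the counting behavior:
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(0, dtype=tf.int32)
-next_val = tf.count_up_to(v, limit=3)
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(next_val))  # 0
-  print(sess.run(next_val))  # 1
-  print(sess.run(next_val))  # 2; the next run raises OutOfRangeError.
-```
-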
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.dynamic_stitch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.dynamic_stitch.md
deleted file mode 100644
index 3eaba84d7c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.dynamic_stitch.md
+++ /dev/null
@@ -1,59 +0,0 @@
-### `tf.dynamic_stitch(indices, data, name=None)` {#dynamic_stitch}
-
-Interleave the values from the `data` tensors into a single tensor.
-
-Builds a merged tensor such that
-
-```python
- merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
-```
-
-For example, if each `indices[m]` is scalar or vector, we have
-
-```python
- # Scalar indices:
- merged[indices[m], ...] = data[m][...]
-
- # Vector indices:
- merged[indices[m][i], ...] = data[m][i, ...]
-```
-
-Each `data[i].shape` must start with the corresponding `indices[i].shape`,
-and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we
-must have `data[i].shape = indices[i].shape + constant`. In terms of this
-`constant`, the output shape is
-
-    merged.shape = [max(indices) + 1] + constant
-
-Values are merged in order, so if an index appears in both `indices[m][i]` and
-`indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the
-merged result.
-
-For example:
-
-```python
- indices[0] = 6
- indices[1] = [4, 1]
- indices[2] = [[5, 2], [0, 3]]
- data[0] = [61, 62]
- data[1] = [[41, 42], [11, 12]]
- data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
- merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
- [51, 52], [61, 62]]
-```
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/DynamicStitch.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`indices`</b>: A list of at least 1 `Tensor` object of type `int32`.
-* <b>`data`</b>: A list of `Tensor` objects of the same type, with the same length as `indices`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.error_code_from_exception_type.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.error_code_from_exception_type.md
deleted file mode 100644
index fce6574c6b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.error_code_from_exception_type.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.errors.error_code_from_exception_type(cls)` {#error_code_from_exception_type}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.exception_type_from_error_code.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.exception_type_from_error_code.md
deleted file mode 100644
index c635c56a01..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.errors.exception_type_from_error_code.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.errors.exception_type_from_error_code(error_code)` {#exception_type_from_error_code}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.fft3d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.fft3d.md
deleted file mode 100644
index a1cf358fe2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.fft3d.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.fft3d(input, name=None)` {#fft3d}
-
-Compute the 3-dimensional discrete Fourier Transform over the inner-most 3
-dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 3
- dimensions of `input` are replaced with their 3D Fourier Transform.
-
- @compatibility(numpy)
-  Equivalent to np.fft.fftn on the inner-most 3 dimensions
- @end_compatibility
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_collection.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_collection.md
deleted file mode 100644
index fc0044b490..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_collection.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.get_collection(key, scope=None)` {#get_collection}
-
-Wrapper for `Graph.get_collection()` using the default graph.
-
-See [`Graph.get_collection()`](../../api_docs/python/framework.md#Graph.get_collection)
-for more details.
-
-##### Args:
-
-
-* <b>`key`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-* <b>`scope`</b>: (Optional.) If supplied, the resulting list is filtered to include
- only items whose `name` attribute matches using `re.match`. Items
- without a `name` attribute are never returned if a scope is supplied and
- the choice or `re.match` means that a `scope` without special tokens
-    the choice of `re.match` means that a `scope` without special tokens
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or
- an empty list if no value has been added to that collection. The
- list contains the values in the order under which they were
- collected.
-
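-A small sketch (the variable name is illustrative):
-
-```python
-import tensorflow as tf
-
-w = tf.Variable(tf.zeros([2]), name="w")  # Variables join GLOBAL_VARIABLES.
-global_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
-assert w in global_vars
-```
-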
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_collection_ref.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_collection_ref.md
deleted file mode 100644
index c393da2233..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_collection_ref.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.get_collection_ref(key)` {#get_collection_ref}
-
-Wrapper for `Graph.get_collection_ref()` using the default graph.
-
-See [`Graph.get_collection_ref()`](../../api_docs/python/framework.md#Graph.get_collection_ref)
-for more details.
-
-##### Args:
-
-
-* <b>`key`</b>: The key for the collection. For example, the `GraphKeys` class
- contains many standard names for collections.
-
-##### Returns:
-
- The list of values in the collection with the given `name`, or an empty
- list if no value has been added to that collection. Note that this returns
- the collection list itself, which can be modified in place to change the
- collection.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_session_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_session_tensor.md
deleted file mode 100644
index 42623f6706..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.get_session_tensor.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.get_session_tensor(handle, dtype, name=None)` {#get_session_tensor}
-
-Get the tensor of type `dtype` by feeding a tensor handle.
-
-This is EXPERIMENTAL and subject to change.
-
-Get the value of the tensor from a tensor handle. The tensor
-is produced in a previous run() and stored in the state of the
-session.
-
-##### Args:
-
-
-* <b>`handle`</b>: The string representation of a persistent tensor handle.
-* <b>`dtype`</b>: The type of the output tensor.
-* <b>`name`</b>: Optional name prefix for the return tensor.
-
-##### Returns:
-
- A pair of tensors. The first is a placeholder for feeding a
- tensor handle and the second is the tensor in the session state
- keyed by the tensor handle.
-
-
-##### Example:
-
-```python
-c = tf.multiply(a, b)
-h = tf.get_session_handle(c)
-h = sess.run(h)
-
-p, a = tf.get_session_tensor(h.handle, tf.float32)
-b = tf.multiply(a, 10)
-c = sess.run(b, feed_dict={p: h.handle})
-```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ifft.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ifft.md
deleted file mode 100644
index 4e8b5c691d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ifft.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.ifft(input, name=None)` {#ifft}
-
-Compute the inverse 1-dimensional discrete Fourier Transform over the
-inner-most dimension of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most
- dimension of `input` is replaced with its inverse 1D Fourier Transform.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.non_max_suppression.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.non_max_suppression.md
deleted file mode 100644
index d6b354a3d2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.non_max_suppression.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.image.non_max_suppression(boxes, scores, max_output_size, iou_threshold=None, name=None)` {#non_max_suppression}
-
-Greedily selects a subset of bounding boxes in descending order of score,
-pruning away boxes that have high intersection-over-union (IOU) overlap
-with previously selected boxes. Bounding boxes are supplied as
-[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any
-diagonal pair of box corners and the coordinates can be provided as normalized
-(i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm
-is agnostic to where the origin is in the coordinate system, and is invariant
-to orthogonal transformations and translations of it; thus translations or
-reflections of the coordinate system result in the same boxes being selected
-by the algorithm.
-
-The output of this operation is a set of integers indexing into the input
-collection of bounding boxes representing the selected boxes. The bounding
-box coordinates corresponding to the selected indices can then be obtained
-using the `tf.gather` operation. For example:
-
- selected_indices = tf.image.non_max_suppression(
- boxes, scores, max_output_size, iou_threshold)
- selected_boxes = tf.gather(boxes, selected_indices)
-
-##### Args:
-
-
-* <b>`boxes`</b>: A `Tensor` of type `float32`.
- A 2-D float tensor of shape `[num_boxes, 4]`.
-* <b>`scores`</b>: A `Tensor` of type `float32`.
- A 1-D float tensor of shape `[num_boxes]` representing a single
- score corresponding to each box (each row of boxes).
-* <b>`max_output_size`</b>: A `Tensor` of type `int32`.
- A scalar integer tensor representing the maximum number of
- boxes to be selected by non max suppression.
-* <b>`iou_threshold`</b>: An optional `float`. Defaults to `0.5`.
- A float representing the threshold for deciding whether boxes
- overlap too much with respect to IOU.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int32`.
- A 1-D integer tensor of shape `[M]` representing the selected
- indices from the boxes tensor, where `M <= max_output_size`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.random_flip_left_right.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.random_flip_left_right.md
deleted file mode 100644
index d063895136..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.random_flip_left_right.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.image.random_flip_left_right(image, seed=None)` {#random_flip_left_right}
-
-Randomly flip an image horizontally (left to right).
-
-With a 1 in 2 chance, outputs the contents of `image` flipped along the
-second dimension, which is `width`. Otherwise output the image as-is.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-
-##### Returns:
-
- A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.resize_area.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.resize_area.md
deleted file mode 100644
index dbc6fd1bcd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.image.resize_area.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.image.resize_area(images, size, align_corners=None, name=None)` {#resize_area}
-
-Resize `images` to `size` using area interpolation.
-
-Input images can be of different types but output images are always float.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
- If true, rescale input by (new_height - 1) / (height - 1), which
-    exactly aligns the 4 corners of images and resized images. If false, rescale
-    by new_height / height. The width dimension is treated similarly.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`. 4-D with shape
- `[batch, new_height, new_width, channels]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.less.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.less.md
deleted file mode 100644
index 3a00afa8db..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.less.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.less(x, y, name=None)` {#less}
-
-Returns the truth value of (x < y) element-wise.
-
-*NOTE*: `Less` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.lgamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.lgamma.md
deleted file mode 100644
index a4add48fb4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.lgamma.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.lgamma(x, name=None)` {#lgamma}
-
-Computes the log of the absolute value of `Gamma(x)` element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.logical_or.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.logical_or.md
deleted file mode 100644
index e04b6a15d2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.logical_or.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.logical_or(x, y, name=None)` {#logical_or}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_band_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_band_part.md
deleted file mode 100644
index 87bd745c2a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_band_part.md
+++ /dev/null
@@ -1,61 +0,0 @@
-### `tf.matrix_band_part(input, num_lower, num_upper, name=None)` {#matrix_band_part}
-
-Copies a tensor, setting everything outside a central band in each innermost matrix to zero.
-
-The `band` part is computed as follows:
-Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a
-tensor with the same shape where
-
-`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.
-
-The indicator function
-
-`in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) &&
-                 (num_upper < 0 || (n-m) <= num_upper)`.
-
-For example:
-
-```prettyprint
-# if 'input' is [[ 0, 1, 2, 3]
- [-1, 0, 1, 2]
- [-2, -1, 0, 1]
- [-3, -2, -1, 0]],
-
-tf.matrix_band_part(input, 1, -1) ==> [[ 0, 1, 2, 3]
- [-1, 0, 1, 2]
- [ 0, -1, 0, 1]
- [ 0, 0, -1, 0]],
-
-tf.matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]
- [-1, 0, 1, 0]
- [-2, -1, 0, 1]
- [ 0, -2, -1, 0]]
-```
-
-Useful special cases:
-
-```prettyprint
- tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
- tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
- tf.matrix_band_part(input, 0, 0) ==> Diagonal.
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Rank `k` tensor.
-* <b>`num_lower`</b>: A `Tensor` of type `int64`.
- 0-D tensor. Number of subdiagonals to keep. If negative, keep entire
- lower triangle.
-* <b>`num_upper`</b>: A `Tensor` of type `int64`.
- 0-D tensor. Number of superdiagonals to keep. If negative, keep
- entire upper triangle.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- Rank `k` tensor of the same shape as input. The extracted banded tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_diag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_diag.md
deleted file mode 100644
index 16ba620c83..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_diag.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.matrix_diag(diagonal, name=None)` {#matrix_diag}
-
-Returns a batched diagonal tensor with given batched diagonal values.
-
-Given a `diagonal`, this operation returns a tensor with the `diagonal` and
-everything else padded with zeros. The diagonal is computed as follows:
-
-Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a
-tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:
-
-`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.
-
-For example:
-
-```prettyprint
-# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
-
-and diagonal.shape = (2, 4)
-
-tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]],
- [[5, 0, 0, 0]
- [0, 6, 0, 0]
- [0, 0, 7, 0]
- [0, 0, 0, 8]]]
-
-which has shape (2, 4, 4)
-```
-
-##### Args:
-
-
-* <b>`diagonal`</b>: A `Tensor`. Rank `k`, where `k >= 1`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `diagonal`.
- Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_diag_part.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_diag_part.md
deleted file mode 100644
index efaf772f6b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_diag_part.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.matrix_diag_part(input, name=None)` {#matrix_diag_part}
-
-Returns the batched diagonal part of a batched tensor.
-
-This operation returns a tensor with the `diagonal` part
-of the batched `input`. The `diagonal` part is computed as follows:
-
-Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a
-tensor of rank `k - 1` with dimensions `[I, J, K, ..., min(M, N)]` where:
-
-`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`.
-
-The input must be at least a matrix.
-
-For example:
-
-```prettyprint
-# 'input' is [[[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]],
- [[5, 0, 0, 0]
- [0, 6, 0, 0]
- [0, 0, 7, 0]
- [0, 0, 0, 8]]]
-
-and input.shape = (2, 4, 4)
-
-tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]
-
-which has shape (2, 4)
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Rank `k` tensor where `k >= 2`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- The extracted diagonal(s) having shape
- `diagonal.shape = input.shape[:-2] + [min(input.shape[-2:])]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve.md
deleted file mode 100644
index 88d037f2fa..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.matrix_solve(matrix, rhs, adjoint=None, name=None)` {#matrix_solve}
-
-Solves systems of linear equations.
-
-`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
-form square matrices. `rhs` is a tensor of shape `[..., M, K]`. The `output` is
-a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output matrix
-satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.
-If `adjoint` is `True` then each output matrix satisfies
-`adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.
-
-##### Args:
-
-
-* <b>`matrix`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`.
- Shape is `[..., M, M]`.
-* <b>`rhs`</b>: A `Tensor`. Must have the same type as `matrix`.
- Shape is `[..., M, K]`.
-* <b>`adjoint`</b>: An optional `bool`. Defaults to `False`.
- Boolean indicating whether to solve with `matrix` or its (block-wise)
- adjoint.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.
-
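-A minimal sketch for a single 2x2 system (batching works by stacking extra
-leading dimensions; the values are hypothetical):
-
-```python
-import tensorflow as tf
-
-# Solve A x = b where A = [[3, 1], [1, 2]] and b = [9, 8]^T.
-matrix = tf.constant([[3.0, 1.0],
-                      [1.0, 2.0]])
-rhs = tf.constant([[9.0],
-                   [8.0]])
-x = tf.matrix_solve(matrix, rhs)
-
-with tf.Session() as sess:
-    print(sess.run(x))  # [[2.], [3.]] since 3*2 + 1*3 = 9 and 1*2 + 2*3 = 8
-```
-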
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve_ls.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve_ls.md
deleted file mode 100644
index 7c163ae7f0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_solve_ls.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)` {#matrix_solve_ls}
-
-Solves one or more linear least-squares problems.
-
-`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions
-form `M`-by-`N` matrices. Rhs is a tensor of shape `[..., M, K]` whose
-inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a
-`Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `N`-by-`K`
-matrices that solve the equations
-`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares
-sense.
-
-Below we will use the following notation for each pair of matrix and
-right-hand sides in the batch:
-
-`matrix`=\\(A \in \Re^{m \times n}\\),
-`rhs`=\\(B \in \Re^{m \times k}\\),
-`output`=\\(X \in \Re^{n \times k}\\),
-`l2_regularizer`=\\(\lambda\\).
-
-If `fast` is `True`, then the solution is computed by solving the normal
-equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then
-\\(X = (A^T A + \lambda I)^{-1} A^T B\\), which solves the least-squares
-problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 +
-\lambda ||Z||_F^2\\). If \\(m \lt n\\) then `output` is computed as
-\\(X = A^T (A A^T + \lambda I)^{-1} B\\), which (for \\(\lambda = 0\\)) is
-the minimum-norm solution to the under-determined linear system, i.e.
-\\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2 \\), subject to
-\\(A Z = B\\). Notice that the fast path is only numerically stable when
-\\(A\\) is numerically full rank and has a condition number
-\\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\\) or \\(\lambda\\)
-is sufficiently large.
-
-If `fast` is `False` an algorithm based on the numerically robust complete
-orthogonal decomposition is used. This computes the minimum-norm
-least-squares solution, even when \\(A\\) is rank deficient. This path is
-typically 6-7 times slower than the fast path. If `fast` is `False` then
-`l2_regularizer` is ignored.
-
-##### Args:
-
-
-* <b>`matrix`</b>: `Tensor` of shape `[..., M, N]`.
-* <b>`rhs`</b>: `Tensor` of shape `[..., M, K]`.
-* <b>`l2_regularizer`</b>: 0-D `double` `Tensor`. Ignored if `fast=False`.
-* <b>`fast`</b>: bool. Defaults to `True`.
-* <b>`name`</b>: string, optional name of the operation.
-
-##### Returns:
-
-
-* <b>`output`</b>: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form
-    `N`-by-`K` matrices that solve the equations
- `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least
- squares sense.
-
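-As a concrete sketch, an ordinary least-squares line fit (the data are
-hypothetical and chosen so the fit is exact):
-
-```python
-import tensorflow as tf
-
-# Fit y = a*t + b: the columns of `matrix` are [t, 1].
-matrix = tf.constant([[0.0, 1.0],
-                      [1.0, 1.0],
-                      [2.0, 1.0],
-                      [3.0, 1.0]])
-rhs = tf.constant([[1.0], [3.0], [5.0], [7.0]])  # exactly y = 2t + 1
-solution = tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0)
-
-with tf.Session() as sess:
-    print(sess.run(solution))  # approximately [[2.], [1.]]
-```
-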
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_transpose.md
deleted file mode 100644
index 7bfbc549a2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.matrix_transpose.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.matrix_transpose(a, name='matrix_transpose')` {#matrix_transpose}
-
-Transposes last two dimensions of tensor `a`.
-
-For example:
-
-```python
-# Matrix with no batch dimension.
-# 'x' is [[1 2 3]
-# [4 5 6]]
-tf.matrix_transpose(x) ==> [[1 4]
- [2 5]
- [3 6]]
-
-# Matrix with two batch dimensions.
-# x.shape is [1, 2, 3, 4]
-# tf.matrix_transpose(x) is shape [1, 2, 4, 3]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor` with `rank >= 2`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A transposed batch matrix `Tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `a` is determined statically to have `rank < 2`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.atrous_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.atrous_conv2d.md
deleted file mode 100644
index b98661fad5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.atrous_conv2d.md
+++ /dev/null
@@ -1,115 +0,0 @@
-### `tf.nn.atrous_conv2d(value, filters, rate, padding, name=None)` {#atrous_conv2d}
-
-Atrous convolution (a.k.a. convolution with holes or dilated convolution).
-
-Computes a 2-D atrous convolution, also known as convolution with holes or
-dilated convolution, given 4-D `value` and `filters` tensors. If the `rate`
-parameter is equal to one, it performs regular 2-D convolution. If the `rate`
-parameter is greater than one, it performs convolution with holes, sampling
-the input values every `rate` pixels in the `height` and `width` dimensions.
-This is equivalent to convolving the input with a set of upsampled filters,
-produced by inserting `rate - 1` zeros between two consecutive values of the
-filters along the `height` and `width` dimensions, hence the name atrous
-convolution or convolution with holes (the French word trous means holes in
-English).
-
-More specifically:
-
- output[b, i, j, k] = sum_{di, dj, q} filters[di, dj, q, k] *
- value[b, i + rate * di, j + rate * dj, q]
-
-Atrous convolution allows us to explicitly control how densely to compute
-feature responses in fully convolutional networks. Used in conjunction with
-bilinear interpolation, it offers an alternative to `conv2d_transpose` in
-dense prediction tasks such as semantic image segmentation, optical flow
-computation, or depth estimation. It also allows us to effectively enlarge
-the field of view of filters without increasing the number of parameters or
-the amount of computation.
-
-For a description of atrous convolution and how it can be used for dense
-feature extraction, please see: [Semantic Image Segmentation with Deep
-Convolutional Nets and Fully Connected CRFs](http://arxiv.org/abs/1412.7062).
-The same operation is investigated further in [Multi-Scale Context Aggregation
-by Dilated Convolutions](http://arxiv.org/abs/1511.07122). Previous works
-that effectively use atrous convolution in different ways are, among others,
-[OverFeat: Integrated Recognition, Localization and Detection using
-Convolutional Networks](http://arxiv.org/abs/1312.6229) and [Fast Image
-Scanning with Deep Max-Pooling Convolutional Neural Networks](http://arxiv.org/abs/1302.1700).
-Atrous convolution is also closely related to the so-called noble identities
-in multi-rate signal processing.
-
-There are many different ways to implement atrous convolution (see the refs
-above). The implementation here reduces
-
-```python
- atrous_conv2d(value, filters, rate, padding=padding)
-```
-
-to the following three operations:
-
-```python
- paddings = ...
- net = space_to_batch(value, paddings, block_size=rate)
- net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID")
- crops = ...
- net = batch_to_space(net, crops, block_size=rate)
-```
-
-Advanced usage. Note the following optimization: A sequence of `atrous_conv2d`
-operations with identical `rate` parameters, 'SAME' `padding`, and filters
-with odd heights/widths:
-
-```python
- net = atrous_conv2d(net, filters1, rate, padding="SAME")
- net = atrous_conv2d(net, filters2, rate, padding="SAME")
- ...
- net = atrous_conv2d(net, filtersK, rate, padding="SAME")
-```
-
-can be performed equivalently, and more cheaply in computation and memory, as:
-
-```python
- pad = ... # padding so that the input dims are multiples of rate
- net = space_to_batch(net, paddings=pad, block_size=rate)
- net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME")
- net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME")
- ...
- net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME")
- net = batch_to_space(net, crops=pad, block_size=rate)
-```
-
-because a pair of consecutive `space_to_batch` and `batch_to_space` ops with
-the same `block_size` cancel out when their respective `paddings` and `crops`
-inputs are identical.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC"
- format. Its shape is `[batch, in_height, in_width, in_channels]`.
-* <b>`filters`</b>: A 4-D `Tensor` with the same type as `value` and shape
- `[filter_height, filter_width, in_channels, out_channels]`. `filters`'
- `in_channels` dimension must match that of `value`. Atrous convolution is
- equivalent to standard convolution with upsampled filters with effective
- height `filter_height + (filter_height - 1) * (rate - 1)` and effective
- width `filter_width + (filter_width - 1) * (rate - 1)`, produced by
- inserting `rate - 1` zeros along consecutive elements across the
- `filters`' spatial dimensions.
-* <b>`rate`</b>: A positive int32. The stride with which we sample input values across
- the `height` and `width` dimensions. Equivalently, the rate by which we
- upsample the filter values by inserting zeros across the `height` and
- `width` dimensions. In the literature, the same parameter is sometimes
- called `input stride` or `dilation`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filters`' shape, or if
- padding is other than `'VALID'` or `'SAME'`.
-
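-A minimal sketch (shapes and values are hypothetical); with `rate=2` a 3x3
-filter has an effective 5x5 footprint, while `'SAME'` padding preserves the
-spatial dimensions:
-
-```python
-import tensorflow as tf
-
-value = tf.ones([1, 5, 5, 1])    # [batch, height, width, in_channels]
-filters = tf.ones([3, 3, 1, 1])  # [fh, fw, in_channels, out_channels]
-out = tf.nn.atrous_conv2d(value, filters, rate=2, padding='SAME')
-
-with tf.Session() as sess:
-    print(sess.run(out).shape)  # (1, 5, 5, 1)
-```
-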
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.avg_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.avg_pool.md
deleted file mode 100644
index c6ef397b19..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.avg_pool.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.nn.avg_pool(value, ksize, strides, padding, data_format='NHWC', name=None)` {#avg_pool}
-
-Performs average pooling on the input.
-
-Each entry in `output` is the mean of the corresponding size `ksize`
-window in `value`.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type
- `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
-* <b>`ksize`</b>: A list of ints that has length >= 4.
- The size of the window for each dimension of the input tensor.
-* <b>`strides`</b>: A list of ints that has length >= 4.
- The stride of the sliding window for each dimension of the
- input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`. The average pooled output tensor.
-
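-A minimal sketch with a hypothetical 4x4 single-channel input; each output
-entry is the mean of a non-overlapping 2x2 window:
-
-```python
-import tensorflow as tf
-
-value = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
-pooled = tf.nn.avg_pool(value,
-                        ksize=[1, 2, 2, 1],
-                        strides=[1, 2, 2, 1],
-                        padding='VALID')
-
-with tf.Session() as sess:
-    print(sess.run(pooled)[0, :, :, 0])  # [[2.5, 4.5], [10.5, 12.5]]
-```
-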
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.conv3d_transpose.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.conv3d_transpose.md
deleted file mode 100644
index 575b52def5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.conv3d_transpose.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.nn.conv3d_transpose(value, filter, output_shape, strides, padding='SAME', name=None)` {#conv3d_transpose}
-
-The transpose of `conv3d`.
-
-This operation is sometimes called "deconvolution" after [Deconvolutional
-Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is
-actually the transpose (gradient) of `conv3d` rather than an actual
-deconvolution.
-
-##### Args:
-
-
-* <b>`value`</b>: A 5-D `Tensor` of type `float` and shape
- `[batch, depth, height, width, in_channels]`.
-* <b>`filter`</b>: A 5-D `Tensor` with the same type as `value` and shape
- `[depth, height, width, output_channels, in_channels]`. `filter`'s
- `in_channels` dimension must match that of `value`.
-* <b>`output_shape`</b>: A 1-D `Tensor` representing the output shape of the
- deconvolution op.
-* <b>`strides`</b>: A list of ints. The stride of the sliding window for each
- dimension of the input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filter`'s shape, or if
- padding is other than `'VALID'` or `'SAME'`.
-
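-A minimal shape-only sketch (all shapes are hypothetical), upsampling each
-spatial dimension by a factor of 2:
-
-```python
-import tensorflow as tf
-
-value = tf.ones([1, 4, 4, 4, 8])  # [batch, depth, height, width, in_channels]
-filt = tf.ones([3, 3, 3, 16, 8])  # [d, h, w, output_channels, in_channels]
-out = tf.nn.conv3d_transpose(value, filt,
-                             output_shape=[1, 8, 8, 8, 16],
-                             strides=[1, 2, 2, 2, 1],
-                             padding='SAME')
-
-with tf.Session() as sess:
-    print(sess.run(out).shape)  # (1, 8, 8, 8, 16)
-```
-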
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.fixed_unigram_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.fixed_unigram_candidate_sampler.md
deleted file mode 100644
index ad9b059e42..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.fixed_unigram_candidate_sampler.md
+++ /dev/null
@@ -1,75 +0,0 @@
-### `tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=1.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=(), seed=None, name=None)` {#fixed_unigram_candidate_sampler}
-
-Samples a set of classes using the provided (fixed) base distribution.
-
-This operation randomly samples a tensor of sampled classes
-(`sampled_candidates`) from the range of integers `[0, range_max)`.
-
-The elements of `sampled_candidates` are drawn without replacement
-(if `unique=True`) or with replacement (if `unique=False`) from
-the base distribution.
-
-The base distribution is read from a file or passed in as an
-in-memory array. There is also an option to skew the distribution by
-applying a distortion power to the weights.
-
-In addition, this operation returns tensors `true_expected_count`
-and `sampled_expected_count` representing the number of times each
-of the target classes (`true_classes`) and the sampled
-classes (`sampled_candidates`) is expected to occur in an average
-tensor of sampled classes. These values correspond to `Q(y|x)`
-defined in [this
-document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-If `unique=True`, then these are post-rejection probabilities and we
-compute them approximately.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`unique`</b>: A `bool`. Determines whether all sampled classes in a batch are
- unique.
-* <b>`range_max`</b>: An `int`. The number of possible classes.
-* <b>`vocab_file`</b>: Each valid line in this file (which should have a CSV-like
- format) corresponds to a valid word ID. IDs are in sequential order,
- starting from num_reserved_ids. The last entry in each line is expected
- to be a value corresponding to the count or relative probability. Exactly
- one of `vocab_file` and `unigrams` needs to be passed to this operation.
-* <b>`distortion`</b>: The distortion is used to skew the unigram probability
- distribution. Each weight is first raised to the distortion's power
- before adding to the internal unigram distribution. As a result,
- `distortion = 1.0` gives regular unigram sampling (as defined by the vocab
- file), and `distortion = 0.0` gives a uniform distribution.
-* <b>`num_reserved_ids`</b>: Optionally some reserved IDs can be added in the range
-    `[0, num_reserved_ids)` by the user. One use case is that a special
- unknown word token is used as ID 0. These IDs will have a sampling
- probability of 0.
-* <b>`num_shards`</b>: A sampler can be used to sample from a subset of the original
- range in order to speed up the whole computation through parallelism. This
- parameter (together with `shard`) indicates the number of partitions that
- are being used in the overall computation.
-* <b>`shard`</b>: A sampler can be used to sample from a subset of the original range
- in order to speed up the whole computation through parallelism. This
- parameter (together with `num_shards`) indicates the particular partition
- number of the operation, when partitioning is being used.
-* <b>`unigrams`</b>: A list of unigram counts or probabilities, one per ID in
- sequential order. Exactly one of `vocab_file` and `unigrams` should be
- passed to this operation.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled classes.
-* <b>`true_expected_count`</b>: A tensor of type `float`. Same shape as
- `true_classes`. The expected counts under the sampling distribution
- of each of `true_classes`.
-* <b>`sampled_expected_count`</b>: A tensor of type `float`. Same shape as
- `sampled_candidates`. The expected counts under the sampling distribution
- of each of `sampled_candidates`.
-
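-A minimal sketch using an in-memory `unigrams` list for a hypothetical
-5-word vocabulary (exactly one of `unigrams` and `vocab_file` may be given):
-
-```python
-import tensorflow as tf
-
-true_classes = tf.constant([[0], [3]], dtype=tf.int64)
-sampled, true_expected, sampled_expected = (
-    tf.nn.fixed_unigram_candidate_sampler(
-        true_classes=true_classes, num_true=1, num_sampled=2, unique=True,
-        range_max=5, unigrams=[10.0, 4.0, 3.0, 2.0, 1.0]))
-
-with tf.Session() as sess:
-    print(sess.run([sampled, true_expected, sampled_expected]))
-```
-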
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.fractional_avg_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.fractional_avg_pool.md
deleted file mode 100644
index 367205ffd6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.fractional_avg_pool.md
+++ /dev/null
@@ -1,57 +0,0 @@
-### `tf.nn.fractional_avg_pool(value, pooling_ratio, pseudo_random=None, overlapping=None, deterministic=None, seed=None, seed2=None, name=None)` {#fractional_avg_pool}
-
-Performs fractional average pooling on the input.
-
-Fractional average pooling is similar to fractional max pooling in the pooling
-region generation step. The only difference is that after pooling regions are
-generated, a mean operation is performed instead of a max operation in each
-pooling region.
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`pooling_ratio`</b>: A list of `floats` that has length `>= 4`.
- Pooling ratio for each dimension of `value`, currently only
- supports row and col dimension and should be >= 1.0. For example, a valid
- pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements
- must be 1.0 because we don't allow pooling on batch and channels
- dimensions. 1.44 and 1.73 are pooling ratio on height and width dimensions
- respectively.
-* <b>`pseudo_random`</b>: An optional `bool`. Defaults to `False`.
- When set to True, generates the pooling sequence in a
- pseudorandom fashion, otherwise, in a random fashion. Check paper [Benjamin
- Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for
- difference between pseudorandom and random.
-* <b>`overlapping`</b>: An optional `bool`. Defaults to `False`.
- When set to True, it means when pooling, the values at the boundary
- of adjacent pooling cells are used by both cells. For example:
-
- `index 0 1 2 3 4`
-
- `value 20 5 16 3 7`
-
-  If the pooling sequence is [0, 2, 4], then 16, at index 2, will be used twice.
- The result would be [41/3, 26/3] for fractional avg pooling.
-
-* <b>`deterministic`</b>: An optional `bool`. Defaults to `False`.
- When set to True, a fixed pooling region will be used when
- iterating over a FractionalAvgPool node in the computation graph. Mainly used
- in unit test to make FractionalAvgPool deterministic.
-* <b>`seed`</b>: An optional `int`. Defaults to `0`.
- If either seed or seed2 are set to be non-zero, the random number
- generator is seeded by the given seed. Otherwise, it is seeded by a
- random seed.
-* <b>`seed2`</b>: An optional `int`. Defaults to `0`.
-    A second seed to avoid seed collision.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `value`. output tensor after fractional avg pooling.
-* <b>`row_pooling_sequence`</b>: A `Tensor` of type `int64`. row pooling sequence, needed to calculate gradient.
-* <b>`col_pooling_sequence`</b>: A `Tensor` of type `int64`. column pooling sequence, needed to calculate gradient.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.in_top_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.in_top_k.md
deleted file mode 100644
index f46780649d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.in_top_k.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.nn.in_top_k(predictions, targets, k, name=None)` {#in_top_k}
-
-Says whether the targets are in the top `K` predictions.
-
-This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the
-prediction for the target class is among the top `k` predictions among
-all predictions for example `i`. Note that the behavior of `InTopK` differs
-from the `TopK` op in its handling of ties; if multiple classes have the
-same prediction value and straddle the top-`k` boundary, all of those
-classes are considered to be in the top `k`.
-
-More formally, let
-
- \\(predictions_i\\) be the predictions for all classes for example `i`,
- \\(targets_i\\) be the target class for example `i`,
- \\(out_i\\) be the output for example `i`,
-
-$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of type `float32`.
- A `batch_size` x `classes` tensor.
-* <b>`targets`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A `batch_size` vector of class ids.
-* <b>`k`</b>: An `int`. Number of top elements to look at for computing precision.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`.
-
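-A small sketch (hypothetical values) that also illustrates the tie handling
-described above:
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([[0.1, 0.8, 0.1],
-                           [0.3, 0.3, 0.4]])
-targets = tf.constant([1, 0])
-result = tf.nn.in_top_k(predictions, targets, k=2)
-
-with tf.Session() as sess:
-    # [True, True]: in row 1 the two 0.3 entries tie across the top-2
-    # boundary, so both classes count as being in the top 2.
-    print(sess.run(result))
-```
-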
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.local_response_normalization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.local_response_normalization.md
deleted file mode 100644
index 81134df29f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.local_response_normalization.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None)` {#local_response_normalization}
-
-Local Response Normalization.
-
-The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last
-dimension), and each vector is normalized independently. Within a given vector,
-each component is divided by the weighted, squared sum of inputs within
-`depth_radius`. In detail,
-
- sqr_sum[a, b, c, d] =
- sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
- output = input / (bias + alpha * sqr_sum) ** beta
-
-For details, see [Krizhevsky et al., ImageNet classification with deep
-convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `half`.
- 4-D.
-* <b>`depth_radius`</b>: An optional `int`. Defaults to `5`.
- 0-D. Half-width of the 1-D normalization window.
-* <b>`bias`</b>: An optional `float`. Defaults to `1`.
- An offset (usually positive to avoid dividing by 0).
-* <b>`alpha`</b>: An optional `float`. Defaults to `1`.
- A scale factor, usually positive.
-* <b>`beta`</b>: An optional `float`. Defaults to `0.5`. An exponent.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.quantized_avg_pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.quantized_avg_pool.md
deleted file mode 100644
index 4bc6a1dc6d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.quantized_avg_pool.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.nn.quantized_avg_pool(input, min_input, max_input, ksize, strides, padding, name=None)` {#quantized_avg_pool}
-
-Produces the average pool of the input tensor for quantized types.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`min_input`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized input value represents.
-* <b>`max_input`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized input value represents.
-* <b>`ksize`</b>: A list of `ints`.
- The size of the window for each dimension of the input tensor.
- The length must be 4 to match the number of dimensions of the input.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- tensor. The length must be 4 to match the number of dimensions of the input.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, min_output, max_output).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `input`.
-* <b>`min_output`</b>: A `Tensor` of type `float32`. The float value that the lowest quantized output value represents.
-* <b>`max_output`</b>: A `Tensor` of type `float32`. The float value that the highest quantized output value represents.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softmax_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softmax_cross_entropy_with_logits.md
deleted file mode 100644
index d7a62e2da7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.softmax_cross_entropy_with_logits.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.nn.softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, dim=-1, name=None)` {#softmax_cross_entropy_with_logits}
-
-Computes softmax cross entropy between `logits` and `labels`.
-
-Measures the probability error in discrete classification tasks in which the
-classes are mutually exclusive (each entry is in exactly one class). For
-example, each CIFAR-10 image is labeled with one and only one label: an image
-can be a dog or a truck, but not both.
-
-**NOTE:** While the classes are mutually exclusive, their probabilities
-need not be. All that is required is that each row of `labels` is
-a valid probability distribution. If they are not, the computation of the
-gradient will be incorrect.
-
-If using exclusive `labels` (wherein one and only
-one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.
-
-**WARNING:** This op expects unscaled logits, since it performs a `softmax`
-on `logits` internally for efficiency. Do not call this op with the
-output of `softmax`, as it will produce incorrect results.
-
-`logits` and `labels` must have the same shape `[batch_size, num_classes]`
-and the same dtype (either `float16`, `float32`, or `float64`).
-
-**Note that to avoid confusion, it is required to pass only named arguments to
-this function.**
-
-##### Args:
-
-* <b>`_sentinel`</b>: Used to prevent positional parameters. Internal, do not use.
-
-* <b>`labels`</b>: Each row `labels[i]` must be a valid probability distribution.
-* <b>`logits`</b>: Unscaled log probabilities.
-* <b>`dim`</b>: The class dimension. Defaulted to -1 which is the last dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the
- softmax cross entropy loss.
-
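-A minimal sketch (hypothetical logits and one-hot labels); note the required
-named arguments:
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([[2.0, 1.0, 0.1],
-                      [0.5, 2.5, 0.3]])
-labels = tf.constant([[1.0, 0.0, 0.0],  # each row is a valid distribution
-                      [0.0, 1.0, 0.0]])
-loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
-
-with tf.Session() as sess:
-    print(sess.run(loss))  # one scalar loss per example
-```
-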
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.weighted_moments.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.weighted_moments.md
deleted file mode 100644
index def48d7552..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.nn.weighted_moments.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.nn.weighted_moments(x, axes, frequency_weights, name=None, keep_dims=False)` {#weighted_moments}
-
-Returns the frequency-weighted mean and variance of `x`.
-
-##### Args:
-
-
-* <b>`x`</b>: A tensor.
-* <b>`axes`</b>: 1-D tensor of int32 values; these are the axes along which
- to compute mean and variance.
-* <b>`frequency_weights`</b>: A tensor of positive weights which can be
- broadcast with x.
-* <b>`name`</b>: Name used to scope the operation.
-* <b>`keep_dims`</b>: Produce moments with the same dimensionality as the input.
-
-##### Returns:
-
- Two tensors: `weighted_mean` and `weighted_variance`.
-
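-A small worked sketch (hypothetical values): a frequency weight of 3 makes an
-observation count as if it appeared three times:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1.0, 2.0, 3.0, 4.0])
-weights = tf.constant([1.0, 1.0, 1.0, 3.0])
-mean, variance = tf.nn.weighted_moments(x, axes=[0],
-                                        frequency_weights=weights)
-
-with tf.Session() as sess:
-    # mean = (1 + 2 + 3 + 3*4) / 6 = 3.0
-    # variance = (4 + 1 + 0 + 3*1) / 6 = 1.333...
-    print(sess.run([mean, variance]))
-```
-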
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ones_like.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ones_like.md
deleted file mode 100644
index 5ca57f52a5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.ones_like.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.ones_like(tensor, dtype=None, name=None, optimize=True)` {#ones_like}
-
-Creates a tensor with all elements set to 1.
-
-Given a single tensor (`tensor`), this operation returns a tensor of the same
-type and shape as `tensor` with all elements set to 1. Optionally, you can
-specify a new type (`dtype`) for the returned tensor.
-
-For example:
-
-```python
-# 'tensor' is [[1, 2, 3], [4, 5, 6]]
-tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`.
-* <b>`dtype`</b>: A type for the returned `Tensor`. Must be `float32`, `float64`,
- `int8`, `int16`, `int32`, `int64`, `uint8`, `complex64`, `complex128` or
- `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`optimize`</b>: if true, attempt to statically determine the shape of 'tensor'
- and encode it as a constant.
-
-##### Returns:
-
- A `Tensor` with all elements set to 1.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.polygamma.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.polygamma.md
deleted file mode 100644
index c8b5b2578a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.polygamma.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.polygamma(a, x, name=None)` {#polygamma}
-
-Compute the polygamma function \\(\psi^{(n)}(x)\\).
-
-The polygamma function is defined as:
-
-```
-\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x)
-```
-where \\(\psi(x)\\) is the digamma function.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`x`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md
deleted file mode 100644
index 41cbdda85c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.python_io.TFRecordWriter.md
+++ /dev/null
@@ -1,55 +0,0 @@
-A class to write records to a TFRecords file.
-
-This class implements `__enter__` and `__exit__`, and can be used
-in `with` blocks like a normal file.
-- - -
-
-#### `tf.python_io.TFRecordWriter.__enter__()` {#TFRecordWriter.__enter__}
-
-Enter a `with` block.
-
-
-- - -
-
-#### `tf.python_io.TFRecordWriter.__exit__(unused_type, unused_value, unused_traceback)` {#TFRecordWriter.__exit__}
-
-Exit a `with` block, closing the file.
-
-
-- - -
-
-#### `tf.python_io.TFRecordWriter.__init__(path, options=None)` {#TFRecordWriter.__init__}
-
-Opens file `path` and creates a `TFRecordWriter` writing to it.
-
-##### Args:
-
-
-* <b>`path`</b>: The path to the TFRecords file.
-* <b>`options`</b>: (optional) A TFRecordOptions object.
-
-##### Raises:
-
-
-* <b>`IOError`</b>: If `path` cannot be opened for writing.
-
-
-- - -
-
-#### `tf.python_io.TFRecordWriter.close()` {#TFRecordWriter.close}
-
-Close the file.
-
-
-- - -
-
-#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write}
-
-Write a string record to the file.
-
-##### Args:
-
-
-* <b>`record`</b>: A string to write to the file.
-
-
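-A minimal usage sketch (the path is hypothetical); the `with` block closes
-the file automatically:
-
-```python
-import tensorflow as tf
-
-with tf.python_io.TFRecordWriter('/tmp/data.tfrecords') as writer:
-    for i in range(3):
-        example = tf.train.Example(features=tf.train.Features(feature={
-            'value': tf.train.Feature(
-                int64_list=tf.train.Int64List(value=[i])),
-        }))
-        writer.write(example.SerializeToString())
-```
-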
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_normal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_normal.md
deleted file mode 100644
index 1344423202..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.random_normal.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)` {#random_normal}
-
-Outputs random values from a normal distribution.
-
-##### Args:
-
-
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output tensor.
-* <b>`mean`</b>: A 0-D Tensor or Python value of type `dtype`. The mean of the normal
- distribution.
-* <b>`stddev`</b>: A 0-D Tensor or Python value of type `dtype`. The standard deviation
- of the normal distribution.
-* <b>`dtype`</b>: The type of the output.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distribution.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tensor of the specified shape filled with random normal values.
-
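-A minimal sketch; fixing both the graph-level and the op-level seed makes the
-draws reproducible across runs of the program:
-
-```python
-import tensorflow as tf
-
-tf.set_random_seed(42)
-samples = tf.random_normal([2, 3], mean=0.0, stddev=1.0, seed=7)
-
-with tf.Session() as sess:
-    print(sess.run(samples))  # a fresh 2x3 draw on each call to run()
-```
-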
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reduce_max.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reduce_max.md
deleted file mode 100644
index cea9d70718..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.reduce_max.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.reduce_max(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_max}
-
-Computes the maximum of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.max
-@end_compatibility
-
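-A small sketch of the reduction options (values are hypothetical):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[1, 3],
-                 [2, 6]])
-
-with tf.Session() as sess:
-    print(sess.run(tf.reduce_max(x)))          # 6 (all dimensions reduced)
-    print(sess.run(tf.reduce_max(x, axis=0)))  # [2, 6] (column-wise)
-    print(sess.run(tf.reduce_max(x, axis=1, keep_dims=True)))  # [[3], [6]]
-```
-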
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.rint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.rint.md
deleted file mode 100644
index 91fc557ee6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.rint.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.rint(x, name=None)` {#rint}
-
-Returns element-wise integer closest to x.
-
-If the result is midway between two representable values,
-the even representable value is chosen.
-For example:
-
-```
-rint(-1.5) ==> -2.0
-rint(0.5000001) ==> 1.0
-rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.scatter_nd_update.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.scatter_nd_update.md
deleted file mode 100644
index e7e975cbd3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.scatter_nd_update.md
+++ /dev/null
@@ -1,60 +0,0 @@
-### `tf.scatter_nd_update(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_update}
-
-Applies sparse `updates` to individual values or slices within a given variable according to `indices`.
-
-`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
-
-`indices` must be an integer tensor containing indices into `ref`.
-It must have shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
-dimension of `ref`.
-
-`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
-
-```
-[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
-```
-
-For example, say we want to update 4 scattered elements of a rank-1 tensor
-with 8 elements. In Python, that update would look like this:
-
-    ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
-    indices = tf.constant([[4], [3], [1], [7]])
-    updates = tf.constant([9, 10, 11, 12])
-    update = tf.scatter_nd_update(ref, indices, updates)
-    with tf.Session() as sess:
-      sess.run(tf.global_variables_initializer())
-      print(sess.run(update))
-
-The resulting update to ref would look like this:
-
- [1, 11, 3, 10, 9, 6, 7, 12]
-
-See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to
-slices.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-    A tensor of indices into `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
-    A tensor of updated values to store in `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `True`.
-    If True, the assignment will be protected by a lock; otherwise the
-    behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A mutable `Tensor`. Has the same type as `ref`.
- Same as ref. Returned as a convenience for operations that want to
- use the updated values after the update is done.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.scatter_sub.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.scatter_sub.md
deleted file mode 100644
index 8f1afc42f6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.scatter_sub.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.scatter_sub(ref, indices, updates, use_locking=None, name=None)` {#scatter_sub}
-
-Subtracts sparse updates from a variable reference.
-
- # Scalar indices
- ref[indices, ...] -= updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] -= updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-Duplicate entries are handled correctly: if multiple `indices` reference
-the same location, their (negated) contributions add.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterSub.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of updated values to subtract from `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the subtraction will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
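-A minimal sketch (hypothetical values) showing that duplicate indices
-accumulate:
-
-```python
-import tensorflow as tf
-
-ref = tf.Variable([10, 20, 30, 40])
-indices = tf.constant([1, 1, 3])  # index 1 appears twice
-updates = tf.constant([2, 3, 5])
-sub = tf.scatter_sub(ref, indices, updates)
-
-with tf.Session() as sess:
-    sess.run(tf.global_variables_initializer())
-    print(sess.run(sub))  # [10, 15, 30, 35]
-```
-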
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.segment_prod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.segment_prod.md
deleted file mode 100644
index c1e3e74cf5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.segment_prod.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.segment_prod(data, segment_ids, name=None)` {#segment_prod}
-
-Computes the product along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Computes a tensor such that
-\\(output_i = \prod_j data_j\\) where the product is over `j` such
-that `segment_ids[j] == i`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentProd.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-    A 1-D tensor whose size equals the size of `data`'s
-    first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.segment_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.segment_sum.md
deleted file mode 100644
index be93c31a2e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.segment_sum.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.segment_sum(data, segment_ids, name=None)` {#segment_sum}
-
-Computes the sum along segments of a tensor.
-
-Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation)
-for an explanation of segments.
-
-Computes a tensor such that
-\\(output_i = \sum_j data_j\\) where the sum is over `j` such
-that `segment_ids[j] == i`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentSum.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-    A 1-D tensor whose size equals the size of `data`'s
-    first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
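-A small sketch (hypothetical values); `segment_ids` must be sorted, and each
-output entry sums the `data` entries of one segment:
-
-```python
-import tensorflow as tf
-
-data = tf.constant([5, 1, 7, 2, 3])
-segment_ids = tf.constant([0, 0, 1, 2, 2])
-
-with tf.Session() as sess:
-    print(sess.run(tf.segment_sum(data, segment_ids)))  # [6, 7, 5]
-```
-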
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_reduce_sum.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_reduce_sum.md
deleted file mode 100644
index 4c1e77ac36..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_reduce_sum.md
+++ /dev/null
@@ -1,43 +0,0 @@
-### `tf.sparse_reduce_sum(sp_input, axis=None, keep_dims=False, reduction_axes=None)` {#sparse_reduce_sum}
-
-Computes the sum of elements across dimensions of a SparseTensor.
-
-This Op takes a SparseTensor and is the sparse counterpart to
-`tf.reduce_sum()`. In particular, this Op also returns a dense `Tensor`
-instead of a sparse one.
-
-Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless
-`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in
-`reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained
-with length 1.
-
-If `reduction_axes` has no entries, all dimensions are reduced, and a tensor
-with a single element is returned. Additionally, the axes can be negative,
-similar to the indexing rules in Python.
-
-For example:
-
-```python
-# 'x' represents [[1, ?, 1]
-# [?, 1, ?]]
-# where ? is implicitly-zero.
-tf.sparse_reduce_sum(x) ==> 3
-tf.sparse_reduce_sum(x, 0) ==> [1, 1, 1]
-tf.sparse_reduce_sum(x, 1) ==> [2, 1] # Can also use -1 as the axis.
-tf.sparse_reduce_sum(x, 1, keep_dims=True) ==> [[2], [1]]
-tf.sparse_reduce_sum(x, [0, 1]) ==> 3
-```
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The SparseTensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce; list or scalar. If `None` (the
- default), reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retain reduced dimensions with length 1.
-* <b>`reduction_axes`</b>: Deprecated name of axis.
-
-##### Returns:
-
- The reduced Tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_tensor_dense_matmul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_tensor_dense_matmul.md
deleted file mode 100644
index 27de39cda2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.sparse_tensor_dense_matmul.md
+++ /dev/null
@@ -1,165 +0,0 @@
-### `tf.sparse_tensor_dense_matmul(sp_a, b, adjoint_a=False, adjoint_b=False, name=None)` {#sparse_tensor_dense_matmul}
-
-Multiply SparseTensor (of rank 2) "A" by dense matrix "B".
-
-No validity checking is performed on the indices of A. However, the following
-input format is recommended for optimal behavior:
-
-if adjoint_a == false:
- A should be sorted in lexicographically increasing order. Use
- sparse_reorder if you're not sure.
-if adjoint_a == true:
- A should be sorted in order of increasing dimension 1 (i.e., "column major"
- order instead of "row major" order).
-
-Deciding when to use sparse_tensor_dense_matmul vs. matmul(a_is_sparse=True):
-
-There are a number of questions to ask in the decision process, including:
-
-* Will the SparseTensor A fit in memory if densified?
-* Is the column count of the product large (>> 1)?
-* Is the density of A larger than approximately 15%?
-
-If the answer to several of these questions is yes, consider
-converting the `SparseTensor` to a dense one and using `tf.matmul` with
-`a_is_sparse=True`.
-
-This operation tends to perform well when A is more sparse, when the column
-size of the product is small (e.g. matrix-vector multiplication), and when
-`sp_a.dense_shape` takes on large values.
-
-Below is a rough speed comparison between sparse_tensor_dense_matmul,
-labelled 'sparse', and matmul(a_is_sparse=True), labelled 'dense'. For purposes of
-the comparison, the time spent converting from a SparseTensor to a dense
-Tensor is not included, so it is overly conservative with respect to
-the time ratio.
-
-Benchmark system:
-CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB
-GPU: NVidia Tesla k40c
-
-Compiled with:
-`-c opt --config=cuda --copt=-mavx`
-
-```
-tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks
-A sparse [m, k] with % nonzero values between 1% and 80%
-B dense [k, n]
-
-% nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense)
-0.01 1 True 100 100 0.000221166 0.00010154 0.459112
-0.01 1 True 100 1000 0.00033858 0.000109275 0.322745
-0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385
-0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669
-0.01 1 False 100 100 0.000208085 0.000107603 0.51711
-0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762
-0.01 1 False 1000 100 0.000308222 0.00010345 0.335635
-0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124
-0.01 10 True 100 100 0.000218522 0.000105537 0.482958
-0.01 10 True 100 1000 0.000340882 0.000111641 0.327506
-0.01 10 True 1000 100 0.000315472 0.000117376 0.372064
-0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128
-0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354
-0.01 10 False 100 1000 0.000330552 0.000112615 0.340687
-0.01 10 False 1000 100 0.000341277 0.000114097 0.334324
-0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549
-0.01 25 True 100 100 0.000207806 0.000105977 0.509981
-0.01 25 True 100 1000 0.000322879 0.00012921 0.400181
-0.01 25 True 1000 100 0.00038262 0.00014158 0.370035
-0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504
-0.01 25 False 100 100 0.000209401 0.000104696 0.499979
-0.01 25 False 100 1000 0.000321161 0.000130737 0.407076
-0.01 25 False 1000 100 0.000377012 0.000136801 0.362856
-0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413
-0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833
-0.2 1 True 100 1000 0.000348674 0.000147475 0.422959
-0.2 1 True 1000 100 0.000336908 0.00010122 0.300439
-0.2 1 True 1000 1000 0.001022 0.000203274 0.198898
-0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746
-0.2 1 False 100 1000 0.000356127 0.000146824 0.41228
-0.2 1 False 1000 100 0.000322664 0.000100918 0.312764
-0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648
-0.2 10 True 100 100 0.000211692 0.000109903 0.519165
-0.2 10 True 100 1000 0.000372819 0.000164321 0.440753
-0.2 10 True 1000 100 0.000338651 0.000144806 0.427596
-0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064
-0.2 10 False 100 100 0.000215727 0.000110502 0.512231
-0.2 10 False 100 1000 0.000375419 0.0001613 0.429653
-0.2 10 False 1000 100 0.000336999 0.000145628 0.432132
-0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618
-0.2 25 True 100 100 0.000218705 0.000129913 0.594009
-0.2 25 True 100 1000 0.000394794 0.00029428 0.745402
-0.2 25 True 1000 100 0.000404483 0.0002693 0.665788
-0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052
-0.2 25 False 100 100 0.000221494 0.0001306 0.589632
-0.2 25 False 100 1000 0.000396436 0.000297204 0.74969
-0.2 25 False 1000 100 0.000409346 0.000270068 0.659754
-0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046
-0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836
-0.5 1 True 100 1000 0.000415328 0.000223073 0.537101
-0.5 1 True 1000 100 0.000358324 0.00011269 0.314492
-0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851
-0.5 1 False 100 100 0.000224196 0.000101423 0.452386
-0.5 1 False 100 1000 0.000400987 0.000223286 0.556841
-0.5 1 False 1000 100 0.000368825 0.00011224 0.304318
-0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563
-0.5 10 True 100 100 0.000222125 0.000112308 0.505608
-0.5 10 True 100 1000 0.000461088 0.00032357 0.701753
-0.5 10 True 1000 100 0.000394624 0.000225497 0.571422
-0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801
-0.5 10 False 100 100 0.000232083 0.000114978 0.495418
-0.5 10 False 100 1000 0.000454574 0.000324632 0.714146
-0.5 10 False 1000 100 0.000379097 0.000227768 0.600817
-0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638
-0.5 25 True 100 100 0.00023429 0.000151703 0.647501
-0.5 25 True 100 1000 0.000497462 0.000598873 1.20386
-0.5 25 True 1000 100 0.000460778 0.000557038 1.20891
-0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845
-0.5 25 False 100 100 0.000228981 0.000155334 0.678371
-0.5 25 False 100 1000 0.000496139 0.000620789 1.25124
-0.5 25 False 1000 100 0.00045473 0.000551528 1.21287
-0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927
-0.8 1 True 100 100 0.000222037 0.000105301 0.47425
-0.8 1 True 100 1000 0.000410804 0.000329327 0.801664
-0.8 1 True 1000 100 0.000349735 0.000131225 0.375212
-0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633
-0.8 1 False 100 100 0.000214079 0.000107486 0.502085
-0.8 1 False 100 1000 0.000413746 0.000323244 0.781261
-0.8 1 False 1000 100 0.000348983 0.000131983 0.378193
-0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282
-0.8 10 True 100 100 0.000229159 0.00011825 0.516017
-0.8 10 True 100 1000 0.000498845 0.000532618 1.0677
-0.8 10 True 1000 100 0.000383126 0.00029935 0.781336
-0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689
-0.8 10 False 100 100 0.000230783 0.000124958 0.541452
-0.8 10 False 100 1000 0.000493393 0.000550654 1.11606
-0.8 10 False 1000 100 0.000377167 0.000298581 0.791642
-0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024
-0.8 25 True 100 100 0.000233496 0.000175241 0.75051
-0.8 25 True 100 1000 0.00055654 0.00102658 1.84458
-0.8 25 True 1000 100 0.000463814 0.000783267 1.68875
-0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132
-0.8 25 False 100 100 0.000240243 0.000175047 0.728625
-0.8 25 False 100 1000 0.000578102 0.00104499 1.80763
-0.8 25 False 1000 100 0.000485113 0.000776849 1.60138
-0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992
-```
-
-##### Args:
-
-
-* <b>`sp_a`</b>: SparseTensor A, of rank 2.
-* <b>`b`</b>: A dense Matrix with the same dtype as sp_a.
-* <b>`adjoint_a`</b>: Use the adjoint of A in the matrix multiply. If A is complex,
- this is transpose(conj(A)). Otherwise it's transpose(A).
-* <b>`adjoint_b`</b>: Use the adjoint of B in the matrix multiply. If B is complex,
- this is transpose(conj(B)). Otherwise it's transpose(B).
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A dense matrix (pseudo-code in dense np.matrix notation):
- A = A.H if adjoint_a else A
- B = B.H if adjoint_b else B
- return A*B
-
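-For example, a minimal sketch of a sparse-times-dense product (the matrix
-values are illustrative):
-
-```python
-import tensorflow as tf
-
-# A 2x3 sparse matrix with two nonzero entries.
-sp_a = tf.SparseTensor(indices=[[0, 0], [1, 2]],
-                       values=[1.0, 2.0],
-                       dense_shape=[2, 3])
-b = tf.constant([[1.0], [2.0], [3.0]])  # dense, shape [3, 1]
-
-product = tf.sparse_tensor_dense_matmul(sp_a, b)  # dense, shape [2, 1]
-
-with tf.Session() as sess:
-    print(sess.run(product))  # => [[1.], [6.]]
-```
-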
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.square.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.square.md
deleted file mode 100644
index 940154968f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.square.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.square(x, name=None)` {#square}
-
-Computes square of x element-wise.
-
-I.e., \(y = x * x = x^2\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`. Has the same type as `x`.
-
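-For example, a minimal sketch:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([-2.0, 3.0])
-y = tf.square(x)
-
-with tf.Session() as sess:
-    print(sess.run(y))  # => [4. 9.]
-```
-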
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.string_to_hash_bucket.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.string_to_hash_bucket.md
deleted file mode 100644
index 1b818c2d3b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.string_to_hash_bucket.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.string_to_hash_bucket(string_tensor, num_buckets, name=None)` {#string_to_hash_bucket}
-
-Converts each string in the input Tensor to its hash modulo the number of buckets.
-
-The hash function is deterministic on the content of the string within the
-process.
-
-Note that the hash function may change from time to time.
-This functionality will be deprecated; it is recommended to use
-`tf.string_to_hash_bucket_fast()` or `tf.string_to_hash_bucket_strong()` instead.
-
-##### Args:
-
-
-* <b>`string_tensor`</b>: A `Tensor` of type `string`.
-* <b>`num_buckets`</b>: An `int` that is `>= 1`. The number of buckets.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
- A Tensor of the same shape as the input `string_tensor`.
-
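-For example, a minimal sketch (the bucket assignments shown are illustrative,
-since they depend on the hash function):
-
-```python
-import tensorflow as tf
-
-strings = tf.constant(["Hello", "TensorFlow"])
-buckets = tf.string_to_hash_bucket(strings, num_buckets=10)
-
-with tf.Session() as sess:
-    print(sess.run(buckets))  # e.g. => [8 0]
-```
-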
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md
deleted file mode 100644
index 526e408fba..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriter.md
+++ /dev/null
@@ -1,209 +0,0 @@
-Writes `Summary` protocol buffers to event files.
-
-The `FileWriter` class provides a mechanism to create an event file in a
-given directory and add summaries and events to it. The class updates the
-file contents asynchronously. This allows a training program to call methods
-to add data to the file directly from the training loop, without slowing down
-training.
-- - -
-
-#### `tf.summary.FileWriter.__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None)` {#FileWriter.__init__}
-
-Creates a `FileWriter` and an event file.
-
-On construction the summary writer creates a new event file in `logdir`.
-This event file will contain `Event` protocol buffers constructed when you
-call one of the following functions: `add_summary()`, `add_session_log()`,
-`add_event()`, or `add_graph()`.
-
-If you pass a `Graph` to the constructor it is added to
-the event file. (This is equivalent to calling `add_graph()` later).
-
-TensorBoard will pick the graph from the file and display it graphically so
-you can interactively explore the graph you built. You will usually pass
-the graph from the session in which you launched it:
-
-```python
-...create a graph...
-# Launch the graph in a session.
-sess = tf.Session()
-# Create a summary writer, add the 'graph' to the event file.
-writer = tf.summary.FileWriter(<some-directory>, sess.graph)
-```
-
-The other arguments to the constructor control the asynchronous writes to
-the event file:
-
-* `flush_secs`: How often, in seconds, to flush the added summaries
- and events to disk.
-* `max_queue`: Maximum number of summaries or events pending to be
- written to disk before one of the 'add' calls blocks.
-
-##### Args:
-
-
-* <b>`logdir`</b>: A string. Directory where event file will be written.
-* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
-* <b>`max_queue`</b>: Integer. Size of the queue for pending events and summaries.
-* <b>`flush_secs`</b>: Number. How often, in seconds, to flush the
- pending events and summaries to disk.
-* <b>`graph_def`</b>: DEPRECATED: Use the `graph` argument instead.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_event(event)` {#FileWriter.add_event}
-
-Adds an event to the event file.
-
-##### Args:
-
-
-* <b>`event`</b>: An `Event` protocol buffer.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_graph(graph, global_step=None, graph_def=None)` {#FileWriter.add_graph}
-
-Adds a `Graph` to the event file.
-
-The graph described by the protocol buffer will be displayed by
-TensorBoard. Most users pass a graph in the constructor instead.
-
-##### Args:
-
-
-* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
-* <b>`global_step`</b>: Number. Optional global step counter to record with the
- graph.
-* <b>`graph_def`</b>: DEPRECATED. Use the `graph` parameter instead.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both graph and graph_def are passed to the method.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_meta_graph(meta_graph_def, global_step=None)` {#FileWriter.add_meta_graph}
-
-Adds a `MetaGraphDef` to the event file.
-
-The `MetaGraphDef` allows running the given graph via
-`saver.import_meta_graph()`.
-
-##### Args:
-
-
-* <b>`meta_graph_def`</b>: A `MetaGraphDef` object, often as returned by
- `saver.export_meta_graph()`.
-* <b>`global_step`</b>: Number. Optional global step counter to record with the
- graph.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `meta_graph_def` is not an instance of `MetaGraphDef`.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_run_metadata(run_metadata, tag, global_step=None)` {#FileWriter.add_run_metadata}
-
-Adds metadata information for a single `session.run()` call.
-
-##### Args:
-
-
-* <b>`run_metadata`</b>: A `RunMetadata` protobuf object.
-* <b>`tag`</b>: The tag name for this metadata.
-* <b>`global_step`</b>: Number. Optional global step counter to record with the
- StepStats.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the provided tag was already used for this type of event.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_session_log(session_log, global_step=None)` {#FileWriter.add_session_log}
-
-Adds a `SessionLog` protocol buffer to the event file.
-
-This method wraps the provided session log in an `Event` protocol buffer
-and adds it to the event file.
-
-##### Args:
-
-
-* <b>`session_log`</b>: A `SessionLog` protocol buffer.
-* <b>`global_step`</b>: Number. Optional global step value to record with the
- summary.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_summary(summary, global_step=None)` {#FileWriter.add_summary}
-
-Adds a `Summary` protocol buffer to the event file.
-
-This method wraps the provided summary in an `Event` protocol buffer
-and adds it to the event file.
-
-You can pass the result of evaluating any summary op, using
-[`Session.run()`](client.md#Session.run) or
-[`Tensor.eval()`](framework.md#Tensor.eval), to this
-function. Alternatively, you can pass a `tf.Summary` protocol
-buffer that you populate with your own data. The latter is
-commonly done to report evaluation results in event files.
-
-##### Args:
-
-
-* <b>`summary`</b>: A `Summary` protocol buffer, optionally serialized as a string.
-* <b>`global_step`</b>: Number. Optional global step value to record with the
- summary.
-
-
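-A minimal sketch of the common pattern (the log directory and summary tag
-are illustrative):
-
-```python
-import tensorflow as tf
-
-loss = tf.placeholder(tf.float32, name="loss")
-summary_op = tf.summary.scalar("loss", loss)
-
-with tf.Session() as sess:
-    writer = tf.summary.FileWriter("/tmp/logs", sess.graph)
-    for step in range(100):
-        # Evaluate the summary op and write the serialized result.
-        summ = sess.run(summary_op, feed_dict={loss: 1.0 / (step + 1)})
-        writer.add_summary(summ, global_step=step)
-    writer.close()
-```
-
-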
-- - -
-
-#### `tf.summary.FileWriter.close()` {#FileWriter.close}
-
-Flushes the event file to disk and closes the file.
-
-Call this method when you do not need the summary writer anymore.
-
-
-- - -
-
-#### `tf.summary.FileWriter.flush()` {#FileWriter.flush}
-
-Flushes the event file to disk.
-
-Call this method to make sure that all pending events have been written to
-disk.
-
-
-- - -
-
-#### `tf.summary.FileWriter.get_logdir()` {#FileWriter.get_logdir}
-
-Returns the directory where event file will be written.
-
-
-- - -
-
-#### `tf.summary.FileWriter.reopen()` {#FileWriter.reopen}
-
-Reopens the EventFileWriter.
-
-Can be called after `close()` to add more events in the same directory.
-The events will go into a new events file.
-
-Does nothing if the EventFileWriter was not closed.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriterCache.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriterCache.md
deleted file mode 100644
index 3c6c8773b3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.summary.FileWriterCache.md
+++ /dev/null
@@ -1,26 +0,0 @@
-Cache for file writers.
-
-This class caches file writers, one per directory.
-- - -
-
-#### `tf.summary.FileWriterCache.clear()` {#FileWriterCache.clear}
-
-Clear cached summary writers. Currently only used for unit tests.
-
-
-- - -
-
-#### `tf.summary.FileWriterCache.get(logdir)` {#FileWriterCache.get}
-
-Returns the FileWriter for the specified directory.
-
-##### Args:
-
-
-* <b>`logdir`</b>: str, name of the directory.
-
-##### Returns:
-
- A `FileWriter`.
-
-
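-A minimal sketch (the directory is illustrative); repeated lookups for the
-same directory return the same cached instance:
-
-```python
-import tensorflow as tf
-
-w1 = tf.summary.FileWriterCache.get("/tmp/logs")
-w2 = tf.summary.FileWriterCache.get("/tmp/logs")
-assert w1 is w2
-```
-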
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.MomentumOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.MomentumOptimizer.md
deleted file mode 100644
index 810f802c25..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.MomentumOptimizer.md
+++ /dev/null
@@ -1,21 +0,0 @@
-Optimizer that implements the Momentum algorithm.
-
-- - -
-
-#### `tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum', use_nesterov=False)` {#MomentumOptimizer.__init__}
-
-Construct a new Momentum optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`momentum`</b>: A `Tensor` or a floating point value. The momentum.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "Momentum".
-* <b>`use_nesterov`</b>: If `True` use Nesterov Momentum. See
- [Sutskever et al., 2013](http://jmlr.org/proceedings/papers/v28/sutskever13.pdf).
-
-
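-A minimal sketch minimizing a toy quadratic loss (the hyperparameters are
-illustrative):
-
-```python
-import tensorflow as tf
-
-w = tf.Variable(5.0)
-loss = tf.square(w)
-train_op = tf.train.MomentumOptimizer(
-    learning_rate=0.1, momentum=0.9).minimize(loss)
-
-with tf.Session() as sess:
-    sess.run(tf.global_variables_initializer())
-    for _ in range(10):
-        sess.run(train_op)
-    print(sess.run(w))  # driven toward the minimum at 0.0
-```
-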
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.ProximalGradientDescentOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.ProximalGradientDescentOptimizer.md
deleted file mode 100644
index ee14fe89df..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.ProximalGradientDescentOptimizer.md
+++ /dev/null
@@ -1,24 +0,0 @@
-Optimizer that implements the proximal gradient descent algorithm.
-
-See this [paper](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf).
-
-- - -
-
-#### `tf.train.ProximalGradientDescentOptimizer.__init__(learning_rate, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='ProximalGradientDescent')` {#ProximalGradientDescentOptimizer.__init__}
-
-Construct a new proximal gradient descent optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning
- rate to use.
-* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`use_locking`</b>: If True use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "ProximalGradientDescent".
-
-
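-A minimal sketch (the hyperparameters are illustrative); the L1 term drives
-small weights toward exactly zero:
-
-```python
-import tensorflow as tf
-
-w = tf.Variable([3.0, -4.0])
-loss = tf.reduce_sum(tf.square(w))
-train_op = tf.train.ProximalGradientDescentOptimizer(
-    learning_rate=0.1, l1_regularization_strength=0.01).minimize(loss)
-
-with tf.Session() as sess:
-    sess.run(tf.global_variables_initializer())
-    for _ in range(20):
-        sess.run(train_op)
-    print(sess.run(w))
-```
-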
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SessionRunValues.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SessionRunValues.md
deleted file mode 100644
index 7856c8bf92..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.SessionRunValues.md
+++ /dev/null
@@ -1,66 +0,0 @@
-Contains the results of `Session.run()`.
-
-In the future we may use this object to add more information about result of
-run without changing the Hook API.
-
-Args:
- results: The return values from `Session.run()` corresponding to the fetches
- entry returned in the `SessionRunArgs`. Note that this has the same
- structure as the `SessionRunArgs` fetches. For example:
- fetches = global_step_tensor
- => results = nparray(int)
- fetches = [train_op, summary_op, global_step_tensor]
- => results = [None, nparray(string), nparray(int)]
- fetches = {'step': global_step_tensor, 'summ': summary_op}
- => results = {'step': nparray(int), 'summ': nparray(string)}
- options: `RunOptions` from the `Session.run()` call.
- run_metadata: `RunMetadata` from the `Session.run()` call.
-- - -
-
-#### `tf.train.SessionRunValues.__getnewargs__()` {#SessionRunValues.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.train.SessionRunValues.__getstate__()` {#SessionRunValues.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.train.SessionRunValues.__new__(_cls, results, options, run_metadata)` {#SessionRunValues.__new__}
-
-Create new instance of SessionRunValues(results, options, run_metadata)
-
-
-- - -
-
-#### `tf.train.SessionRunValues.__repr__()` {#SessionRunValues.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.train.SessionRunValues.options` {#SessionRunValues.options}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.train.SessionRunValues.results` {#SessionRunValues.results}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.train.SessionRunValues.run_metadata` {#SessionRunValues.run_metadata}
-
-Alias for field number 2
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.export_meta_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.export_meta_graph.md
deleted file mode 100644
index dd31819759..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.export_meta_graph.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.train.export_meta_graph(filename=None, meta_info_def=None, graph_def=None, saver_def=None, collection_list=None, as_text=False, graph=None, export_scope=None, clear_devices=False, **kwargs)` {#export_meta_graph}
-
-Returns `MetaGraphDef` proto. Optionally writes it to filename.
-
-This function exports the graph, saver, and collection objects into a
-`MetaGraphDef` protocol buffer with the intention of it being imported
-at a later time or location to restart training, run inference, or be
-a subgraph.
-
-##### Args:
-
-
-* <b>`filename`</b>: Optional filename including the path for writing the
- generated `MetaGraphDef` protocol buffer.
-* <b>`meta_info_def`</b>: `MetaInfoDef` protocol buffer.
-* <b>`graph_def`</b>: `GraphDef` protocol buffer.
-* <b>`saver_def`</b>: `SaverDef` protocol buffer.
-* <b>`collection_list`</b>: List of string keys to collect.
-* <b>`as_text`</b>: If `True`, writes the `MetaGraphDef` as an ASCII proto.
-* <b>`graph`</b>: The `Graph` to import into. If `None`, use the default graph.
-* <b>`export_scope`</b>: Optional `string`. Name scope under which to extract
- the subgraph. The scope name will be stripped from the node definitions
- for easy import later into new name scopes. If `None`, the whole graph
- is exported. graph_def and export_scope cannot both be specified.
-* <b>`clear_devices`</b>: Whether or not to clear the device field for an `Operation`
- or `Tensor` during export.
-* <b>`**kwargs`</b>: Optional keyed arguments.
-
-##### Returns:
-
- A `MetaGraphDef` proto.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When the `GraphDef` is larger than 2GB.
-
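-A minimal sketch (the paths are illustrative):
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(1.0, name="v")
-saver = tf.train.Saver()
-
-with tf.Session() as sess:
-    sess.run(tf.global_variables_initializer())
-    saver.save(sess, "/tmp/model.ckpt")
-    # Export the default graph (and saver) for later import.
-    meta_graph_def = tf.train.export_meta_graph(filename="/tmp/model.meta")
-```
-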
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.get_checkpoint_state.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.get_checkpoint_state.md
deleted file mode 100644
index 8963539605..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.get_checkpoint_state.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None)` {#get_checkpoint_state}
-
-Returns CheckpointState proto from the "checkpoint" file.
-
-If the "checkpoint" file contains a valid CheckpointState
-proto, returns it.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: The directory of checkpoints.
-* <b>`latest_filename`</b>: Optional name of the checkpoint file. Default to
- 'checkpoint'.
-
-##### Returns:
-
- A CheckpointState if the state was available, None
- otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the checkpoint read doesn't have model_checkpoint_path set.
-
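-A minimal sketch (the directory is illustrative):
-
-```python
-import tensorflow as tf
-
-ckpt = tf.train.get_checkpoint_state("/tmp/train_dir")
-if ckpt and ckpt.model_checkpoint_path:
-    print("Latest checkpoint:", ckpt.model_checkpoint_path)
-else:
-    print("No checkpoint found.")
-```
-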
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.get_global_step.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.get_global_step.md
deleted file mode 100644
index 7ccb41889f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.get_global_step.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.train.get_global_step(graph=None)` {#get_global_step}
-
-Get the global step tensor.
-
-The global step tensor must be an integer variable. We first try to find it
-in the collection `GLOBAL_STEP`, or by name `global_step:0`.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph to find the global step in. If missing, use default graph.
-
-##### Returns:
-
- The global step variable, or `None` if none was found.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the global step tensor has a non-integer type, or if it is not
- a `Variable`.
-
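-A minimal sketch (the variable name follows the convention above):
-
-```python
-import tensorflow as tf
-
-# Create a global step variable and register it under the standard key.
-global_step = tf.Variable(0, trainable=False, name="global_step")
-tf.add_to_collection(tf.GraphKeys.GLOBAL_STEP, global_step)
-
-# Later code can recover it without threading it through every function.
-step = tf.train.get_global_step()
-```
-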
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.inverse_time_decay.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.inverse_time_decay.md
deleted file mode 100644
index fe85cb1b12..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.train.inverse_time_decay.md
+++ /dev/null
@@ -1,56 +0,0 @@
-### `tf.train.inverse_time_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#inverse_time_decay}
-
-Applies inverse time decay to the initial learning rate.
-
-When training a model, it is often recommended to lower the learning rate as
-the training progresses. This function applies an inverse decay function
-to a provided initial learning rate. It requires a `global_step` value to
-compute the decayed learning rate. You can just pass a TensorFlow variable
-that you increment at each training step.
-
-The function returns the decayed learning rate. It is computed as:
-
-```python
-decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_steps)
-```
-
-Example: decay 1/t with a rate of 0.5:
-
-```python
-...
-global_step = tf.Variable(0, trainable=False)
-learning_rate = 0.1
-decay_steps = 1.0
-decay_rate = 0.5
-learning_rate = tf.train.inverse_time_decay(learning_rate, global_step,
-                                            decay_steps, decay_rate)
-
-# Passing global_step to minimize() will increment it at each step.
-learning_step = (
- tf.train.GradientDescentOptimizer(learning_rate)
- .minimize(...my loss..., global_step=global_step)
-)
-```
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The initial learning rate.
-* <b>`global_step`</b>: A Python number.
- Global step to use for the decay computation. Must not be negative.
-* <b>`decay_steps`</b>: How often to apply decay.
-* <b>`decay_rate`</b>: A Python number. The decay rate.
-* <b>`staircase`</b>: Whether to apply decay in a discrete staircase, as opposed to
- continuous, fashion.
-* <b>`name`</b>: String. Optional name of the operation. Defaults to
- 'InverseTimeDecay'.
-
-##### Returns:
-
- A scalar `Tensor` of the same type as `learning_rate`. The decayed
- learning rate.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `global_step` is not supplied.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.truediv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.truediv.md
deleted file mode 100644
index 7a0c7a4aac..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.truediv.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.truediv(x, y, name=None)` {#truediv}
-
-Divides x / y elementwise (using Python 3 division operator semantics).
-
-NOTE: Prefer using the Tensor operator or tf.divide which obey Python
-division operator semantics.
-
-This function forces Python 3 division operator semantics where all integer
-arguments are cast to floating types first. This op is generated by normal
-`x / y` division in Python 3 and in Python 2.7 with
-`from __future__ import division`. If you want integer division that rounds
-down, use `x // y` or `tf.floordiv`.
-
-`x` and `y` must have the same numeric type. If the inputs are floating
-point, the output will have the same type. If the inputs are integral, the
-inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32`
-and `int64` (matching the behavior of Numpy).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of numeric type.
-* <b>`y`</b>: `Tensor` denominator of numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` evaluated in floating point.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` and `y` have different dtypes.
-
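-A minimal sketch showing the integer-to-float cast:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1, 2, 3])  # int32
-y = tf.constant([2, 2, 2])  # int32
-z = tf.truediv(x, y)        # cast to float64 before dividing
-
-with tf.Session() as sess:
-    print(sess.run(z))  # => [0.5 1.  1.5]
-```
-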
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.unique.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.unique.md
deleted file mode 100644
index 5b9bc642c8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.unique.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.unique(x, out_idx=None, name=None)` {#unique}
-
-Finds unique elements in a 1-D tensor.
-
-This operation returns a tensor `y` containing all of the unique elements of `x`
-sorted in the same order that they occur in `x`. This operation also returns a
-tensor `idx` the same size as `x` that contains the index of each value of `x`
-in the unique output `y`. In other words:
-
-`y[idx[i]] = x[i] for i in [0, 1, ..., len(x) - 1]`
-
-For example:
-
-```prettyprint
-# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
-y, idx = unique(x)
-y ==> [1, 2, 4, 7, 8]
-idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. 1-D.
-* <b>`out_idx`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (y, idx).
-
-* <b>`y`</b>: A `Tensor`. Has the same type as `x`. 1-D.
-* <b>`idx`</b>: A `Tensor` of type `out_idx`. 1-D.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.variable_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.variable_scope.md
deleted file mode 100644
index 2bf61a0190..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.variable_scope.md
+++ /dev/null
@@ -1,100 +0,0 @@
-### `tf.variable_scope(name_or_scope, default_name=None, values=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, reuse=None, dtype=None, use_resource=None)` {#variable_scope}
-
-Returns a context manager for defining ops that create variables (layers).
-
-This context manager validates that the (optional) `values` are from
-the same graph, ensures that graph is the default graph, and pushes a
-name scope and a variable scope.
-
-If `name_or_scope` is not None, it is used as is. If `name_or_scope` is None,
-then `default_name` is used. In that case, if the same name has been
-previously used in the same scope, it will be made unique by appending `_N`
-to it.
-
-Variable scope allows you to create new variables and to share already
-created ones, while providing checks against accidental creation or sharing.
-For details, see the [Variable Scope How To](../../how_tos/variable_scope/index.md);
-here we present only a few basic examples.
-
-Simple example of how to create a new variable:
-
-```python
-with tf.variable_scope("foo"):
- with tf.variable_scope("bar"):
- v = tf.get_variable("v", [1])
- assert v.name == "foo/bar/v:0"
-```
-
-Basic example of sharing a variable:
-
-```python
-with tf.variable_scope("foo"):
- v = tf.get_variable("v", [1])
-with tf.variable_scope("foo", reuse=True):
- v1 = tf.get_variable("v", [1])
-assert v1 == v
-```
-
-Sharing a variable by capturing a scope and setting reuse:
-
-```python
-with tf.variable_scope("foo") as scope:
- v = tf.get_variable("v", [1])
- scope.reuse_variables()
- v1 = tf.get_variable("v", [1])
-assert v1 == v
-```
-
-To prevent accidental sharing of variables, we raise an exception when
-getting an existing variable in a non-reusing scope.
-
-```python
-with tf.variable_scope("foo"):
- v = tf.get_variable("v", [1])
- v1 = tf.get_variable("v", [1])
- # Raises ValueError("... v already exists ...").
-```
-
-Similarly, we raise an exception when trying to get a variable that
-does not exist in reuse mode.
-
-```python
-with tf.variable_scope("foo", reuse=True):
- v = tf.get_variable("v", [1])
- # Raises ValueError("... v does not exist ...").
-```
-
-Note that the `reuse` flag is inherited: if we open a reusing scope,
-then all its sub-scopes become reusing as well.
-
-##### Args:
-
-
-* <b>`name_or_scope`</b>: `string` or `VariableScope`: the scope to open.
-* <b>`default_name`</b>: The default name to use if the `name_or_scope` argument is
- `None`; this name will be uniquified. If `name_or_scope` is provided, it
- won't be used, and therefore it is not required and can be `None`.
-* <b>`values`</b>: The list of `Tensor` arguments that are passed to the op function.
-* <b>`initializer`</b>: default initializer for variables within this scope.
-* <b>`regularizer`</b>: default regularizer for variables within this scope.
-* <b>`caching_device`</b>: default caching device for variables within this scope.
-* <b>`partitioner`</b>: default partitioner for variables within this scope.
-* <b>`custom_getter`</b>: default custom getter for variables within this scope.
-* <b>`reuse`</b>: `True` or `None`; if `True`, we go into reuse mode for this scope as
- well as all sub-scopes; if `None`, we just inherit the parent scope reuse.
-* <b>`dtype`</b>: type of variables created in this scope (defaults to the type
- in the passed scope, or inherited from parent scope).
-* <b>`use_resource`</b>: If False, all variables will be regular Variables. If True,
- experimental ResourceVariables with well-defined semantics will be used
- instead. Defaults to False (will later change to True).
-
-##### Returns:
-
- A scope that can be captured and reused.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: when trying to reuse within a create scope, or create within
- a reuse scope, or if reuse is not `None` or `True`.
-* <b>`TypeError`</b>: when the types of some arguments are not appropriate.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.where.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.where.md
deleted file mode 100644
index 8aaf2e1463..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard7/tf.where.md
+++ /dev/null
@@ -1,47 +0,0 @@
-### `tf.where(condition, x=None, y=None, name=None)` {#where}
-
-Return the elements, either from `x` or `y`, depending on the `condition`.
-
-If both `x` and `y` are None, then this operation returns the coordinates of
-true elements of `condition`. The coordinates are returned in a 2-D tensor
-where the first dimension (rows) represents the number of true elements, and
-the second dimension (columns) represents the coordinates of the true
-elements. Keep in mind, the shape of the output tensor can vary depending on
-how many true values there are in the input. Indices are output in row-major
-order.
-
-If both are non-None, `x` and `y` must have the same shape.
-The `condition` tensor must be a scalar if `x` and `y` are scalar.
-If `x` and `y` are vectors or higher rank, then `condition` must be either a
-vector with size matching the first dimension of `x`, or must have the same
-shape as `x`.
-
-The `condition` tensor acts as a mask that chooses, based on the value at each
-element, whether the corresponding element / row in the output should be taken
-from `x` (if true) or `y` (if false).
-
-If `condition` is a vector and `x` and `y` are higher rank matrices, then it
-chooses which row (outer dimension) to copy from `x` and `y`. If `condition`
-has the same shape as `x` and `y`, then it chooses which element to copy from
-`x` and `y`.
-
-##### Args:
-
-
-* <b>`condition`</b>: A `Tensor` of type `bool`
-* <b>`x`</b>: A Tensor which may have the same shape as `condition`. If `condition` is
- rank 1, `x` may have higher rank, but its first dimension must match the
- size of `condition`.
-* <b>`y`</b>: A `tensor` with the same shape and type as `x`.
-* <b>`name`</b>: A name of the operation (optional)
-
-##### Returns:
-
- If `x` and `y` are non-None: a `Tensor` with the same type and shape as `x`.
- If both are None: a `Tensor` with shape `(num_true, dim_size(condition))`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When exactly one of `x` or `y` is non-None.
-
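-A minimal sketch covering both modes:
-
-```python
-import tensorflow as tf
-
-condition = tf.constant([True, False, True])
-x = tf.constant([1, 2, 3])
-y = tf.constant([10, 20, 30])
-
-with tf.Session() as sess:
-    # Element selection: take from x where True, from y where False.
-    print(sess.run(tf.where(condition, x, y)))  # => [ 1 20  3]
-    # Coordinate mode: indices of the True elements.
-    print(sess.run(tf.where(condition)))        # => [[0] [2]]
-```
-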
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.FixedLenSequenceFeature.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.FixedLenSequenceFeature.__new__.md
deleted file mode 100644
index 33babc9edd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.FixedLenSequenceFeature.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.FixedLenSequenceFeature.__new__(_cls, shape, dtype, allow_missing=False)` {#FixedLenSequenceFeature.__new__}
-
-Create new instance of FixedLenSequenceFeature(shape, dtype, allow_missing)
-
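-A minimal sketch of constructing the configuration (the field choices are
-illustrative):
-
-```python
-import tensorflow as tf
-
-# A variable-length sequence of int64 scalars; allow_missing=True
-# tolerates examples in which the feature is absent.
-feature = tf.FixedLenSequenceFeature(shape=[], dtype=tf.int64,
-                                     allow_missing=True)
-```
-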
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.GraphKeys.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.GraphKeys.md
deleted file mode 100644
index 74b46140d2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.GraphKeys.md
+++ /dev/null
@@ -1,44 +0,0 @@
-Standard names to use for graph collections.
-
-The standard library uses various well-known names to collect and
-retrieve values associated with a graph. For example, the
-`tf.Optimizer` subclasses default to optimizing the variables
-collected under `tf.GraphKeys.TRAINABLE_VARIABLES` if none is
-specified, but it is also possible to pass an explicit list of
-variables.
-
-The following standard keys are defined:
-
-* `GLOBAL_VARIABLES`: the default collection of `Variable` objects, shared
- across a distributed environment (model variables are a subset of these). See
- [`tf.global_variables()`](../../api_docs/python/state_ops.md#global_variables)
- for more details.
- Commonly, all `TRAINABLE_VARIABLES` variables will be in `MODEL_VARIABLES`,
- and all `MODEL_VARIABLES` variables will be in `GLOBAL_VARIABLES`.
-* `LOCAL_VARIABLES`: the subset of `Variable` objects that are local to each
- machine. Usually used for temporary variables, like counters.
- Note: use `tf.contrib.framework.local_variable` to add to this collection.
-* `MODEL_VARIABLES`: the subset of `Variable` objects that are used in the
- model for inference (feed forward). Note: use
- `tf.contrib.framework.model_variable` to add to this collection.
-* `TRAINABLE_VARIABLES`: the subset of `Variable` objects that will
- be trained by an optimizer. See
- [`tf.trainable_variables()`](../../api_docs/python/state_ops.md#trainable_variables)
- for more details.
-* `SUMMARIES`: the summary `Tensor` objects that have been created in the
- graph. See
- [`tf.summary.merge_all()`](../../api_docs/python/summary.md#merge_all)
- for more details.
-* `QUEUE_RUNNERS`: the `QueueRunner` objects that are used to
- produce input for a computation. See
- [`tf.start_queue_runners()`](../../api_docs/python/train.md#start_queue_runners)
- for more details.
-* `MOVING_AVERAGE_VARIABLES`: the subset of `Variable` objects that will also
- keep moving averages. See
- [`tf.moving_average_variables()`](../../api_docs/python/state_ops.md#moving_average_variables)
- for more details.
-* `REGULARIZATION_LOSSES`: regularization losses collected during graph
- construction.
-* `WEIGHTS`: weights inside neural network layers
-* `BIASES`: biases inside neural network layers
-* `ACTIVATIONS`: activations of neural network layers
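-
-A minimal sketch of looking up one of these collections:
-
-```python
-import tensorflow as tf
-
-w = tf.Variable(tf.zeros([10, 10]), name="w")  # trainable by default
-
-# The variable is retrievable through the standard collection key.
-trainable = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
-print(trainable)  # => [<tf.Variable 'w:0' shape=(10, 10) ...>]
-```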
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md
deleted file mode 100644
index cb674c3ea8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.IndexedSlices.md
+++ /dev/null
@@ -1,102 +0,0 @@
-A sparse representation of a set of tensor slices at given indices.
-
-This class is a simple wrapper for a pair of `Tensor` objects:
-
-* `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`.
-* `indices`: A 1-D integer `Tensor` with shape `[D0]`.
-
-An `IndexedSlices` is typically used to represent a subset of a larger
-tensor `dense` of shape `[LARGE0, D1, .. , DN]` where `LARGE0 >> D0`.
-The values in `indices` are the indices in the first dimension of
-the slices that have been extracted from the larger tensor.
-
-The dense tensor `dense` represented by an `IndexedSlices` `slices` has
-
-```python
-dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...]
-```
-
-The `IndexedSlices` class is used principally in the definition of
-gradients for operations that have sparse gradients
-(e.g. [`tf.gather`](../../api_docs/python/array_ops.md#gather)).
-
-Contrast this representation with
-[`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
-which uses multi-dimensional indices and scalar values.
-- - -
-
-#### `tf.IndexedSlices.__init__(values, indices, dense_shape=None)` {#IndexedSlices.__init__}
-
-Creates an `IndexedSlices`.
-
-
-- - -
-
-#### `tf.IndexedSlices.__neg__()` {#IndexedSlices.__neg__}
-
-
-
-
-- - -
-
-#### `tf.IndexedSlices.__str__()` {#IndexedSlices.__str__}
-
-
-
-
-- - -
-
-#### `tf.IndexedSlices.dense_shape` {#IndexedSlices.dense_shape}
-
-A 1-D `Tensor` containing the shape of the corresponding dense tensor.
-
-
-- - -
-
-#### `tf.IndexedSlices.device` {#IndexedSlices.device}
-
-The name of the device on which `values` will be produced, or `None`.
-
-
-- - -
-
-#### `tf.IndexedSlices.dtype` {#IndexedSlices.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.IndexedSlices.graph` {#IndexedSlices.graph}
-
-The `Graph` that contains the values, indices, and shape tensors.
-
-
-- - -
-
-#### `tf.IndexedSlices.indices` {#IndexedSlices.indices}
-
-A 1-D `Tensor` containing the indices of the slices.
-
-
-- - -
-
-#### `tf.IndexedSlices.name` {#IndexedSlices.name}
-
-The name of this `IndexedSlices`.
-
-
-- - -
-
-#### `tf.IndexedSlices.op` {#IndexedSlices.op}
-
-The `Operation` that produces `values` as an output.
-
-
-- - -
-
-#### `tf.IndexedSlices.values` {#IndexedSlices.values}
-
-A `Tensor` containing the values of the slices.
-
-
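-A minimal sketch showing where an `IndexedSlices` typically appears:
-
-```python
-import tensorflow as tf
-
-params = tf.Variable(tf.ones([5, 3]))
-loss = tf.reduce_sum(tf.gather(params, [0, 2]))
-grad = tf.gradients(loss, params)[0]
-
-# The gradient of a gather is sparse, so it arrives as IndexedSlices.
-print(isinstance(grad, tf.IndexedSlices))  # => True
-```
-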
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.RandomShuffleQueue.from_list.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.RandomShuffleQueue.from_list.md
deleted file mode 100644
index 546ee36157..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.RandomShuffleQueue.from_list.md
+++ /dev/null
@@ -1,21 +0,0 @@
-#### `tf.RandomShuffleQueue.from_list(index, queues)` {#RandomShuffleQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
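-A minimal sketch selecting between two queues at run time (the capacities
-are illustrative):
-
-```python
-import tensorflow as tf
-
-q0 = tf.RandomShuffleQueue(10, min_after_dequeue=0, dtypes=tf.float32)
-q1 = tf.RandomShuffleQueue(10, min_after_dequeue=0, dtypes=tf.float32)
-which = tf.placeholder(tf.int32, shape=[])
-
-# A queue whose underlying reference is chosen by `which` at run time.
-q = tf.RandomShuffleQueue.from_list(which, [q0, q1])
-dequeued = q.dequeue()
-```
-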
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md
deleted file mode 100644
index 92766465b2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Session.md
+++ /dev/null
@@ -1,416 +0,0 @@
-A class for running TensorFlow operations.
-
-A `Session` object encapsulates the environment in which `Operation`
-objects are executed, and `Tensor` objects are evaluated. For
-example:
-
-```python
-# Build a graph.
-a = tf.constant(5.0)
-b = tf.constant(6.0)
-c = a * b
-
-# Launch the graph in a session.
-sess = tf.Session()
-
-# Evaluate the tensor `c`.
-print(sess.run(c))
-```
-
-A session may own resources, such as
-[variables](../../api_docs/python/state_ops.md#Variable), [queues](../../api_docs/python/io_ops.md#QueueBase),
-and [readers](../../api_docs/python/io_ops.md#ReaderBase). It is important to release
-these resources when they are no longer required. To do this, either
-invoke the [`close()`](#Session.close) method on the session, or use
-the session as a context manager. The following two examples are
-equivalent:
-
-```python
-# Using the `close()` method.
-sess = tf.Session()
-sess.run(...)
-sess.close()
-
-# Using the context manager.
-with tf.Session() as sess:
- sess.run(...)
-```
-
-The [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
-protocol buffer exposes various configuration options for a
-session. For example, to create a session that uses soft constraints
-for device placement, and log the resulting placement decisions,
-create a session as follows:
-
-```python
-# Launch the graph in a session that allows soft device placement and
-# logs the placement decisions.
-sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
- log_device_placement=True))
-```
-- - -
-
-#### `tf.Session.__del__()` {#Session.__del__}
-
-
-
-
-- - -
-
-#### `tf.Session.__enter__()` {#Session.__enter__}
-
-
-
-
-- - -
-
-#### `tf.Session.__exit__(exec_type, exec_value, exec_tb)` {#Session.__exit__}
-
-
-
-
-- - -
-
-#### `tf.Session.__init__(target='', graph=None, config=None)` {#Session.__init__}
-
-Creates a new TensorFlow session.
-
-If no `graph` argument is specified when constructing the session,
-the default graph will be launched in the session. If you are
-using more than one graph (created with `tf.Graph()`) in the same
-process, you will have to use different sessions for each graph,
-but each graph can be used in multiple sessions. In this case, it
-is often clearer to pass the graph to be launched explicitly to
-the session constructor.
-
-##### Args:
-
-
-* <b>`target`</b>: (Optional.) The execution engine to connect to.
- Defaults to using an in-process engine. See
- [Distributed Tensorflow](https://www.tensorflow.org/how_tos/distributed/index.html)
- for more examples.
-* <b>`graph`</b>: (Optional.) The `Graph` to be launched (described above).
-* <b>`config`</b>: (Optional.) A [`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
- protocol buffer with configuration options for the session.
-
-
-- - -
-
-#### `tf.Session.as_default()` {#Session.as_default}
-
-Returns a context manager that makes this object the default session.
-
-Use with the `with` keyword to specify that calls to
-[`Operation.run()`](../../api_docs/python/framework.md#Operation.run) or
-[`Tensor.eval()`](../../api_docs/python/framework.md#Tensor.eval) should be
-executed in this session.
-
-```python
-c = tf.constant(..)
-sess = tf.Session()
-
-with sess.as_default():
- assert tf.get_default_session() is sess
- print(c.eval())
-```
-
-To get the current default session, use
-[`tf.get_default_session()`](#get_default_session).
-
-
-*N.B.* The `as_default` context manager *does not* close the
-session when you exit the context, and you must close the session
-explicitly.
-
-```python
-c = tf.constant(...)
-sess = tf.Session()
-with sess.as_default():
- print(c.eval())
-# ...
-with sess.as_default():
- print(c.eval())
-
-sess.close()
-```
-
-Alternatively, you can use `with tf.Session():` to create a
-session that is automatically closed on exiting the context,
-including when an uncaught exception is raised.
-
-*N.B.* The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default session in that
-thread, you must explicitly add a `with sess.as_default():` in that
-thread's function.
-
-##### Returns:
-
- A context manager using this session as the default session.
-
-
-- - -
-
-#### `tf.Session.close()` {#Session.close}
-
-Closes this session.
-
-Calling this method frees all resources associated with the session.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- closing the TensorFlow session.
-
-
-- - -
-
-#### `tf.Session.graph` {#Session.graph}
-
-The graph that was launched in this session.
-
-
-- - -
-
-#### `tf.Session.graph_def` {#Session.graph_def}
-
-A serializable version of the underlying TensorFlow graph.
-
-##### Returns:
-
- A graph_pb2.GraphDef proto containing nodes for all of the Operations in
- the underlying TensorFlow graph.
-
-
-- - -
-
-#### `tf.Session.partial_run(handle, fetches, feed_dict=None)` {#Session.partial_run}
-
-Continues the execution with more feeds and fetches.
-
-This is EXPERIMENTAL and subject to change.
-
-To use partial execution, a user first calls `partial_run_setup()` and
-then a sequence of `partial_run()`. `partial_run_setup` specifies the
-list of feeds and fetches that will be used in the subsequent
-`partial_run` calls.
-
-The optional `feed_dict` argument allows the caller to override
-the value of tensors in the graph. See run() for more information.
-
-Below is a simple example:
-
-```python
-a = array_ops.placeholder(dtypes.float32, shape=[])
-b = array_ops.placeholder(dtypes.float32, shape=[])
-c = array_ops.placeholder(dtypes.float32, shape=[])
-r1 = math_ops.add(a, b)
-r2 = math_ops.multiply(r1, c)
-
-h = sess.partial_run_setup([r1, r2], [a, b, c])
-res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
-res = sess.partial_run(h, r2, feed_dict={c: res})
-```
-
-##### Args:
-
-
-* <b>`handle`</b>: A handle for a sequence of partial runs.
-* <b>`fetches`</b>: A single graph element, a list of graph elements,
- or a dictionary whose values are graph elements or lists of graph
- elements (see documentation for `run`).
-* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
- (described above).
-
-##### Returns:
-
- Either a single value if `fetches` is a single graph element, or
- a list of values if `fetches` is a list, or a dictionary with the
- same keys as `fetches` if that is a dictionary
- (see documentation for `run`).
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses on error.
-
-
-- - -
-
-#### `tf.Session.partial_run_setup(fetches, feeds=None)` {#Session.partial_run_setup}
-
-Sets up a graph with feeds and fetches for partial run.
-
-This is EXPERIMENTAL and subject to change.
-
-Note that contrary to `run`, `feeds` only specifies the graph elements.
-The tensors will be supplied by the subsequent `partial_run` calls.
-
-##### Args:
-
-
-* <b>`fetches`</b>: A single graph element, or a list of graph elements.
-* <b>`feeds`</b>: A single graph element, or a list of graph elements.
-
-##### Returns:
-
- A handle for partial run.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
- closed).
-* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
- tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
-
-
-- - -
-
-#### `tf.Session.reset(target, containers=None, config=None)` {#Session.reset}
-
-Resets resource containers on `target`, and closes all connected sessions.
-
-A resource container is distributed across all workers in the
-same cluster as `target`. When a resource container on `target`
-is reset, resources associated with that container will be cleared.
-In particular, all Variables in the container will become undefined:
-they lose their values and shapes.
-
-NOTE:
-(i) reset() is currently only implemented for distributed sessions.
-(ii) Any sessions on the master named by `target` will be closed.
-
-If no resource containers are provided, all containers are reset.
-
-##### Args:
-
-
-* <b>`target`</b>: The execution engine to connect to.
-* <b>`containers`</b>: A list of resource container name strings, or `None` if
- all the containers are to be reset.
-* <b>`config`</b>: (Optional.) Protocol buffer with configuration options.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- resetting containers.
-
-
-- - -
-
-#### `tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#Session.run}
-
-Runs operations and evaluates tensors in `fetches`.
-
-This method runs one "step" of TensorFlow computation, by
-running the necessary graph fragment to execute every `Operation`
-and evaluate every `Tensor` in `fetches`, substituting the values in
-`feed_dict` for the corresponding input values.
-
-The `fetches` argument may be a single graph element, or an arbitrarily
-nested list, tuple, namedtuple, dict, or OrderedDict containing graph
-elements at its leaves. A graph element can be one of the following types:
-
-* An [`Operation`](../../api_docs/python/framework.md#Operation).
- The corresponding fetched value will be `None`.
-* A [`Tensor`](../../api_docs/python/framework.md#Tensor).
- The corresponding fetched value will be a numpy ndarray containing the
- value of that tensor.
-* A [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor).
- The corresponding fetched value will be a
- [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue)
- containing the value of that sparse tensor.
-* A `get_tensor_handle` op. The corresponding fetched value will be a
- numpy ndarray containing the handle of that tensor.
-* A `string` which is the name of a tensor or operation in the graph.
-
-The value returned by `run()` has the same shape as the `fetches` argument,
-where the leaves are replaced by the corresponding values returned by
-TensorFlow.
-
-Example:
-
-```python
- a = tf.constant([10, 20])
- b = tf.constant([1.0, 2.0])
- # 'fetches' can be a singleton
- v = session.run(a)
- # v is the numpy array [10, 20]
- # 'fetches' can be a list.
- v = session.run([a, b])
- # v is a Python list with 2 numpy arrays: the numpy array [10, 20] and the
- # 1-D array [1.0, 2.0]
- # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
- MyData = collections.namedtuple('MyData', ['a', 'b'])
- v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
- # v is a dict with
- # v['k1'] is a MyData namedtuple with 'a' the numpy array [10, 20] and
- # 'b' the numpy array [1.0, 2.0]
- # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
- # [10, 20].
-```
-
-The optional `feed_dict` argument allows the caller to override
-the value of tensors in the graph. Each key in `feed_dict` can be
-one of the following types:
-
-* If the key is a [`Tensor`](../../api_docs/python/framework.md#Tensor), the
- value may be a Python scalar, string, list, or numpy ndarray
- that can be converted to the same `dtype` as that
- tensor. Additionally, if the key is a
- [placeholder](../../api_docs/python/io_ops.md#placeholder), the shape of
- the value will be checked for compatibility with the placeholder.
-* If the key is a
- [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
- the value should be a
- [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue).
-* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value
- should be a nested tuple with the same structure that maps to their
- corresponding values as above.
-
-Each value in `feed_dict` must be convertible to a numpy array of the dtype
-of the corresponding key.
-
-The optional `options` argument expects a [`RunOptions`] proto. The options
-allow controlling the behavior of this particular step (e.g. turning tracing
-on).
-
-The optional `run_metadata` argument expects a [`RunMetadata`] proto. When
-appropriate, the non-Tensor output of this step will be collected there. For
-example, when users turn on tracing in `options`, the profiled info will be
-collected into this argument and passed back.
-
-##### Args:
-
-
-* <b>`fetches`</b>: A single graph element, a list of graph elements,
- or a dictionary whose values are graph elements or lists of graph
- elements (described above).
-* <b>`feed_dict`</b>: A dictionary that maps graph elements to values
- (described above).
-* <b>`options`</b>: A [`RunOptions`] protocol buffer
-* <b>`run_metadata`</b>: A [`RunMetadata`] protocol buffer
-
-##### Returns:
-
- Either a single value if `fetches` is a single graph element, or
- a list of values if `fetches` is a list, or a dictionary with the
- same keys as `fetches` if that is a dictionary (described above).
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If this `Session` is in an invalid state (e.g. has been
- closed).
-* <b>`TypeError`</b>: If `fetches` or `feed_dict` keys are of an inappropriate type.
-* <b>`ValueError`</b>: If `fetches` or `feed_dict` keys are invalid or refer to a
- `Tensor` that doesn't exist.
-
-
-- - -
-
-#### `tf.Session.sess_str` {#Session.sess_str}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.VarLenFeature.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.VarLenFeature.md
deleted file mode 100644
index 85f2546d3e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.VarLenFeature.md
+++ /dev/null
@@ -1,39 +0,0 @@
-Configuration for parsing a variable-length input feature.
-
-Fields:
- dtype: Data type of input.
-- - -
-
-#### `tf.VarLenFeature.__getnewargs__()` {#VarLenFeature.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.VarLenFeature.__getstate__()` {#VarLenFeature.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.VarLenFeature.__new__(_cls, dtype)` {#VarLenFeature.__new__}
-
-Create new instance of VarLenFeature(dtype)
-
-
-- - -
-
-#### `tf.VarLenFeature.__repr__()` {#VarLenFeature.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.VarLenFeature.dtype` {#VarLenFeature.dtype}
-
-Alias for field number 0
-
-
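-A minimal sketch (the feature key is illustrative):
-
-```python
-import tensorflow as tf
-
-# Serialized tf.train.Example protos, e.g. read from a TFRecord file.
-serialized = tf.placeholder(tf.string, shape=[None])
-
-features = {"tokens": tf.VarLenFeature(tf.string)}
-parsed = tf.parse_example(serialized, features)
-
-# Variable-length features come back as SparseTensors.
-tokens = parsed["tokens"]
-```
-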
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Variable.md
deleted file mode 100644
index 8c921f7c04..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.Variable.md
+++ /dev/null
@@ -1,1156 +0,0 @@
-See the [Variables How To](../../how_tos/variables/index.md) for a high
-level overview.
-
-A variable maintains state in the graph across calls to `run()`. You add a
-variable to the graph by constructing an instance of the class `Variable`.
-
-The `Variable()` constructor requires an initial value for the variable,
-which can be a `Tensor` of any type and shape. The initial value defines the
-type and shape of the variable. After construction, the type and shape of
-the variable are fixed. The value can be changed using one of the assign
-methods.
-
-If you want to change the shape of a variable later you have to use an
-`assign` Op with `validate_shape=False`.
-
-Just like any `Tensor`, variables created with `Variable()` can be used as
-inputs for other Ops in the graph. Additionally, all the operators
-overloaded for the `Tensor` class are carried over to variables, so you can
-also add nodes to the graph by just doing arithmetic on variables.
-
-```python
-import tensorflow as tf
-
-# Create a variable.
-w = tf.Variable(<initial-value>, name=<optional-name>)
-
-# Use the variable in the graph like any Tensor.
-y = tf.matmul(w, ...another variable or tensor...)
-
-# The overloaded operators are available too.
-z = tf.sigmoid(w + y)
-
-# Assign a new value to the variable with `assign()` or a related method.
-w.assign(w + 1.0)
-w.assign_add(1.0)
-```
-
-When you launch the graph, variables have to be explicitly initialized before
-you can run Ops that use their value. You can initialize a variable by
-running its *initializer op*, restoring the variable from a save file, or
-simply running an `assign` Op that assigns a value to the variable. In fact,
-the variable *initializer op* is just an `assign` Op that assigns the
-variable's initial value to the variable itself.
-
-```python
-# Launch the graph in a session.
-with tf.Session() as sess:
- # Run the variable initializer.
- sess.run(w.initializer)
- # ...you now can run ops that use the value of 'w'...
-```
-
-The most common initialization pattern is to use the convenience function
-`global_variables_initializer()` to add an Op to the graph that initializes
-all the variables. You then run that Op after launching the graph.
-
-```python
-# Add an Op to initialize global variables.
-init_op = tf.global_variables_initializer()
-
-# Launch the graph in a session.
-with tf.Session() as sess:
- # Run the Op that initializes global variables.
- sess.run(init_op)
- # ...you can now run any Op that uses variable values...
-```
-
-If you need to create a variable with an initial value dependent on another
-variable, use the other variable's `initialized_value()`. This ensures that
-variables are initialized in the right order.
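-
-For example:
-
-```python
-# `v` must be initialized before `w`; `initialized_value()` guarantees it.
-v = tf.Variable(tf.truncated_normal([10, 40]))
-w = tf.Variable(v.initialized_value() * 2.0)
-```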
-
-All variables are automatically collected in the graph where they are
-created. By default, the constructor adds the new variable to the graph
-collection `GraphKeys.GLOBAL_VARIABLES`. The convenience function
-`global_variables()` returns the contents of that collection.
-
-When building a machine learning model it is often convenient to distinguish
-between variables holding the trainable model parameters and other variables
-such as a `global step` variable used to count training steps. To make this
-easier, the variable constructor supports a `trainable=<bool>` parameter. If
-`True`, the new variable is also added to the graph collection
-`GraphKeys.TRAINABLE_VARIABLES`. The convenience function
-`trainable_variables()` returns the contents of this collection. The
-various `Optimizer` classes use this collection as the default list of
-variables to optimize.
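-
-For example, a `global step` counter is typically created with
-`trainable=False` so that optimizers leave it alone:
-
-```python
-# Not trainable: excluded from GraphKeys.TRAINABLE_VARIABLES.
-global_step = tf.Variable(0, trainable=False, name='global_step')
-
-# Returns only the trainable variables collected so far.
-params = tf.trainable_variables()
-```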
-
-
-Creating a variable.
-
-- - -
-
-#### `tf.Variable.__init__(initial_value=None, trainable=True, collections=None, validate_shape=True, caching_device=None, name=None, variable_def=None, dtype=None, expected_shape=None, import_scope=None)` {#Variable.__init__}
-
-Creates a new variable with value `initial_value`.
-
-The new variable is added to the graph collections listed in `collections`,
-which defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
-
-If `trainable` is `True` the variable is also added to the graph collection
-`GraphKeys.TRAINABLE_VARIABLES`.
-
-This constructor creates both a `variable` Op and an `assign` Op to set the
-variable to its initial value.
-
-##### Args:
-
-
-* <b>`initial_value`</b>: A `Tensor`, or Python object convertible to a `Tensor`,
- which is the initial value for the Variable. The initial value must have
- a shape specified unless `validate_shape` is set to False. Can also be a
- callable with no argument that returns the initial value when called. In
- that case, `dtype` must be specified. (Note that initializer functions
- from init_ops.py must first be bound to a shape before being used here.)
-* <b>`trainable`</b>: If `True`, the default, also adds the variable to the graph
- collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
- the default list of variables to use by the `Optimizer` classes.
-* <b>`collections`</b>: List of graph collections keys. The new variable is added to
- these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
-* <b>`validate_shape`</b>: If `False`, allows the variable to be initialized with a
- value of unknown shape. If `True`, the default, the shape of
- `initial_value` must be known.
-* <b>`caching_device`</b>: Optional device string describing where the Variable
- should be cached for reading. Defaults to the Variable's device.
- If not `None`, caches on another device. Typical use is to cache
- on the device where the Ops using the Variable reside, to deduplicate
- copying through `Switch` and other conditional statements.
-* <b>`name`</b>: Optional name for the variable. Defaults to `'Variable'` and gets
- uniquified automatically.
-* <b>`variable_def`</b>: `VariableDef` protocol buffer. If not `None`, recreates
- the Variable object with its contents. `variable_def` and the other
- arguments are mutually exclusive.
-* <b>`dtype`</b>: If set, initial_value will be converted to the given type.
- If `None`, either the datatype will be kept (if `initial_value` is
- a Tensor), or `convert_to_tensor` will decide.
-* <b>`expected_shape`</b>: A TensorShape. If set, initial_value is expected
- to have this shape.
-* <b>`import_scope`</b>: Optional `string`. Name scope to add to the
- `Variable`. Only used when initializing from protocol buffer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `variable_def` and `initial_value` are specified.
-* <b>`ValueError`</b>: If the initial value is not specified, or does not have a
- shape and `validate_shape` is `True`.
-
-
-- - -
-
-#### `tf.Variable.initialized_value()` {#Variable.initialized_value}
-
-Returns the value of the initialized variable.
-
-You should use this instead of the variable itself to initialize another
-variable with a value that depends on the value of this variable.
-
-Beware of using `initialized_value` except during initialization: it causes
-the Variable's initializer op to be run, so running this op resets the
-variable to its initial value.
-
-```python
-# Initialize 'v' with a random tensor.
-v = tf.Variable(tf.truncated_normal([10, 40]))
-# Use `initialized_value` to guarantee that `v` has been
-# initialized before its value is used to initialize `w`.
-# The random values are picked only once.
-w = tf.Variable(v.initialized_value() * 2.0)
-```
-
-##### Returns:
-
- A `Tensor` holding the value of this variable after its initializer
- has run.
-
-
-
-Changing a variable value.
-
-- - -
-
-#### `tf.Variable.assign(value, use_locking=False)` {#Variable.assign}
-
-Assigns a new value to the variable.
-
-This is essentially a shortcut for `assign(self, value)`.
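-
-For example, the returned `Tensor` can be run to apply the assignment:
-
-```python
-v = tf.Variable(1.0)
-assign_op = v.assign(2.0)
-
-with tf.Session() as sess:
-  sess.run(v.initializer)
-  print(sess.run(assign_op))  # => 2.0
-```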
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`. The new value for this variable.
-* <b>`use_locking`</b>: If `True`, use locking during the assignment.
-
-##### Returns:
-
- A `Tensor` that will hold the new value of this variable after
- the assignment has completed.
-
-
-- - -
-
-#### `tf.Variable.assign_add(delta, use_locking=False)` {#Variable.assign_add}
-
-Adds a value to this variable.
-
-This is essentially a shortcut for `assign_add(self, delta)`.
-
-##### Args:
-
-
-* <b>`delta`</b>: A `Tensor`. The value to add to this variable.
-* <b>`use_locking`</b>: If `True`, use locking during the operation.
-
-##### Returns:
-
- A `Tensor` that will hold the new value of this variable after
- the addition has completed.
-
-
-- - -
-
-#### `tf.Variable.assign_sub(delta, use_locking=False)` {#Variable.assign_sub}
-
-Subtracts a value from this variable.
-
-This is essentially a shortcut for `assign_sub(self, delta)`.
-
-##### Args:
-
-
-* <b>`delta`</b>: A `Tensor`. The value to subtract from this variable.
-* <b>`use_locking`</b>: If `True`, use locking during the operation.
-
-##### Returns:
-
- A `Tensor` that will hold the new value of this variable after
- the subtraction has completed.
-
-
-- - -
-
-#### `tf.Variable.scatter_sub(sparse_delta, use_locking=False)` {#Variable.scatter_sub}
-
-Subtracts `IndexedSlices` from this variable.
-
-This is essentially a shortcut for `scatter_sub(self, sparse_delta.indices,
-sparse_delta.values)`.
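-
-For example, a minimal sketch subtracting `1.0` from the first element:
-
-```python
-v = tf.Variable([1.0, 2.0, 3.0])
-delta = tf.IndexedSlices(values=tf.constant([1.0]),
-                         indices=tf.constant([0]))
-sub_op = v.scatter_sub(delta)
-
-with tf.Session() as sess:
-  sess.run(v.initializer)
-  print(sess.run(sub_op))  # => [0., 2., 3.]
-```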
-
-##### Args:
-
-
-* <b>`sparse_delta`</b>: `IndexedSlices` to be subtracted from this variable.
-* <b>`use_locking`</b>: If `True`, use locking during the operation.
-
-##### Returns:
-
- A `Tensor` that will hold the new value of this variable after
- the scattered subtraction has completed.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sparse_delta` is not an `IndexedSlices`.
-
-
-- - -
-
-#### `tf.Variable.count_up_to(limit)` {#Variable.count_up_to}
-
-Increments this variable until it reaches `limit`.
-
-When that Op is run it tries to increment the variable by `1`. If
-incrementing the variable would bring it above `limit` then the Op raises
-the exception `OutOfRangeError`.
-
-If no error is raised, the Op outputs the value of the variable before
-the increment.
-
-This is essentially a shortcut for `count_up_to(self, limit)`.
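-
-For example (`count_up_to` requires an integer variable):
-
-```python
-v = tf.Variable(0, dtype=tf.int32)
-count = v.count_up_to(3)
-
-with tf.Session() as sess:
-  sess.run(v.initializer)
-  print(sess.run(count))  # => 0
-  print(sess.run(count))  # => 1
-  print(sess.run(count))  # => 2
-  sess.run(count)         # raises OutOfRangeError
-```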
-
-##### Args:
-
-
-* <b>`limit`</b>: value at which incrementing the variable raises an error.
-
-##### Returns:
-
- A `Tensor` that will hold the variable value before the increment. If no
- other Op modifies this variable, the values produced will all be
- distinct.
-
-
-
-- - -
-
-#### `tf.Variable.eval(session=None)` {#Variable.eval}
-
-In a session, computes and returns the value of this variable.
-
-This is not a graph construction method; it does not add ops to the graph.
-
-This convenience method requires a session where the graph containing this
-variable has been launched. If no session is passed, the default session is
-used. See the [Session class](../../api_docs/python/client.md#Session) for
-more information on launching a graph and on sessions.
-
-```python
-v = tf.Variable([1, 2])
-init = tf.global_variables_initializer()
-
-with tf.Session() as sess:
- sess.run(init)
- # Usage passing the session explicitly.
- print(v.eval(sess))
- # Usage with the default session. The 'with' block
- # above makes 'sess' the default session.
- print(v.eval())
-```
-
-##### Args:
-
-
-* <b>`session`</b>: The session to use to evaluate this variable. If
- none, the default session is used.
-
-##### Returns:
-
- A numpy `ndarray` with a copy of the value of this variable.
-
-
-
-Properties.
-
-- - -
-
-#### `tf.Variable.name` {#Variable.name}
-
-The name of this variable.
-
-
-- - -
-
-#### `tf.Variable.dtype` {#Variable.dtype}
-
-The `DType` of this variable.
-
-
-- - -
-
-#### `tf.Variable.get_shape()` {#Variable.get_shape}
-
-The `TensorShape` of this variable.
-
-##### Returns:
-
- A `TensorShape`.
-
-
-- - -
-
-#### `tf.Variable.device` {#Variable.device}
-
-The device of this variable.
-
-
-- - -
-
-#### `tf.Variable.initializer` {#Variable.initializer}
-
-The initializer operation for this variable.
-
-
-- - -
-
-#### `tf.Variable.graph` {#Variable.graph}
-
-The `Graph` of this variable.
-
-
-- - -
-
-#### `tf.Variable.op` {#Variable.op}
-
-The `Operation` of this variable.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.Variable.__abs__(a, *args)` {#Variable.__abs__}
-
-Computes the absolute value of a tensor.
-
-Given a tensor of real numbers `x`, this operation returns a tensor
-containing the absolute value of each element in `x`. For example, if x is
-an input element and y is an output element, this operation computes
-\\(y = |x|\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor` of type `float32`, `float64`, `int32`, or
- `int64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` the same size and type as `x` with absolute
- values.
-
-
-- - -
-
-#### `tf.Variable.__add__(a, *args)` {#Variable.__add__}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__and__(a, *args)` {#Variable.__and__}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__div__(a, *args)` {#Variable.__div__}
-
-Divide two values using Python 2 semantics. Used for Tensor.__div__.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-#### `tf.Variable.__floordiv__(a, *args)` {#Variable.__floordiv__}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
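-
-For example, the difference from Python semantics only appears for negative
-integer operands (an illustrative sketch, assuming the documented C
-truncation behavior for integers):
-
-```python
-a = tf.constant(-7)
-b = tf.constant(2)
-q = a // b  # floor division on tensors dispatches to this op
-# Python's -7 // 2 is -4; under C truncation semantics for integers
-# the result here would instead be -3.
-```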
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
-
-- - -
-
-#### `tf.Variable.__ge__(a, *args)` {#Variable.__ge__}
-
-Returns the truth value of (x >= y) element-wise.
-
-*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__getitem__(var, slice_spec)` {#Variable.__getitem__}
-
-Creates a slice helper object given a variable.
-
-This allows creating a sub-tensor from part of the current contents
-of a variable.
-See
-[`Tensor.__getitem__`](../../api_docs/python/framework.md#Tensor.__getitem__)
-for detailed examples of slicing.
-
-In addition, this function allows assignment to a sliced range, similar to
-`__setitem__` in Python. However, the syntax differs so that the user can
-capture the assignment operation for grouping or for passing to
-`sess.run()`.
-For example,
-
-```python
-import tensorflow as tf
-A = tf.Variable([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=tf.float32)
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(A[:2, :2]))  # => [[1, 2], [4, 5]]
-
-  op = A[:2, :2].assign(22. * tf.ones((2, 2)))
-  print(sess.run(op))  # => [[22, 22, 3], [22, 22, 6], [7, 8, 9]]
-```
-
-Note that assignments currently do not support NumPy broadcasting
-semantics.
-
-##### Args:
-
-
-* <b>`var`</b>: An `ops.Variable` object.
-* <b>`slice_spec`</b>: The arguments to `Tensor.__getitem__`.
-
-##### Returns:
-
- The appropriate slice of the variable's tensor, based on `slice_spec`,
- as an operator. The operator also has an `assign()` method
- that can be used to generate an assignment operator.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If a slice range has negative size.
-* <b>`TypeError`</b>: If the slice indices aren't int, slice, or Ellipsis.
-
-
-- - -
-
-#### `tf.Variable.__gt__(a, *args)` {#Variable.__gt__}
-
-Returns the truth value of (x > y) element-wise.
-
-*NOTE*: `Greater` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__invert__(a, *args)` {#Variable.__invert__}
-
-Returns the truth value of NOT x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__iter__()` {#Variable.__iter__}
-
-Dummy method to prevent iteration. Do not call.
-
-NOTE(mrry): If we register __getitem__ as an overloaded operator,
-Python will valiantly attempt to iterate over the variable's Tensor from 0
-to infinity. Declaring this method prevents this unintended behavior.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: when invoked.
-
-
-- - -
-
-#### `tf.Variable.__le__(a, *args)` {#Variable.__le__}
-
-Returns the truth value of (x <= y) element-wise.
-
-*NOTE*: `LessEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__lt__(a, *args)` {#Variable.__lt__}
-
-Returns the truth value of (x < y) element-wise.
-
-*NOTE*: `Less` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__mod__(a, *args)` {#Variable.__mod__}
-
-Returns the element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
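-
-For example, following the flooring identity above:
-
-```python
-x = tf.constant(-7)
-y = tf.constant(2)
-r = x % y  # dispatches to FloorMod
-# floor(-7 / 2) = -4, and -4 * 2 + 1 = -7, so `r` evaluates to 1.
-```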
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__mul__(a, *args)` {#Variable.__mul__}
-
-Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
-
-
-- - -
-
-#### `tf.Variable.__neg__(a, *args)` {#Variable.__neg__}
-
-Computes numerical negative value element-wise.
-
-I.e., \\(y = -x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__or__(a, *args)` {#Variable.__or__}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__pow__(a, *args)` {#Variable.__pow__}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Variable.__radd__(a, *args)` {#Variable.__radd__}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__rand__(a, *args)` {#Variable.__rand__}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__rdiv__(a, *args)` {#Variable.__rdiv__}
-
-Divide two values using Python 2 semantics. Used for Tensor.__div__.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-#### `tf.Variable.__rfloordiv__(a, *args)` {#Variable.__rfloordiv__}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
-
-- - -
-
-#### `tf.Variable.__rmod__(a, *args)` {#Variable.__rmod__}
-
-Returns the element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__rmul__(a, *args)` {#Variable.__rmul__}
-
-Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
-
-
-- - -
-
-#### `tf.Variable.__ror__(a, *args)` {#Variable.__ror__}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__rpow__(a, *args)` {#Variable.__rpow__}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Variable.__rsub__(a, *args)` {#Variable.__rsub__}
-
-Returns x - y element-wise.
-
-*NOTE*: `Sub` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__rtruediv__(a, *args)` {#Variable.__rtruediv__}
-
-Divides `x / y` elementwise (using Python 3 division operator semantics).
-
-
-- - -
-
-#### `tf.Variable.__rxor__(a, *args)` {#Variable.__rxor__}
-
-x ^ y = (x | y) & ~(x & y).
-
-
-- - -
-
-#### `tf.Variable.__str__()` {#Variable.__str__}
-
-
-
-
-- - -
-
-#### `tf.Variable.__sub__(a, *args)` {#Variable.__sub__}
-
-Returns x - y element-wise.
-
-*NOTE*: `Sub` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__truediv__(a, *args)` {#Variable.__truediv__}
-
-Divides `x / y` elementwise (using Python 3 division operator semantics).
-
-
-- - -
-
-#### `tf.Variable.__xor__(a, *args)` {#Variable.__xor__}
-
-x ^ y = (x | y) & ~(x & y).
-
-
-- - -
-
-#### `tf.Variable.from_proto(variable_def, import_scope=None)` {#Variable.from_proto}
-
-Returns a `Variable` object created from `variable_def`.
-
-
-- - -
-
-#### `tf.Variable.initial_value` {#Variable.initial_value}
-
-Returns the Tensor used as the initial value for the variable.
-
-Note that this is different from `initialized_value()` which runs
-the op that initializes the variable before returning its value.
-This method returns the tensor that is used by the op that initializes
-the variable.
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Variable.load(value, session=None)` {#Variable.load}
-
-Loads a new value into this variable.
-
-Writes the new value to the variable's memory. Doesn't add ops to the graph.
-
-This convenience method requires a session where the graph containing this
-variable has been launched. If no session is passed, the default session is
-used. See the [Session class](../../api_docs/python/client.md#Session) for
-more information on launching a graph and on sessions.
-
-```python
-v = tf.Variable([1, 2])
-init = tf.global_variables_initializer()
-
-with tf.Session() as sess:
- sess.run(init)
- # Usage passing the session explicitly.
- v.load([2, 3], sess)
- print(v.eval(sess)) # prints [2 3]
- # Usage with the default session. The 'with' block
- # above makes 'sess' the default session.
- v.load([3, 4])
- print(v.eval()) # prints [3 4]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: The new value for the variable.
-* <b>`session`</b>: The session to use to evaluate this variable. If
- none, the default session is used.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If no session is passed and no default session is
- available.
-
-
-- - -
-
-#### `tf.Variable.read_value()` {#Variable.read_value}
-
-Returns the value of this variable, read in the current context.
-
-Can be different from value() if it's on another device, with control
-dependencies, etc.
-
-##### Returns:
-
- A `Tensor` containing the value of the variable.
-
-
-- - -
-
-#### `tf.Variable.set_shape(shape)` {#Variable.set_shape}
-
-Overrides the shape for this variable.
-
-##### Args:
-
-
-* <b>`shape`</b>: the `TensorShape` representing the overridden shape.
-
-
-- - -
-
-#### `tf.Variable.to_proto(export_scope=None)` {#Variable.to_proto}
-
-Converts a `Variable` to a `VariableDef` protocol buffer.
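-
-For example, a round trip through the proto representation (this assumes
-the ops named in the proto still exist in the current graph):
-
-```python
-v = tf.Variable(1.0, name='v')
-vdef = v.to_proto()
-# Recreate a Variable object backed by the same graph ops.
-v2 = tf.Variable(variable_def=vdef)
-```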
-
-##### Args:
-
-
-* <b>`export_scope`</b>: Optional `string`. Name scope to remove.
-
-##### Returns:
-
- A `VariableDef` protocol buffer, or `None` if the `Variable` is not
- in the specified name scope.
-
-
-- - -
-
-#### `tf.Variable.value()` {#Variable.value}
-
-Returns the last snapshot of this variable.
-
-You usually do not need to call this method as all ops that need the value
-of the variable call it automatically through a `convert_to_tensor()` call.
-
-Returns a `Tensor` which holds the value of the variable. You can not
-assign a new value to this tensor as it is not a reference to the variable.
-
-To avoid copies, if the consumer of the returned value is on the same device
-as the variable, this actually returns the live value of the variable, not
-a copy. Updates to the variable are seen by the consumer. If the consumer
-is on a different device it will get a copy of the variable.
-
-##### Returns:
-
- A `Tensor` containing the value of the variable.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.acos.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.acos.md
deleted file mode 100644
index 15ecc97044..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.acos.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.acos(x, name=None)` {#acos}
-
-Computes acos of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmax.md
deleted file mode 100644
index 44a278e0d4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.argmax.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.argmax(input, axis=None, name=None, dimension=None)` {#argmax}
-
-Returns the index with the largest value across axes of a tensor.
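-
-For example:
-
-```python
-x = tf.constant([[1, 10, 3],
-                 [4, 5, 6]])
-tf.argmax(x, axis=1)  # => [1, 2]
-tf.argmax(x, axis=0)  # => [1, 0, 1]
-```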
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`axis`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- int32, 0 <= axis < rank(input). Describes which axis
- of the input Tensor to reduce across. For vectors, use axis = 0.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`dimension`</b>: Deprecated alias for `axis`; use `axis` instead.
-
-##### Returns:
-
- A `Tensor` of type `int64`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_negative.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_negative.md
deleted file mode 100644
index 6f93226afe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_negative.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.assert_negative(x, data=None, summarize=None, message=None, name=None)` {#assert_negative}
-
-Assert the condition `x < 0` holds element-wise.
-
-Example of adding a dependency to an operation:
-
-```python
-with tf.control_dependencies([tf.assert_negative(x)]):
- output = tf.reduce_sum(x)
-```
-
-Negative means, for every element `x[i]` of `x`, we have `x[i] < 0`.
-If `x` is empty this is trivially satisfied.
-
-##### Args:
-
-
-* <b>`x`</b>: Numeric `Tensor`.
-* <b>`data`</b>: The tensors to print out if the condition is False. Defaults to
- error message and first few entries of `x`.
-* <b>`summarize`</b>: Print this many entries of each tensor.
-* <b>`message`</b>: A string to prefix to the default message.
-* <b>`name`</b>: A name for this operation (optional). Defaults to "assert_negative".
-
-##### Returns:
-
- Op raising `InvalidArgumentError` unless `x` is all negative.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_proper_iterable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_proper_iterable.md
deleted file mode 100644
index ba01073765..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assert_proper_iterable.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.assert_proper_iterable(values)` {#assert_proper_iterable}
-
-Statically asserts that `values` is a "proper" iterable.
-
-`Ops` that expect iterables of `Tensor` can call this to validate input.
-This is useful since `Tensor`, `ndarray`, and byte/text types are all
-iterables themselves.
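-
-For example:
-
-```python
-tf.assert_proper_iterable([tf.constant(1), tf.constant(2)])  # passes
-tf.assert_proper_iterable(tf.constant([1, 2]))  # raises TypeError
-```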
-
-##### Args:
-
-
-* <b>`values`</b>: Object to be checked.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `values` is not iterable or is one of
- `Tensor`, `SparseTensor`, `np.array`, `tf.compat.bytes_or_text_types`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assign_sub.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assign_sub.md
deleted file mode 100644
index 73232dddc1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.assign_sub.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.assign_sub(ref, value, use_locking=None, name=None)` {#assign_sub}
-
-Update 'ref' by subtracting 'value' from it.
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
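-
-For example:
-
-```python
-ref = tf.Variable(10.0)
-update = tf.assign_sub(ref, 3.0)
-
-with tf.Session() as sess:
-  sess.run(ref.initializer)
-  print(sess.run(update))  # => 7.0
-```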
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types:
- `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`,
- `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`value`</b>: A `Tensor`. Must have the same type as `ref`.
- The value to be subtracted from the variable.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the subtraction will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as "ref". Returned as a convenience for operations that want
- to use the new value after the variable has been updated.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.atan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.atan.md
deleted file mode 100644
index 63fe76f460..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.atan.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.atan(x, name=None)` {#atan}
-
-Computes atan of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_to_space_nd.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_to_space_nd.md
deleted file mode 100644
index 1f84141a13..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.batch_to_space_nd.md
+++ /dev/null
@@ -1,136 +0,0 @@
-### `tf.batch_to_space_nd(input, block_shape, crops, name=None)` {#batch_to_space_nd}
-
-BatchToSpace for N-D tensors of type T.
-
-This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape
-`block_shape + [batch]`, interleaves these blocks back into the grid defined by
-the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as
-the input. The spatial dimensions of this intermediate result are then
-optionally cropped according to `crops` to produce the output. This is the
-reverse of SpaceToBatch. See below for a precise description.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
- N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,
- where spatial_shape has M dimensions.
-* <b>`block_shape`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D with shape `[M]`, all values must be >= 1.
-* <b>`crops`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 2-D with shape `[M, 2]`, all values must be >= 0.
- `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input
- dimension `i + 1`, which corresponds to spatial dimension `i`. It is
- required that
- `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.
-
- This operation is equivalent to the following steps:
-
- 1. Reshape `input` to `reshaped` of shape:
- [block_shape[0], ..., block_shape[M-1],
- batch / prod(block_shape),
- input_shape[1], ..., input_shape[N-1]]
-
- 2. Permute dimensions of `reshaped` to produce `permuted` of shape
- [batch / prod(block_shape),
-
- input_shape[1], block_shape[0],
- ...,
- input_shape[M], block_shape[M-1],
-
- input_shape[M+1], ..., input_shape[N-1]]
-
- 3. Reshape `permuted` to produce `reshaped_permuted` of shape
- [batch / prod(block_shape),
-
- input_shape[1] * block_shape[0],
- ...,
- input_shape[M] * block_shape[M-1],
-
- input_shape[M+1],
- ...,
- input_shape[N-1]]
-
- 4. Crop the start and end of dimensions `[1, ..., M]` of
- `reshaped_permuted` according to `crops` to produce the output of shape:
- [batch / prod(block_shape),
-
- input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],
- ...,
- input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],
-
- input_shape[M+1], ..., input_shape[N-1]]
-
- Some examples:
-
- (1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and
- `crops = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
- ```
-
- The output tensor has shape `[1, 2, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [2]], [[3], [4]]]]
- ```
-
- (2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and
- `crops = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
- ```
-
- The output tensor has shape `[1, 2, 2, 3]` and value:
-
- ```prettyprint
- x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
- ```
-
- (3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and
- `crops = [[0, 0], [0, 0]]`:
-
- ```prettyprint
- x = [[[[1], [3]], [[9], [11]]],
- [[[2], [4]], [[10], [12]]],
- [[[5], [7]], [[13], [15]]],
- [[[6], [8]], [[14], [16]]]]
- ```
-
- The output tensor has shape `[1, 4, 4, 1]` and value:
-
- ```prettyprint
- x = [[[1], [2], [3], [4]],
- [[5], [6], [7], [8]],
- [[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]
- ```
-
- (4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and
- `crops = [[0, 0], [2, 0]]`:
-
- ```prettyprint
- x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
- [[[0], [2], [4]]], [[[0], [10], [12]]],
- [[[0], [5], [7]]], [[[0], [13], [15]]],
- [[[0], [6], [8]]], [[[0], [14], [16]]]]
- ```
-
- The output tensor has shape `[2, 2, 4, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]]],
- [[[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.bayesflow.stochastic_tensor.MeanValue.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.bayesflow.stochastic_tensor.MeanValue.md
deleted file mode 100644
index 032b60f98b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.bayesflow.stochastic_tensor.MeanValue.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.__init__(stop_gradient=False)` {#MeanValue.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.declare_inputs(unused_stochastic_tensor, unused_inputs_dict)` {#MeanValue.declare_inputs}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.popped_above(unused_value_type)` {#MeanValue.popped_above}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.pushed_above(unused_value_type)` {#MeanValue.pushed_above}
-
-
-
-
-- - -
-
-#### `tf.contrib.bayesflow.stochastic_tensor.MeanValue.stop_gradient` {#MeanValue.stop_gradient}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.bayesflow.variational_inference.elbo_with_log_joint.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.bayesflow.variational_inference.elbo_with_log_joint.md
deleted file mode 100644
index 592be500f2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.bayesflow.variational_inference.elbo_with_log_joint.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.contrib.bayesflow.variational_inference.elbo_with_log_joint(log_joint, variational=None, keep_batch_dim=True, form=None, name='ELBO')` {#elbo_with_log_joint}
-
-Evidence Lower BOund. `log p(x) >= ELBO`.
-
-This method is for models that have computed `p(x,Z)` instead of `p(x|Z)`.
-See `elbo` for further details.
-
-Because only the joint is specified, analytic KL is not available.
-
-##### Args:
-
-
-* <b>`log_joint`</b>: `Tensor` log p(x, Z).
-* <b>`variational`</b>: list of `StochasticTensor` q(Z). If `None`, defaults to all
- `StochasticTensor` objects upstream of `log_joint`.
-* <b>`keep_batch_dim`</b>: bool. Whether to keep the batch dimension when summing
- entropy term. When the sample is per data point, this should be True;
- otherwise (e.g. in a Bayesian NN), this should be False.
-* <b>`form`</b>: ELBOForms constant. Controls how the ELBO is computed. Defaults to
- ELBOForms.default.
-* <b>`name`</b>: name to prefix ops with.
-
-##### Returns:
-
- `Tensor` ELBO of the same type and shape as `log_joint`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if variationals in `variational` are not `StochasticTensor`s.
-* <b>`TypeError`</b>: if form is not a valid ELBOForms constant.
-* <b>`ValueError`</b>: if `variational` is None and there are no `StochasticTensor`s
- upstream of `log_joint`.
-* <b>`ValueError`</b>: if form is ELBOForms.analytic_kl.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Mixture.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Mixture.md
deleted file mode 100644
index 9799e8b23e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.Mixture.md
+++ /dev/null
@@ -1,659 +0,0 @@
-Mixture distribution.
-
-The `Mixture` object implements batched mixture distributions.
-The mixture model is defined by a `Categorical` distribution (the mixture)
-and a python list of `Distribution` objects.
-
-Methods supported include `log_prob`, `prob`, `mean`, `sample`, and
-`entropy_lower_bound`.
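-
-For example, a minimal sketch of a two-component Gaussian mixture (the
-`Categorical` and `Normal` constructor arguments shown here are assumed):
-
-```python
-ds = tf.contrib.distributions
-
-mix = ds.Mixture(
-    cat=ds.Categorical(probs=[0.3, 0.7]),
-    components=[ds.Normal(loc=-1.0, scale=0.1),
-                ds.Normal(loc=1.0, scale=0.5)])
-
-samples = mix.sample(5)
-log_probs = mix.log_prob(samples)
-```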
-- - -
-
-#### `tf.contrib.distributions.Mixture.__init__(cat, components, validate_args=False, allow_nan_stats=True, name='Mixture')` {#Mixture.__init__}
-
-Initialize a Mixture distribution.
-
-A `Mixture` is defined by a `Categorical` (`cat`, representing the
-mixture probabilities) and a list of `Distribution` objects
-all having matching dtype, batch shape, event shape, and continuity
-properties (the components).
-
-It must be possible to infer the `num_classes` of `cat` at graph
-construction time, and it must match `len(components)`.
-
-##### Args:
-
-
-* <b>`cat`</b>: A `Categorical` distribution instance, representing the probabilities
- of `distributions`.
-* <b>`components`</b>: A list or tuple of `Distribution` instances.
- Each instance must have the same type, be defined on the same domain,
- and have matching `event_shape` and `batch_shape`.
-* <b>`validate_args`</b>: Python `bool`, default `False`. If `True`, raise a runtime
- error if batch or event ranks are inconsistent between cat and any of
- the distributions. This is only checked if the ranks cannot be
- determined statically at graph construction time.
-* <b>`allow_nan_stats`</b>: Boolean, default `True`. If `False`, raise an
- exception if a statistic (e.g. mean/mode/etc...) is undefined for any
- batch member. If `True`, batch members with valid parameters leading to
- undefined statistics will return NaN for this statistic.
-* <b>`name`</b>: A name for this distribution (optional).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If cat is not a `Categorical`, or `components` is not
- a list or tuple, or the elements of `components` are not
- instances of `Distribution`, or do not have matching `dtype`.
-* <b>`ValueError`</b>: If `components` is an empty list or tuple, or its
- elements do not have a statically known event rank.
- If `cat.num_classes` cannot be inferred at graph creation time,
- or the constant value of `cat.num_classes` is not equal to
- `len(components)`, or all `components` and `cat` do not have
- matching static batch shapes, or all components do not
- have matching static event shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.allow_nan_stats` {#Mixture.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.batch_shape` {#Mixture.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.batch_shape_tensor(name='batch_shape_tensor')` {#Mixture.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.cat` {#Mixture.cat}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.cdf(value, name='cdf')` {#Mixture.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.components` {#Mixture.components}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.copy(**override_parameters_kwargs)` {#Mixture.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.covariance(name='covariance')` {#Mixture.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.dtype` {#Mixture.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.entropy(name='entropy')` {#Mixture.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.entropy_lower_bound(name='entropy_lower_bound')` {#Mixture.entropy_lower_bound}
-
-A lower bound on the entropy of this mixture model.
-
-The bound below is not always very tight, and its usefulness depends
-on the mixture probabilities and the components in use.
-
-A lower bound is useful for ELBO when the `Mixture` is the variational
-distribution:
-
-\\(
-\log p(x) >= ELBO = \int q(z) \log p(x, z) dz + H[q]
-\\)
-
-where \\( p \\) is the prior distribution, \\( q \\) is the variational,
-and \\( H[q] \\) is the entropy of \\( q \\). If there is a lower bound
-\\( G[q] \\) such that \\( H[q] \geq G[q] \\) then it can be used in
-place of \\( H[q] \\).
-
-For a mixture of distributions \\( q(Z) = \sum_i c_i q_i(Z) \\) with
-\\( \sum_i c_i = 1 \\), by the concavity of \\( f(x) = -x \log x \\), a
-simple lower bound is:
-
-\\(
-\begin{align}
-H[q] & = - \int q(z) \log q(z) dz \\\
- & = - \int (\sum_i c_i q_i(z)) \log(\sum_i c_i q_i(z)) dz \\\
- & \geq - \sum_i c_i \int q_i(z) \log q_i(z) dz \\\
- & = \sum_i c_i H[q_i]
-\end{align}
-\\)
-
-This is the term we calculate below for \\( G[q] \\).
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A lower bound on the Mixture's entropy.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.event_shape` {#Mixture.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.event_shape_tensor(name='event_shape_tensor')` {#Mixture.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.is_continuous` {#Mixture.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.is_scalar_batch(name='is_scalar_batch')` {#Mixture.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.is_scalar_event(name='is_scalar_event')` {#Mixture.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.log_cdf(value, name='log_cdf')` {#Mixture.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.log_prob(value, name='log_prob')` {#Mixture.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.log_survival_function(value, name='log_survival_function')` {#Mixture.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.mean(name='mean')` {#Mixture.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.mode(name='mode')` {#Mixture.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.name` {#Mixture.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.num_components` {#Mixture.num_components}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Mixture.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.param_static_shapes(cls, sample_shape)` {#Mixture.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.parameters` {#Mixture.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.prob(value, name='prob')` {#Mixture.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.reparameterization_type` {#Mixture.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.sample(sample_shape=(), seed=None, name='sample')` {#Mixture.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG.
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
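-A minimal usage sketch (hedged; parameter names are assumed for this
-snapshot of the contrib API):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-gm = ds.Mixture(
-    cat=ds.Categorical(logits=[0., 0.]),        # two equally likely components
-    components=[ds.Normal(loc=-1., scale=0.5),
-                ds.Normal(loc=1., scale=0.5)])
-samples = gm.sample(sample_shape=[10], seed=42)  # `Tensor` of shape [10]
-```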
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.stddev(name='stddev')` {#Mixture.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.survival_function(value, name='survival_function')` {#Mixture.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
-                     = 1 - P[X <= x]
-                     = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.validate_args` {#Mixture.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Mixture.variance(name='variance')` {#Mixture.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.bijector.SoftmaxCentered.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.bijector.SoftmaxCentered.md
deleted file mode 100644
index 1e47513e6b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.bijector.SoftmaxCentered.md
+++ /dev/null
@@ -1,298 +0,0 @@
-Bijector which computes `Y = g(X) = exp([X 0]) / sum(exp([X 0]))`.
-
-To implement [softmax](https://en.wikipedia.org/wiki/Softmax_function) as a
-bijection, the forward transformation appends a value to the input and the
-inverse removes this coordinate. The appended coordinate represents a pivot,
-e.g., `softmax(x) = exp(x-c) / sum(exp(x-c))` where `c` is the implicit last
-coordinate.
-
-Because we append a coordinate, this bijector only supports `event_ndims` in
-`[0, 1]`, i.e., scalars and vectors.
-
-Example Use:
-
-```python
-bijector.SoftmaxCentered(event_ndims=1).forward(tf.log([2., 3., 4.]))
-# Result: [0.2, 0.3, 0.4, 0.1]
-# Extra result: 0.1
-
-bijector.SoftmaxCentered(event_ndims=1).inverse([0.2, 0.3, 0.4, 0.1])
-# Result: tf.log([2., 3., 4.])
-# Extra coordinate removed.
-```
-
-At first blush it may seem like the [Invariance of domain](
-https://en.wikipedia.org/wiki/Invariance_of_domain) theorem implies this
-implementation is not a bijection. However, the appended dimension
-makes the (forward) image non-open and the theorem does not directly apply.
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.__init__(event_ndims=0, validate_args=False, name='softmax_centered')` {#SoftmaxCentered.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.dtype` {#SoftmaxCentered.dtype}
-
-dtype of `Tensor`s transformable by this distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.event_ndims` {#SoftmaxCentered.event_ndims}
-
-Returns the number of event dimensions this bijector operates on.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.forward(x, name='forward')` {#SoftmaxCentered.forward}
-
-Returns the forward `Bijector` evaluation, i.e., Y = g(X).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `x.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if `_forward` is not implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.forward_event_shape(input_shape)` {#SoftmaxCentered.forward_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `forward_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `forward` function.
-
-##### Returns:
-
-
-* <b>`forward_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `forward`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.forward_event_shape_tensor(input_shape, name='forward_event_shape_tensor')` {#SoftmaxCentered.forward_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`input_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `forward` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`forward_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `forward`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.forward_log_det_jacobian(x, name='forward_log_det_jacobian')` {#SoftmaxCentered.forward_log_det_jacobian}
-
-Returns the forward_log_det_jacobian.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. The input to the "forward" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_forward_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.graph_parents` {#SoftmaxCentered.graph_parents}
-
-Returns this `Bijector`'s graph_parents as a Python list.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse(y, name='inverse')` {#SoftmaxCentered.inverse}
-
-Returns the inverse `Bijector` evaluation, i.e., X = g^{-1}(Y).
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse_and_inverse_log_det_jacobian(y, name='inverse_and_inverse_log_det_jacobian')` {#SoftmaxCentered.inverse_and_inverse_log_det_jacobian}
-
-Returns both the inverse evaluation and inverse_log_det_jacobian.
-
-Enables possibly more efficient calculation when both inverse and
-corresponding Jacobian are needed.
-
-See `inverse()`, `inverse_log_det_jacobian()` for more details.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_and_inverse_log_det_jacobian`
- nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse_event_shape(output_shape)` {#SoftmaxCentered.inverse_event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-Same meaning as `inverse_event_shape_tensor`. May be only partially defined.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `TensorShape` indicating event-portion shape passed into
- `inverse` function.
-
-##### Returns:
-
-
-* <b>`inverse_event_shape`</b>: `TensorShape` indicating event-portion shape
- after applying `inverse`. Possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse_event_shape_tensor(output_shape, name='inverse_event_shape_tensor')` {#SoftmaxCentered.inverse_event_shape_tensor}
-
-Shape of a single sample from a single batch as an `int32` 1D `Tensor`.
-
-##### Args:
-
-
-* <b>`output_shape`</b>: `Tensor`, `int32` vector indicating event-portion shape
- passed into `inverse` function.
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`inverse_event_shape_tensor`</b>: `Tensor`, `int32` vector indicating
- event-portion shape after applying `inverse`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.inverse_log_det_jacobian(y, name='inverse_log_det_jacobian')` {#SoftmaxCentered.inverse_log_det_jacobian}
-
-Returns the (log o det o Jacobian o inverse)(y).
-
-Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X=g^{-1}(Y)`.)
-
-Note that `forward_log_det_jacobian` is the negative of this function,
-evaluated at the corresponding point `x = g^{-1}(y)`.
-
-##### Args:
-
-
-* <b>`y`</b>: `Tensor`. The input to the "inverse" Jacobian evaluation.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `self.dtype` is specified and `y.dtype` is not
- `self.dtype`.
-* <b>`NotImplementedError`</b>: if neither `_inverse_log_det_jacobian` nor
- `_inverse_and_inverse_log_det_jacobian` are implemented.
-
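-A quick numerical sanity check of that identity (a hedged sketch):
-
-```python
-import tensorflow as tf
-
-b = tf.contrib.distributions.bijector.SoftmaxCentered(event_ndims=1)
-x = tf.constant([[0.5, -0.2]])
-y = b.forward(x)
-total = b.forward_log_det_jacobian(x) + b.inverse_log_det_jacobian(y)
-# `total` should evaluate to approximately zero.
-```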
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.is_constant_jacobian` {#SoftmaxCentered.is_constant_jacobian}
-
-Returns true iff the Jacobian is not a function of x.
-
-Note: Jacobian is either constant for both forward and inverse or neither.
-
-##### Returns:
-
-
-* <b>`is_constant_jacobian`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.name` {#SoftmaxCentered.name}
-
-Returns the string name of this `Bijector`.
-
-
-- - -
-
-#### `tf.contrib.distributions.bijector.SoftmaxCentered.validate_args` {#SoftmaxCentered.validate_args}
-
-Returns True if Tensor arguments will be validated.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.matrix_diag_transform.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.matrix_diag_transform.md
deleted file mode 100644
index 1f39a487ba..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.matrix_diag_transform.md
+++ /dev/null
@@ -1,53 +0,0 @@
-### `tf.contrib.distributions.matrix_diag_transform(matrix, transform=None, name=None)` {#matrix_diag_transform}
-
-Transform diagonal of [batch-]matrix, leave rest of matrix unchanged.
-
-Create a trainable covariance defined by a Cholesky factor:
-
-```python
-# Transform network layer into 2 x 2 array.
-matrix_values = tf.contrib.layers.fully_connected(activations, 4)
-matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
-
-# Make the diagonal positive. If the upper triangle was zero, this would be a
-# valid Cholesky factor.
-chol = matrix_diag_transform(matrix, transform=tf.nn.softplus)
-
-# OperatorPDCholesky ignores the upper triangle.
-operator = OperatorPDCholesky(chol)
-```
-
-Example of heteroskedastic 2-D linear regression.
-
-```python
-# Get a trainable Cholesky factor.
-matrix_values = tf.contrib.layers.fully_connected(activations, 4)
-matrix = tf.reshape(matrix_values, (batch_size, 2, 2))
-chol = matrix_diag_transform(matrix, transform=tf.nn.softplus)
-
-# Get a trainable mean.
-mu = tf.contrib.layers.fully_connected(activations, 2)
-
-# This is a fully trainable multivariate normal!
-dist = tf.contrib.distributions.MVNCholesky(mu, chol)
-
-# Standard log loss. Minimizing this will "train" mu and chol, and then dist
-# will be a distribution predicting labels as multivariate Gaussians.
-loss = -1 * tf.reduce_mean(dist.log_prob(labels))
-```
-
-##### Args:
-
-
-* <b>`matrix`</b>: Rank `R` `Tensor`, `R >= 2`, where the last two dimensions are
- equal.
-* <b>`transform`</b>: Element-wise function mapping `Tensors` to `Tensors`. To
- be applied to the diagonal of `matrix`. If `None`, `matrix` is returned
- unchanged. Defaults to `None`.
-* <b>`name`</b>: A name to give created ops.
- Defaults to "matrix_diag_transform".
-
-##### Returns:
-
- A `Tensor` with same shape and `dtype` as `matrix`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.normal_conjugates_known_scale_posterior.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.normal_conjugates_known_scale_posterior.md
deleted file mode 100644
index 554d553e77..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.distributions.normal_conjugates_known_scale_posterior.md
+++ /dev/null
@@ -1,48 +0,0 @@
-### `tf.contrib.distributions.normal_conjugates_known_scale_posterior(prior, scale, s, n)` {#normal_conjugates_known_scale_posterior}
-
-Posterior Normal distribution with conjugate prior on the mean.
-
-This model assumes that `n` observations (with sum `s`) come from a
-Normal with unknown mean `loc` (described by the Normal `prior`)
-and known variance `scale**2`. The "known scale posterior" is
-the distribution of the unknown `loc`.
-
-Accepts a prior Normal distribution object, having parameters
-`loc0` and `scale0`, as well as known `scale` values of the predictive
-distribution(s) (also assumed Normal),
-and statistical estimates `s` (the sum(s) of the observations) and
-`n` (the number(s) of observations).
-
-Returns a posterior (also Normal) distribution object, with parameters
-`(loc', scale'**2)`, where:
-
-```
-loc ~ N(loc', scale'**2)
-scale'**2 = 1/(1/scale0**2 + n/scale**2),
-loc' = (loc0/scale0**2 + s/scale**2) * scale'**2.
-```
-
-Distribution parameters from `prior`, as well as `scale`, `s`, and `n`,
-will broadcast in the case of multidimensional sets of parameters.
-
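-A minimal usage sketch (hedged; `observations` is an assumed 1-D float
-`Tensor` of data):
-
-```python
-import tensorflow as tf
-
-ds = tf.contrib.distributions
-prior = ds.Normal(loc=0., scale=1.)    # loc0, scale0
-posterior = ds.normal_conjugates_known_scale_posterior(
-    prior=prior,
-    scale=0.5,                         # known stddev of the observations
-    s=tf.reduce_sum(observations),
-    n=tf.size(observations))           # int32 count of observations
-```
-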
-##### Args:
-
-
-* <b>`prior`</b>: `Normal` object of type `dtype`:
- the prior distribution having parameters `(loc0, scale0)`.
-* <b>`scale`</b>: tensor of type `dtype`, taking values `scale > 0`.
- The known stddev parameter(s).
-* <b>`s`</b>: Tensor of type `dtype`. The sum(s) of observations.
-* <b>`n`</b>: Tensor of type `int`. The number(s) of observations.
-
-##### Returns:
-
- A new Normal posterior distribution object for the unknown observation
- mean `loc`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if dtype of `s` does not match `dtype`, or `prior` is not a
- Normal object.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.assert_same_float_dtype.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.assert_same_float_dtype.md
deleted file mode 100644
index e5ecdd0898..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.assert_same_float_dtype.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.contrib.framework.assert_same_float_dtype(tensors=None, dtype=None)` {#assert_same_float_dtype}
-
-Validate and return float type based on `tensors` and `dtype`.
-
-For ops such as matrix multiplication, inputs and weights must be of the
-same float type. This function validates that all `tensors` are the same type,
-validates that type is `dtype` (if supplied), and returns the type. Type must
-be `dtypes.float32` or `dtypes.float64`. If neither `tensors` nor
-`dtype` is supplied, the type defaults to `dtypes.float32`.
-
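-A minimal sketch (hedged) of the intended use before a matmul:
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [None, 3])
-w = tf.get_variable('w', shape=[3, 2], dtype=tf.float32)
-dtype = tf.contrib.framework.assert_same_float_dtype([x, w])  # tf.float32
-```
-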
-##### Args:
-
-
-* <b>`tensors`</b>: Tensors of input values. Can include `None` elements, which will be
- ignored.
-* <b>`dtype`</b>: Expected type.
-
-##### Returns:
-
- Validated type.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if neither `tensors` nor `dtype` is supplied, or result is not
- float.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.get_variable_full_name.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.get_variable_full_name.md
deleted file mode 100644
index 24aa87a829..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.get_variable_full_name.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.framework.get_variable_full_name(var)` {#get_variable_full_name}
-
-Returns the full name of a variable.
-
-For normal Variables, this is the same as the var.op.name. For
-sliced or PartitionedVariables, this name is the same for all the
-slices/partitions. In both cases, this is normally the name used in
-a checkpoint file.
-
-##### Args:
-
-
-* <b>`var`</b>: A `Variable` object.
-
-##### Returns:
-
- A string that is the full name.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.zero_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.zero_initializer.md
deleted file mode 100644
index 7f78c18e45..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.framework.zero_initializer.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.contrib.framework.zero_initializer(ref, use_locking=True, name='zero_initializer')` {#zero_initializer}
-
-Initialize 'ref' with all zeros. The ref tensor must be uninitialized;
-if it is already initialized, a ValueError is raised. This op is intended
-to save memory during initialization.
-
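-A hedged usage sketch (run `init_op` in place of the variable's regular
-initializer):
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(tf.zeros([1024, 1024]), trainable=False)  # initial value not run
-init_op = tf.contrib.framework.zero_initializer(v)        # zeros `v` in place
-```
-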
-##### Args:
-
-
-* <b>`ref`</b>: ref of the tensor to be zero-initialized.
-* <b>`name`</b>: optional name for this operation.
-
-##### Returns:
-
- The ref that was initialized.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the ref tensor is already initialized.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.ControlOutputs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.ControlOutputs.md
deleted file mode 100644
index 30b5d435d1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.ControlOutputs.md
+++ /dev/null
@@ -1,52 +0,0 @@
-The control outputs topology.
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.__init__(graph)` {#ControlOutputs.__init__}
-
-Create a dictionary of control-output dependencies.
-
-##### Args:
-
-
-* <b>`graph`</b>: a `tf.Graph`.
-
-##### Returns:
-
- A dictionary where a key is a `tf.Operation` instance and the
- corresponding value is a list of all the ops which have the key
- as one of their control-input dependencies.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: graph is not a `tf.Graph`.
-
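-A hedged usage sketch (`some_op` is an assumed `tf.Operation`):
-
-```python
-import tensorflow as tf
-
-co = tf.contrib.graph_editor.ControlOutputs(tf.get_default_graph())
-# Ops that list `some_op` among their control-input dependencies:
-dependents = co.get(some_op)
-```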
-
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.get(op)` {#ControlOutputs.get}
-
-Return the control outputs of `op`.
-
-
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.get_all()` {#ControlOutputs.get_all}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.graph` {#ControlOutputs.graph}
-
-
-
-
-- - -
-
-#### `tf.contrib.graph_editor.ControlOutputs.update()` {#ControlOutputs.update}
-
-Update the control outputs if the graph has changed.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.Transformer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.Transformer.md
deleted file mode 100644
index 2b8f433b54..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.Transformer.md
+++ /dev/null
@@ -1,64 +0,0 @@
-Transform a subgraph into another one.
-
-By default, the constructor creates a transform which copies a subgraph and
-replaces its inputs with placeholders. This behavior can be modified by
-changing the handlers.
-- - -
-
-#### `tf.contrib.graph_editor.Transformer.__call__(sgv, dst_graph, dst_scope, src_scope='', reuse_dst_scope=False)` {#Transformer.__call__}
-
-Execute the transformation.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the source subgraph-view.
-* <b>`dst_graph`</b>: the destination graph.
-* <b>`dst_scope`</b>: the destination scope.
-* <b>`src_scope`</b>: the source scope, which specifies the path from which the
- relative paths of the transformed nodes are computed. For instance, if
- src_scope is a/ and dst_scope is b/, then the node a/x/y will have a
- relative path of x/y and will be transformed into b/x/y.
-* <b>`reuse_dst_scope`</b>: if True the dst_scope is re-used if it already exists.
- Otherwise, the scope is given a unique name based on the one given
- by appending an underscore followed by a digit (default).
-
-##### Returns:
-
- A tuple `(sgv, info)` where:
- `sgv` is the transformed subgraph view;
- `info` is an instance of TransformerInfo containing
- information about the transform, including mapping between
- original and transformed tensors and operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the arguments are invalid.
-
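-A hedged sketch of a plain copy into a new scope (tensors `a` and `b` are
-assumed to exist in the default graph):
-
-```python
-import tensorflow as tf
-
-ge = tf.contrib.graph_editor
-sgv = ge.sgv(a.op, b.op)
-transformer = ge.Transformer()
-copied_sgv, info = transformer(sgv, tf.get_default_graph(), 'copy/')
-```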
-
-- - -
-
-#### `tf.contrib.graph_editor.Transformer.__init__()` {#Transformer.__init__}
-
-Transformer constructor.
-
-The following members can be modified:
-transform_op_handler: handle the transformation of a `tf.Operation`.
- This handler defaults to a simple copy.
-assign_collections_handler: handle the assignment of collections.
- This handler defaults to assigning new collections created under the
- given name-scope.
-transform_external_input_handler: handle the transform of the inputs to
- the given subgraph. This handler defaults to creating placeholders
- instead of the ops just before the input tensors of the subgraph.
-transform_external_hidden_input_handler: handle the transform of the
- hidden inputs of the subgraph, that is, the inputs which are not listed
- in sgv.inputs. This handler defaults to a transform which keeps the same
- input if the source and destination graphs are the same, and otherwise
- uses placeholders.
-transform_original_op_handler: handle the transform of original_op. This
- handler defaults to transforming original_op only if it is in the
- subgraph; otherwise it is ignored.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.check_cios.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.check_cios.md
deleted file mode 100644
index 6943d50376..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.check_cios.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.contrib.graph_editor.check_cios(control_inputs=False, control_outputs=None, control_ios=None)` {#check_cios}
-
-Do various checks on control_inputs and control_outputs.
-
-##### Args:
-
-
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of util.ControlOutputs or None. If not None,
- control outputs are enabled.
-* <b>`control_ios`</b>: An instance of util.ControlOutputs or None. If not None, both
- control inputs and control outputs are enabled. This is equivalent to
- setting control_inputs to True and control_outputs to the
- util.ControlOutputs instance.
-
-##### Returns:
-
- A tuple `(control_inputs, control_outputs)` where:
- `control_inputs` is a boolean indicating whether to use control inputs.
- `control_outputs` is an instance of util.ControlOutputs or None
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if control_inputs is an instance of util.ControlOutputs but
- control_outputs is not None
-* <b>`TypeError`</b>: if control_outputs is not None and is not a util.ControlOutputs.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.copy_with_input_replacements.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.copy_with_input_replacements.md
deleted file mode 100644
index 47a30fe1be..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.copy_with_input_replacements.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.graph_editor.copy_with_input_replacements(sgv, replacement_ts, dst_graph=None, dst_scope='', src_scope='', reuse_dst_scope=False)` {#copy_with_input_replacements}
-
-Copy a subgraph, replacing some of its inputs.
-
-Note a replacement only happens if the tensor to be replaced
-is an input of the given subgraph. The inputs of a subgraph can
-be queried using sgv.inputs.
-
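-A hedged sketch (`sgv`, `t_old` and `t_new` are assumed to already exist):
-
-```python
-import tensorflow as tf
-
-ge = tf.contrib.graph_editor
-copied_sgv, info = ge.copy_with_input_replacements(sgv, {t_old: t_new})
-```
-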
-##### Args:
-
-
-* <b>`sgv`</b>: the source subgraph-view. This argument is converted to a subgraph
- using the same rules as the function subgraph.make_view.
-* <b>`replacement_ts`</b>: dictionary mapping from original tensors to their
- replacements.
-* <b>`dst_graph`</b>: the destination graph.
-* <b>`dst_scope`</b>: the destination scope.
-* <b>`src_scope`</b>: the source scope.
-* <b>`reuse_dst_scope`</b>: if True the dst_scope is re-used if it already exists.
- Otherwise, the scope is given a unique name based on the one given
- by appending an underscore followed by a digit (default).
-
-##### Returns:
-
- A tuple `(sgv, info)` where:
- `sgv` is the transformed subgraph view;
- `info` is an instance of TransformerInfo containing
- information about the transform, including mapping between
- original and transformed tensors and operations.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if dst_graph is not a tf.Graph.
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
- the same rules as the function subgraph.make_view.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.detach.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.detach.md
deleted file mode 100644
index 8230e9b3cd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.detach.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.graph_editor.detach(sgv, control_inputs=False, control_outputs=None, control_ios=None)` {#detach}
-
-Detach both the inputs and the outputs of a subgraph view.
-
-##### Args:
-
-
-* <b>`sgv`</b>: the subgraph view to be detached. This argument is converted to a
- subgraph using the same rules as the function subgraph.make_view.
- Note that sgv is modified in place.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of util.ControlOutputs or None. If not None,
- control outputs are enabled.
-* <b>`control_ios`</b>: An instance of util.ControlOutputs or None. If not None, both
- control inputs and control outputs are enabled. This is equivalent to
- setting control_inputs to True and control_outputs to the
- util.ControlOutputs instance.
-
-##### Returns:
-
- A tuple `(sgv, detached_inputs, detached_outputs)` where:
- `sgv` is a new subgraph view of the detached subgraph;
- `detached_inputs` is a list of the created input placeholders;
- `detached_outputs` is a list of the created output placeholders.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv cannot be converted to a SubGraphView using
- the same rules as the function subgraph.make_view.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.get_backward_walk_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.get_backward_walk_ops.md
deleted file mode 100644
index f22dc2ed1c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.get_backward_walk_ops.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.contrib.graph_editor.get_backward_walk_ops(seed_ops, inclusive=True, within_ops=None, stop_at_ts=(), control_inputs=False)` {#get_backward_walk_ops}
-
-Do a backward graph walk and return all the visited ops.
-
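-A hedged sketch (`loss` is an assumed tensor; its generating ops are used
-as seeds):
-
-```python
-import tensorflow as tf
-
-ge = tf.contrib.graph_editor
-ops = ge.get_backward_walk_ops(seed_ops=[loss], inclusive=True)
-```
-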
-##### Args:
-
-
-* <b>`seed_ops`</b>: an iterable of operations from which the backward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the generators of those tensors.
-* <b>`inclusive`</b>: if True the given seed_ops are also part of the resulting set.
-* <b>`within_ops`</b>: an iterable of `tf.Operation` within which the search is
- restricted. If `within_ops` is `None`, the search is performed within
- the whole graph.
-* <b>`stop_at_ts`</b>: an iterable of tensors at which the graph walk stops.
-* <b>`control_inputs`</b>: if True, control inputs will be used while moving backward.
-
-##### Returns:
-
- A Python set of all the `tf.Operation` behind `seed_ops`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `seed_ops` or `within_ops` cannot be converted to a list of
- `tf.Operation`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.get_ops_ios.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.get_ops_ios.md
deleted file mode 100644
index 30491b740c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.graph_editor.get_ops_ios.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.contrib.graph_editor.get_ops_ios(ops, control_inputs=False, control_outputs=None, control_ios=None)` {#get_ops_ios}
-
-Return all the `tf.Operation` which are connected to an op in ops.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of `tf.Operation`.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of `util.ControlOutputs` or `None`. If not
- `None`, control outputs are enabled.
-* <b>`control_ios`</b>: An instance of `util.ControlOutputs` or `None`. If not `None`,
- both control inputs and control outputs are enabled. This is equivalent to
- setting `control_inputs` to `True` and `control_outputs` to the
- `util.ControlOutputs` instance.
-
-##### Returns:
-
- All the `tf.Operation` surrounding the given ops.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `ops` cannot be converted to a list of `tf.Operation`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.apply_regularization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.apply_regularization.md
deleted file mode 100644
index ec6beee0af..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.apply_regularization.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.contrib.layers.apply_regularization(regularizer, weights_list=None)` {#apply_regularization}
-
-Returns the summed penalty by applying `regularizer` to the `weights_list`.
-
-Adding a regularization penalty over the layer weights and embedding weights
-can help prevent overfitting the training data. Regularization over layer
-biases is less common/useful, but assuming proper data preprocessing/mean
-subtraction, it usually shouldn't hurt much either.
-
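-A hedged sketch of summing an L2 penalty over explicit weights (`w1`, `w2`
-and `data_loss` are assumed):
-
-```python
-import tensorflow as tf
-
-l2 = tf.contrib.layers.l2_regularizer(scale=1e-4)
-penalty = tf.contrib.layers.apply_regularization(l2, weights_list=[w1, w2])
-total_loss = data_loss + penalty
-```
-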
-##### Args:
-
-
-* <b>`regularizer`</b>: A function that takes a single `Tensor` argument and returns
- a scalar `Tensor` output.
-* <b>`weights_list`</b>: List of weights `Tensors` or `Variables` to apply
- `regularizer` over. Defaults to the `GraphKeys.WEIGHTS` collection if
- `None`.
-
-##### Returns:
-
- A scalar representing the overall regularization penalty.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `regularizer` does not return a scalar output, or if we find
- no weights.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.conv2d_in_plane.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.conv2d_in_plane.md
deleted file mode 100644
index 83319a6d6b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.conv2d_in_plane.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.contrib.layers.conv2d_in_plane(*args, **kwargs)` {#conv2d_in_plane}
-
-Performs the same in-plane convolution on each channel independently.
-
-This is useful for performing various simple channel-independent convolution
-operations such as image gradients:
-
-```python
-image = tf.constant(..., shape=(16, 240, 320, 3))
-vert_gradients = layers.conv2d_in_plane(image,
-                                        kernel=[1, -1],
-                                        kernel_size=[2, 1])
-horz_gradients = layers.conv2d_in_plane(image,
-                                        kernel=[1, -1],
-                                        kernel_size=[1, 2])
-```
-
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D tensor with dimensions [batch_size, height, width, channels].
-* <b>`kernel_size`</b>: A list of length 2 holding the [kernel_height, kernel_width]
- of the pooling. Can be an int if both values are the same.
-* <b>`stride`</b>: A list of length 2 `[stride_height, stride_width]`.
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: The padding type to use, either 'SAME' or 'VALID'.
-* <b>`activation_fn`</b>: Activation function. The default value is a ReLU function.
- Explicitly set it to None to skip it and maintain a linear activation.
-* <b>`normalizer_fn`</b>: Normalization function to use instead of `biases`. If
- `normalizer_fn` is provided then `biases_initializer` and
- `biases_regularizer` are ignored and `biases` are not created nor added.
- Defaults to `None`, i.e., no normalizer function.
-* <b>`normalizer_params`</b>: Normalization function parameters.
-* <b>`weights_initializer`</b>: An initializer for the weights.
-* <b>`weights_regularizer`</b>: Optional regularizer for the weights.
-* <b>`biases_initializer`</b>: An initializer for the biases. If `None`, skip biases.
-* <b>`biases_regularizer`</b>: Optional regularizer for the biases.
-* <b>`reuse`</b>: Whether or not the layer and its variables should be reused. To be
- able to reuse the layer scope must be given.
-* <b>`variables_collections`</b>: Optional list of collections for all the variables or
- a dictionary containing a different list of collection per variable.
-* <b>`outputs_collections`</b>: Collection to add the outputs.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for `variable_scope`.
-
-##### Returns:
-
- A `Tensor` representing the output of the operation.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.max_pool2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.max_pool2d.md
deleted file mode 100644
index 823ec0f734..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.max_pool2d.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.contrib.layers.max_pool2d(*args, **kwargs)` {#max_pool2d}
-
-Adds a 2D Max Pooling op.
-
-It is assumed that the pooling is done per image but not in batch or channels.
-
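-A hedged sketch of 2x2 pooling with stride 2 on an `NHWC` batch:
-
-```python
-import tensorflow as tf
-
-images = tf.placeholder(tf.float32, [None, 224, 224, 3])
-pooled = tf.contrib.layers.max_pool2d(images, kernel_size=2, stride=2)
-# `pooled` has shape [None, 112, 112, 3] with the default 'VALID' padding.
-```
-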
-##### Args:
-
-
-* <b>`inputs`</b>: A 4-D tensor of shape `[batch_size, height, width, channels]` if
- `data_format` is `NHWC`, and `[batch_size, channels, height, width]` if
- `data_format` is `NCHW`.
-* <b>`kernel_size`</b>: A list of length 2: [kernel_height, kernel_width] of the
- pooling kernel over which the op is computed. Can be an int if both
- values are the same.
-* <b>`stride`</b>: A list of length 2: [stride_height, stride_width].
- Can be an int if both strides are the same. Note that presently
- both strides must have the same value.
-* <b>`padding`</b>: The padding method, either 'VALID' or 'SAME'.
-* <b>`data_format`</b>: A string. `NHWC` (default) and `NCHW` are supported.
-* <b>`outputs_collections`</b>: The collections to which the outputs are added.
-* <b>`scope`</b>: Optional scope for name_scope.
-
-##### Returns:
-
- A `Tensor` representing the results of the pooling operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `data_format` is neither `NHWC` nor `NCHW`.
-* <b>`ValueError`</b>: If `kernel_size` is not a list of length 2.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.parse_feature_columns_from_examples.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.parse_feature_columns_from_examples.md
deleted file mode 100644
index 8d2e3543f5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.parse_feature_columns_from_examples.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.contrib.layers.parse_feature_columns_from_examples(serialized, feature_columns, name=None, example_names=None)` {#parse_feature_columns_from_examples}
-
-Parses tf.Examples to extract tensors for given feature_columns.
-
-This is a wrapper around `tf.parse_example`.
-
-Example:
-
-```python
-columns_to_tensor = parse_feature_columns_from_examples(
- serialized=my_data,
- feature_columns=my_features)
-
-# Where my_features are:
-# Define features and transformations
-sparse_feature_a = sparse_column_with_keys(
- column_name="sparse_feature_a", keys=["AB", "CD", ...])
-
-embedding_feature_a = embedding_column(
- sparse_id_column=sparse_feature_a, dimension=3, combiner="sum")
-
-sparse_feature_b = sparse_column_with_hash_bucket(
- column_name="sparse_feature_b", hash_bucket_size=1000)
-
-embedding_feature_b = embedding_column(
- sparse_id_column=sparse_feature_b, dimension=16, combiner="sum")
-
-crossed_feature_a_x_b = crossed_column(
- columns=[sparse_feature_a, sparse_feature_b], hash_bucket_size=10000)
-
-real_feature = real_valued_column("real_feature")
-real_feature_buckets = bucketized_column(
- source_column=real_feature, boundaries=[...])
-
-my_features = [embedding_feature_b, real_feature_buckets, embedding_feature_a]
-```
-
-##### Args:
-
-
-* <b>`serialized`</b>: A vector (1-D Tensor) of strings, a batch of binary
- serialized `Example` protos.
-* <b>`feature_columns`</b>: An iterable containing all the feature columns. All items
- should be instances of classes derived from _FeatureColumn.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`example_names`</b>: A vector (1-D Tensor) of strings (optional), the names of
- the serialized protos in the batch.
-
-##### Returns:
-
- A `dict` mapping FeatureColumn to `Tensor` and `SparseTensor` values.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.parse_feature_columns_from_sequence_examples.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.parse_feature_columns_from_sequence_examples.md
deleted file mode 100644
index 99d37f4f77..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.parse_feature_columns_from_sequence_examples.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.contrib.layers.parse_feature_columns_from_sequence_examples(serialized, context_feature_columns, sequence_feature_columns, name=None, example_name=None)` {#parse_feature_columns_from_sequence_examples}
-
-Parses tf.SequenceExamples to extract tensors for given `FeatureColumn`s.
-
-##### Args:
-
-
-* <b>`serialized`</b>: A scalar (0-D Tensor) of type string, a single serialized
- `SequenceExample` proto.
-* <b>`context_feature_columns`</b>: An iterable containing the feature columns for
- context features. All items should be instances of classes derived from
- `_FeatureColumn`. Can be `None`.
-* <b>`sequence_feature_columns`</b>: An iterable containing the feature columns for
- sequence features. All items should be instances of classes derived from
- `_FeatureColumn`. Can be `None`.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`example_name`</b>: A scalar (0-D Tensor) of type string (optional), the name of
- the serialized proto.
-
-##### Returns:
-
- A tuple consisting of:
-
-* <b>`context_features`</b>: a dict mapping `FeatureColumns` from
- `context_feature_columns` to their parsed `Tensors`/`SparseTensor`s.
-* <b>`sequence_features`</b>: a dict mapping `FeatureColumns` from
- `sequence_feature_columns` to their parsed `Tensors`/`SparseTensor`s.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.summarize_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.summarize_tensor.md
deleted file mode 100644
index 872ba5c9d4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.summarize_tensor.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.layers.summarize_tensor(tensor, tag=None)` {#summarize_tensor}
-
-Summarize a tensor using a suitable summary type.
-
-This function adds a summary op for `tensor`. The type of summary depends on
-the shape of `tensor`. For scalars, a `scalar_summary` is created, for all
-other tensors, `histogram_summary` is used.
-
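-A hedged sketch (`loss` and `weights` are assumed tensors):
-
-```python
-import tensorflow as tf
-
-loss_summary = tf.contrib.layers.summarize_tensor(loss, tag='loss')  # scalar
-weight_summary = tf.contrib.layers.summarize_tensor(weights)         # histogram
-```
-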
-##### Args:
-
-
-* <b>`tensor`</b>: The tensor to summarize.
-* <b>`tag`</b>: The tag to use; if `None`, the tensor's op's name is used.
-
-##### Returns:
-
- The summary op created or None for string tensors.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.xavier_initializer_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.xavier_initializer_conv2d.md
deleted file mode 100644
index 9deeb48b5b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.layers.xavier_initializer_conv2d.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.contrib.layers.xavier_initializer_conv2d(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer_conv2d}
-
-Returns an initializer performing "Xavier" initialization for weights.
-
-This function implements the weight initialization from:
-
-Xavier Glorot and Yoshua Bengio (2010):
- Understanding the difficulty of training deep feedforward neural
- networks. International conference on artificial intelligence and
- statistics.
-
-This initializer is designed to keep the scale of the gradients roughly the
-same in all layers. For a uniform distribution this ends up being the range
-`[-x, x]` with `x = sqrt(6. / (in + out))`; for a normal distribution a
-standard deviation of `sqrt(3. / (in + out))` is used.
-
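-A hedged usage sketch for a 3x3 convolution kernel:
-
-```python
-import tensorflow as tf
-
-init = tf.contrib.layers.xavier_initializer_conv2d(uniform=True)
-w = tf.get_variable('conv_w', shape=[3, 3, 64, 128], initializer=init)
-```
-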
-##### Args:
-
-
-* <b>`uniform`</b>: Whether to use uniform or normal distributed random initialization.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`dtype`</b>: The data type. Only floating point types are supported.
-
-##### Returns:
-
- An initializer for a weight matrix.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.Experiment.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.Experiment.md
deleted file mode 100644
index 6e891922f4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.Experiment.md
+++ /dev/null
@@ -1,229 +0,0 @@
-Experiment is a class containing all information needed to train a model.
-
-After an experiment is created (by passing an Estimator and inputs for
-training and evaluation), an Experiment instance knows how to invoke training
-and eval loops in a sensible fashion for distributed training.
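-
-A hedged construction sketch (`my_estimator` and the input functions are
-assumed):
-
-```python
-import tensorflow as tf
-
-experiment = tf.contrib.learn.Experiment(
-    estimator=my_estimator,
-    train_input_fn=my_train_input_fn,
-    eval_input_fn=my_eval_input_fn,
-    train_steps=10000)
-experiment.train_and_evaluate()
-```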
-- - -
-
-#### `tf.contrib.learn.Experiment.__init__(*args, **kwargs)` {#Experiment.__init__}
-
-Constructor for `Experiment`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-10-23.
-Instructions for updating:
-local_eval_frequency is deprecated as local_run will be renamed to train_and_evaluate. Use min_eval_frequency and call train_and_evaluate instead. Note, however, that the default for min_eval_frequency is 1, meaning models will be evaluated every time a new checkpoint is available. In contrast, the default for local_eval_frequency is None, resulting in evaluation occurring only after training has completed. min_eval_frequency is ignored when calling the deprecated local_run.
-
-Creates an Experiment instance. None of the functions passed to this
-constructor are executed at construction time. They are stored and used
-when a method is executed which requires it.
-
-##### Args:
-
-
-* <b>`estimator`</b>: Object implementing `Trainable` and `Evaluable`.
-* <b>`train_input_fn`</b>: function, returns features and labels for training.
-* <b>`eval_input_fn`</b>: function, returns features and labels for evaluation. If
- `eval_steps` is `None`, this should be configured to produce data for only
- a finite number of batches (generally, 1 epoch over the evaluation data).
-* <b>`eval_metrics`</b>: `dict` of string, metric function. If `None`, default set
- is used.
-* <b>`train_steps`</b>: Perform this many steps of training. `None`, the default,
- means train forever.
-* <b>`eval_steps`</b>: `evaluate` runs until input is exhausted (or another exception
- is raised), or for `eval_steps` steps, if specified.
-* <b>`train_monitors`</b>: A list of monitors to pass to the `Estimator`'s `fit`
- function.
-* <b>`eval_hooks`</b>: A list of `SessionRunHook` hooks to pass to the
- `Estimator`'s `evaluate` function.
-* <b>`local_eval_frequency`</b>: Frequency of running eval in steps,
- when running locally. If `None`, runs evaluation only at the end of
- training.
-* <b>`eval_delay_secs`</b>: Start evaluating after waiting for this many seconds.
-* <b>`continuous_eval_throttle_secs`</b>: Do not re-evaluate unless the last
- evaluation was started at least this many seconds ago for
- continuous_eval().
-* <b>`min_eval_frequency`</b>: (applies only to train_and_evaluate). The minimum
- number of steps between evaluations. Evaluation does not occur if no new
- checkpoint is available, hence this is a minimum.
-* <b>`delay_workers_by_global_step`</b>: if `True` delays training workers
- based on global step instead of time.
-* <b>`export_strategies`</b>: A list of `ExportStrategy`s, or a single one, or None.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `estimator` does not implement `Evaluable` and `Trainable`,
- or if export_strategies has the wrong type.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.continuous_eval(delay_secs=None, throttle_delay_secs=None, evaluate_checkpoint_only_once=True, continuous_eval_predicate_fn=None)` {#Experiment.continuous_eval}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.continuous_eval_on_train_data(delay_secs=None, throttle_delay_secs=None, continuous_eval_predicate_fn=None)` {#Experiment.continuous_eval_on_train_data}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.estimator` {#Experiment.estimator}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.eval_metrics` {#Experiment.eval_metrics}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.eval_steps` {#Experiment.eval_steps}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.evaluate(delay_secs=None)` {#Experiment.evaluate}
-
-Evaluate on the evaluation data.
-
-Runs evaluation on the evaluation data and returns the result. Runs for
-`self._eval_steps` steps, or, if it's `None`, until input is
-exhausted or another exception is raised. Starts the evaluation after
-`delay_secs` seconds, or, if it's `None`, after
-`self._eval_delay_secs` seconds.
-
-##### Args:
-
-
-* <b>`delay_secs`</b>: Start evaluating after this many seconds. If `None`, defaults
- to using `self._eval_delay_secs`.
-
-##### Returns:
-
- The result of the `evaluate` call to the `Estimator`.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.extend_train_hooks(additional_hooks)` {#Experiment.extend_train_hooks}
-
-Extends the hooks for training.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.local_run(*args, **kwargs)` {#Experiment.local_run}
-
-DEPRECATED FUNCTION
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-23.
-Instructions for updating:
-local_run will be renamed to train_and_evaluate and the new default behavior will be to run evaluation every time there is a new checkpoint.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.reset_export_strategies(new_export_strategies=None)` {#Experiment.reset_export_strategies}
-
-Resets the export strategies with the `new_export_strategies`.
-
-##### Args:
-
-
-* <b>`new_export_strategies`</b>: A new list of `ExportStrategy`s, or a single one,
- or None.
-
-##### Returns:
-
- The old export strategies.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.run_std_server()` {#Experiment.run_std_server}
-
-Starts a TensorFlow server and joins the serving thread.
-
-Typically used for parameter servers.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if not enough information is available in the estimator's
- config to create a server.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.test()` {#Experiment.test}
-
-Tests both training and evaluating the estimator, each for a single step.
-
-##### Returns:
-
- The result of the `evaluate` call to the `Estimator`.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.train(delay_secs=None)` {#Experiment.train}
-
-Fit the estimator using the training data.
-
-Train the estimator for `self._train_steps` steps, after waiting for
-`delay_secs` seconds. If `self._train_steps` is `None`, train forever.
-
-##### Args:
-
-
-* <b>`delay_secs`</b>: Start training after this many seconds.
-
-##### Returns:
-
- The trained estimator.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.train_and_evaluate()` {#Experiment.train_and_evaluate}
-
-Interleaves training and evaluation.
-
-The frequency of evaluation is controlled by the constructor arg
-`min_eval_frequency`. When this parameter is None or 0, evaluation happens
-only after training has completed. Note that evaluation cannot happen
-more frequently than checkpoints are taken. If no new snapshots are
-available when evaluation is supposed to occur, then evaluation doesn't
-happen for another `min_eval_frequency` steps (assuming a checkpoint is
-available at that point). Thus, setting `min_eval_frequency` to 1 means
-that the model will be evaluated every time there is a new checkpoint.
-
-This is particularly useful for a "Master" task in the cloud, whose
-responsibility it is to take checkpoints, evaluate those checkpoints,
-and write out summaries. Participating in training as the supervisor
-allows such a task to accomplish the first and last items, while
-performing evaluation allows for the second.
-
-##### Returns:
-
- The result of the `evaluate` call to the `Estimator`.
-
-
-- - -
-
-#### `tf.contrib.learn.Experiment.train_steps` {#Experiment.train_steps}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.KMeansClustering.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.KMeansClustering.md
deleted file mode 100644
index 712b3d1140..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.KMeansClustering.md
+++ /dev/null
@@ -1,413 +0,0 @@
-An Estimator for K-Means clustering.
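-
-A hedged usage sketch (`my_input_fn` is an assumed input function yielding
-points):
-
-```python
-import tensorflow as tf
-
-kmeans = tf.contrib.learn.KMeansClustering(num_clusters=5)
-kmeans.fit(input_fn=my_input_fn, steps=100)
-centers = kmeans.clusters()
-```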
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.__init__(num_clusters, model_dir=None, initial_clusters='random', distance_metric='squared_euclidean', random_seed=0, use_mini_batch=True, mini_batch_steps_per_iteration=1, kmeans_plus_plus_num_retries=2, relative_tolerance=None, config=None)` {#KMeansClustering.__init__}
-
-Creates a model for running KMeans training and inference.
-
-##### Args:
-
-
-* <b>`num_clusters`</b>: number of clusters to train.
-* <b>`model_dir`</b>: the directory to save the model results and log files.
-* <b>`initial_clusters`</b>: specifies how to initialize the clusters for training.
- See clustering_ops.kmeans for the possible values.
-* <b>`distance_metric`</b>: the distance metric used for clustering.
- See clustering_ops.kmeans for the possible values.
-* <b>`random_seed`</b>: Python integer. Seed for PRNG used to initialize centers.
-* <b>`use_mini_batch`</b>: If true, use the mini-batch k-means algorithm. Otherwise,
- assume full batch.
-* <b>`mini_batch_steps_per_iteration`</b>: number of steps after which the updated
- cluster centers are synced back to a master copy. See clustering_ops.py
- for more details.
-* <b>`kmeans_plus_plus_num_retries`</b>: For each point that is sampled during
- kmeans++ initialization, this parameter specifies the number of
- additional points to draw from the current distribution before selecting
- the best. If a negative value is specified, a heuristic is used to
- sample O(log(num_to_sample)) additional points.
-* <b>`relative_tolerance`</b>: A relative tolerance of change in the loss between
- iterations. Stops learning if the loss changes less than this amount.
- Note that this may not work correctly if use_mini_batch=True.
-* <b>`config`</b>: See Estimator.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.__repr__()` {#KMeansClustering.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.clusters()` {#KMeansClustering.clusters}
-
-Returns cluster centers.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.config` {#KMeansClustering.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.evaluate(*args, **kwargs)` {#KMeansClustering.evaluate}
-
-See `Evaluable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` or `y` is provided, and at least one of
- `input_fn` or `feed_fn` is provided.
- Or if `metrics` is not `None` or `dict`.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.export(*args, **kwargs)` {#KMeansClustering.export}
-
-Exports inference graph into given dir. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23.
-Instructions for updating:
-The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will become required args, and use_deprecated_input_fn will default to False and be removed altogether.
-
-##### Args:
-
-
-* <b>`export_dir`</b>: A string containing a directory to write the exported graph
- and checkpoints.
-* <b>`input_fn`</b>: If `use_deprecated_input_fn` is true, then a function that given
- `Tensor` of `Example` strings, parses it into features that are then
- passed to the model. Otherwise, a function that takes no argument and
- returns a tuple of (features, labels), where features is a dict of
- string key to `Tensor` and labels is a `Tensor` that's currently not
- used (and so can be `None`).
-* <b>`input_feature_key`</b>: Only used if `use_deprecated_input_fn` is false. String
- key into the features dict returned by `input_fn` that corresponds to
- the raw `Example` strings `Tensor` that the exported model will take as
- input. Can only be `None` if you're using a custom `signature_fn` that
- does not use the first arg (examples).
-* <b>`use_deprecated_input_fn`</b>: Determines the signature format of `input_fn`.
-* <b>`signature_fn`</b>: Function that returns a default signature and a named
- signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
- for features and `Tensor` or `dict` of `Tensor`s for predictions.
-* <b>`prediction_key`</b>: The key for a tensor in the `predictions` dict (output
- from the `model_fn`) to use as the `predictions` input to the
- `signature_fn`. Optional. If `None`, predictions will pass to
- `signature_fn` without filtering.
-* <b>`default_batch_size`</b>: Default batch size of the `Example` placeholder.
-* <b>`exports_to_keep`</b>: Number of exports to keep.
-* <b>`checkpoint_path`</b>: the checkpoint path of the model to be exported. If it is
- `None` (which is default), will use the latest checkpoint in
- export_dir.
-
-##### Returns:
-
- The string path to the exported directory. NB: this functionality was
- added ca. 2016/09/25; clients that depend on the return value may need
- to handle the case where this function returns None because subclasses
- are not returning a value.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#KMeansClustering.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
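-A hedged usage sketch, assuming an already-trained estimator `kmeans`; the
-feature spec, export path, and module path below are illustrative
-assumptions, not part of this API's contract:
-
-```python
-import tensorflow as tf
-# Assumed module path for InputFnOps in this TF generation.
-from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
-
-def serving_input_fn():
-  # Serialized tf.Example protos are fed to this placeholder at serving time.
-  serialized = tf.placeholder(dtype=tf.string, shape=[None])
-  features = tf.parse_example(
-      serialized, {'x': tf.FixedLenFeature(shape=[2], dtype=tf.float32)})
-  return input_fn_utils.InputFnOps(features, None, {'examples': serialized})
-
-export_path = kmeans.export_savedmodel('/tmp/kmeans_export', serving_input_fn)
-```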
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.fit(*args, **kwargs)` {#KMeansClustering.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.get_params(deep=True)` {#KMeansClustering.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.get_variable_names()` {#KMeansClustering.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.get_variable_value(name)` {#KMeansClustering.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.model_dir` {#KMeansClustering.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.partial_fit(*args, **kwargs)` {#KMeansClustering.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. This can implement
-either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at once, or when the model takes a long time to
-converge and you want to split up training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
- iterator that returns array of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.predict(*args, **kwargs)` {#KMeansClustering.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The input samples on which to predict. If
- set, `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x` and `batch_size` must be `None`.
-* <b>`batch_size`</b>: Override default batch size. If set, `input_fn` must be
- `None`.
-* <b>`outputs`</b>: list of `str`, names of the outputs to predict.
- If `None`, returns all.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- A numpy array of predicted classes or regression values if the
- constructor's `model_fn` returns a `Tensor` for `predictions` or a `dict`
- of numpy arrays if `model_fn` returns a `dict`. Returns an iterable of
- predictions if as_iterable is True.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If x and input_fn are both provided or both `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.predict_cluster_idx(input_fn=None)` {#KMeansClustering.predict_cluster_idx}
-
-Yields predicted cluster indices.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.score(input_fn=None, steps=None)` {#KMeansClustering.score}
-
-Predict total sum of distances to nearest clusters.
-
-Note that this function is different from the corresponding one in sklearn
-which returns the negative of the sum of distances.
-
-##### Args:
-
-
-* <b>`input_fn`</b>: see predict.
-* <b>`steps`</b>: see predict.
-
-##### Returns:
-
- Total sum of distances to nearest clusters.
-
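-For example (a sketch; `kmeans` and `my_input_fn` are assumed to be an
-already-trained estimator and an input function for the evaluation points):
-
-```python
-# Lower is better. Unlike sklearn's KMeans.score(), which returns the
-# negative sum of distances, this returns the positive sum.
-total_distance = kmeans.score(input_fn=my_input_fn, steps=1)
-```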
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.set_params(**params)` {#KMeansClustering.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The former have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
-- - -
-
-#### `tf.contrib.learn.KMeansClustering.transform(input_fn=None, as_iterable=False)` {#KMeansClustering.transform}
-
-Transforms each element to distances to cluster centers.
-
-Note that this function is different from the corresponding one in sklearn.
-For SQUARED_EUCLIDEAN distance metric, sklearn transform returns the
-EUCLIDEAN distance, while this function returns the SQUARED_EUCLIDEAN
-distance.
-
-##### Args:
-
-
-* <b>`input_fn`</b>: see predict.
-* <b>`as_iterable`</b>: see predict.
-
-##### Returns:
-
- Array with the same number of rows as the input, and num_clusters columns,
- containing distances to the cluster centers.
-
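-As a rough numpy reference for the SQUARED_EUCLIDEAN case (illustrative
-only, not the implementation):
-
-```python
-import numpy as np
-
-def squared_distances(points, centers):
-  # points: [n_samples, d]; centers: [num_clusters, d].
-  # Returns [n_samples, num_clusters] squared Euclidean distances,
-  # matching the layout transform() produces.
-  diff = points[:, np.newaxis, :] - centers[np.newaxis, :, :]
-  return np.sum(diff ** 2, axis=-1)
-```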
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.TaskType.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.TaskType.md
deleted file mode 100644
index 8b13789179..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.TaskType.md
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.extract_pandas_labels.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.extract_pandas_labels.md
deleted file mode 100644
index 2cbb8e0652..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.learn.extract_pandas_labels.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.learn.extract_pandas_labels(labels)` {#extract_pandas_labels}
-
-Extract data from pandas.DataFrame for labels.
-
-##### Args:
-
-
-* <b>`labels`</b>: `pandas.DataFrame` or `pandas.Series` containing one column of
- labels to be extracted.
-
-##### Returns:
-
- A numpy `ndarray` of labels from the DataFrame.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if more than one column is found or type is not int, float or
- bool.
-
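-A minimal usage sketch:
-
-```python
-import pandas as pd
-import tensorflow as tf
-
-labels_df = pd.DataFrame({'label': [0, 1, 1, 0]})
-y = tf.contrib.learn.extract_pandas_labels(labels_df)
-# y is a numpy ndarray: array([0, 1, 1, 0])
-```
-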
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.linalg.LinearOperatorTriL.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.linalg.LinearOperatorTriL.md
deleted file mode 100644
index 403e5dac69..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.linalg.LinearOperatorTriL.md
+++ /dev/null
@@ -1,521 +0,0 @@
-`LinearOperator` acting like a [batch] square lower triangular matrix.
-
-This operator acts like a [batch] lower triangular matrix `A` with shape
-`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, : :]` is
-an `N x N` matrix.
-
-`LinearOperatorTriL` is initialized with a `Tensor` having dimensions
-`[B1,...,Bb, N, N]`. The upper triangle of the last two dimensions is ignored.
-
-```python
-# Create a 2 x 2 lower-triangular linear operator.
-tril = [[1., 2.], [3., 4.]]
-operator = LinearOperatorTriL(tril)
-
-# The upper triangle is ignored.
-operator.to_dense()
-==> [[1., 0.]
- [3., 4.]]
-
-operator.shape
-==> [2, 2]
-
-operator.log_determinant()
-==> scalar Tensor
-
-x = ... Shape [2, 4] Tensor
-operator.apply(x)
-==> Shape [2, 4] Tensor
-
-# Create a [2, 3] batch of 4 x 4 linear operators.
-tril = tf.random_normal(shape=[2, 3, 4, 4])
-operator = LinearOperatorTriL(tril)
-```
-
-#### Shape compatibility
-
-This operator acts on [batch] matrices with compatible shape.
-`x` is a batch matrix with compatible shape for `apply` and `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [N, N], with b >= 0
-x.shape = [B1,...,Bb] + [N, R], with R >= 0.
-```
-
-#### Performance
-
-Suppose `operator` is a `LinearOperatorTriL` of shape `[N, N]`,
-and `x.shape = [N, R]`. Then
-
-* `operator.apply(x)` involves `N^2 * R` multiplications.
-* `operator.solve(x)` involves `N * R` size `N` back-substitutions.
-* `operator.determinant()` involves a size `N` `reduce_prod`.
-
-If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and
-`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.__init__(tril, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, name='LinearOperatorTriL')` {#LinearOperatorTriL.__init__}
-
-Initialize a `LinearOperatorTriL`.
-
-##### Args:
-
-
-* <b>`tril`</b>: Shape `[B1,...,Bb, N, N]` with `b >= 0`, `N >= 0`.
- The lower triangular part of `tril` defines this operator. The strictly
- upper triangle is ignored. Allowed dtypes: `float32`, `float64`.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
- This operator is non-singular if and only if its diagonal elements are
- all non-zero.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose. This operator is self-adjoint only if it is diagonal with
- real-valued diagonal entries. In this case it is advised to use
- `LinearOperatorDiag`.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
- meaning the real part of all eigenvalues is positive. We do not require
- the operator to be self-adjoint to be positive-definite. See:
- https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `tril.dtype` is not an allowed type.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.add_to_tensor(x, name='add_to_tensor')` {#LinearOperatorTriL.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.apply(x, adjoint=False, name='apply')` {#LinearOperatorTriL.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.assert_non_singular(name='assert_non_singular')` {#LinearOperatorTriL.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.assert_positive_definite(name='assert_positive_definite')` {#LinearOperatorTriL.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperatorTriL.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.batch_shape` {#LinearOperatorTriL.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperatorTriL.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.determinant(name='det')` {#LinearOperatorTriL.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.diag_part(name='diag_part')` {#LinearOperatorTriL.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.domain_dimension` {#LinearOperatorTriL.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperatorTriL.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.dtype` {#LinearOperatorTriL.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.graph_parents` {#LinearOperatorTriL.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.is_non_singular` {#LinearOperatorTriL.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.is_positive_definite` {#LinearOperatorTriL.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.is_self_adjoint` {#LinearOperatorTriL.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.is_square` {#LinearOperatorTriL.is_square}
-
-Return `True/False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.log_abs_determinant(name='log_abs_det')` {#LinearOperatorTriL.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.name` {#LinearOperatorTriL.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.range_dimension` {#LinearOperatorTriL.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperatorTriL.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.shape` {#LinearOperatorTriL.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.shape_tensor(name='shape_tensor')` {#LinearOperatorTriL.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.solve(rhs, adjoint=False, name='solve')` {#LinearOperatorTriL.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
- `Tensor` with shape `[...,N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.tensor_rank` {#LinearOperatorTriL.tensor_rank}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperatorTriL.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperatorTriL.to_dense(name='to_dense')` {#LinearOperatorTriL.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.losses.sparse_softmax_cross_entropy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.losses.sparse_softmax_cross_entropy.md
deleted file mode 100644
index 4e774b2741..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.losses.sparse_softmax_cross_entropy.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.contrib.losses.sparse_softmax_cross_entropy(*args, **kwargs)` {#sparse_softmax_cross_entropy}
-
-Cross-entropy loss using `tf.nn.sparse_softmax_cross_entropy_with_logits`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.sparse_softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided,
-then the loss is simply scaled by the given value. If `weights` is a
-tensor of size [`batch_size`], then the loss weights apply to each
-corresponding sample.
-
-##### Args:
-
-
-* <b>`logits`</b>: [batch_size, num_classes] logits outputs of the network.
-* <b>`labels`</b>: [batch_size, 1] or [batch_size] labels of dtype `int32` or `int64`
- in the range `[0, num_classes)`.
-* <b>`weights`</b>: Coefficients for the loss. The tensor must be a scalar or a tensor
- of shape [batch_size] or [batch_size, 1].
-* <b>`scope`</b>: the scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the mean loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shapes of `logits`, `labels`, and `weights` are
- incompatible, or if `weights` is None.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.confusion_matrix.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.confusion_matrix.md
deleted file mode 100644
index 831de8ac6b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.confusion_matrix.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.metrics.confusion_matrix(labels, predictions, num_classes=None, dtype=tf.int32, name=None, weights=None)` {#confusion_matrix}
-
-Deprecated. Use tf.confusion_matrix instead.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_accuracy.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_accuracy.md
deleted file mode 100644
index 3a930314a3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_accuracy.md
+++ /dev/null
@@ -1,50 +0,0 @@
-### `tf.contrib.metrics.streaming_accuracy(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_accuracy}
-
-Calculates how often `predictions` matches `labels`.
-
-The `streaming_accuracy` function creates two local variables, `total` and
-`count` that are used to compute the frequency with which `predictions`
-matches `labels`. This frequency is ultimately returned as `accuracy`: an
-idempotent operation that simply divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the `accuracy`.
-Internally, an `is_correct` operation computes a `Tensor` with elements 1.0
-where the corresponding elements of `predictions` and `labels` match and 0.0
-otherwise. Then `update_op` increments `total` with the reduced sum of the
-product of `weights` and `is_correct`, and it increments `count` with the
-reduced sum of `weights`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
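-A plain-numpy sketch of the update rule described above (illustrative, not
-the implementation):
-
-```python
-import numpy as np
-
-total, count = 0.0, 0.0  # the two local variables
-
-def update(predictions, labels, weights=None):
-  """One `update_op` step; returns the running accuracy."""
-  global total, count
-  if weights is None:
-    weights = np.ones_like(labels, dtype=float)
-  is_correct = (predictions == labels).astype(float)
-  total += np.sum(weights * is_correct)
-  count += np.sum(weights)
-  return total / count
-```
-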
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of any shape.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose shape matches
- `predictions`.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or the same rank as `labels`, and
- must be broadcastable to `labels` (i.e., all dimensions must be either
- `1`, or the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that `accuracy` should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that `update_op` should
- be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`accuracy`</b>: A `Tensor` representing the accuracy, the value of `total` divided
- by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately and whose value matches `accuracy`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_mean_cosine_distance.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_mean_cosine_distance.md
deleted file mode 100644
index 442187dfa4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_mean_cosine_distance.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.contrib.metrics.streaming_mean_cosine_distance(predictions, labels, dim, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_mean_cosine_distance}
-
-Computes the cosine distance between the labels and predictions.
-
-The `streaming_mean_cosine_distance` function creates two local variables,
-`total` and `count` that are used to compute the average cosine distance
-between `predictions` and `labels`. This average is weighted by `weights`,
-and it is ultimately returned as `mean_distance`, which is an idempotent
-operation that simply divides `total` by `count`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`mean_distance`.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of the same shape as `labels`.
-* <b>`labels`</b>: A `Tensor` of arbitrary shape.
-* <b>`dim`</b>: The dimension along which the cosine distance is computed.
-* <b>`weights`</b>: An optional `Tensor` whose shape is broadcastable to `predictions`,
- and whose dimension `dim` is 1.
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`mean_distance`</b>: A `Tensor` representing the current mean, the value of
- `total` divided by `count`.
-* <b>`update_op`</b>: An operation that increments the `total` and `count` variables
- appropriately.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_pearson_correlation.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_pearson_correlation.md
deleted file mode 100644
index c2bc594874..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_pearson_correlation.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.contrib.metrics.streaming_pearson_correlation(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_pearson_correlation}
-
-Computes Pearson correlation coefficient between `predictions`, `labels`.
-
-The `streaming_pearson_correlation` function delegates to
-`streaming_covariance` the tracking of three [co]variances:
-
-- `streaming_covariance(predictions, labels)`, i.e. covariance
-- `streaming_covariance(predictions, predictions)`, i.e. variance
-- `streaming_covariance(labels, labels)`, i.e. variance
-
-The product-moment correlation ultimately returned is an idempotent operation
-`cov(predictions, labels) / sqrt(var(predictions) * var(labels))`. To
-facilitate correlation computation across multiple batches, the function
-groups the `update_op`s of the underlying streaming_covariance and returns an
-`update_op`.
-
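-As a reference point, the unweighted quantity being estimated is (numpy
-sketch, not the streaming implementation):
-
-```python
-import numpy as np
-
-def pearson_r(predictions, labels):
-  # cov(p, l) / sqrt(var(p) * var(l)) over the full data.
-  cov = np.mean((predictions - predictions.mean()) *
-                (labels - labels.mean()))
-  return cov / np.sqrt(predictions.var() * labels.var())
-```
-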
-If `weights` is not None, then it is used to compute a weighted correlation.
-NOTE: these weights are treated as "frequency weights", as opposed to
-"reliability weights". See discussion of the difference on
-https://wikipedia.org/wiki/Weighted_arithmetic_mean#Weighted_sample_variance
-
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of arbitrary size.
-* <b>`labels`</b>: A `Tensor` of the same size as predictions.
-* <b>`weights`</b>: Optional `Tensor` indicating the frequency with which an example is
- sampled. Rank must be 0, or the same rank as `labels`, and must be
- broadcastable to `labels` (i.e., all dimensions must be either `1`, or
- the same as the corresponding `labels` dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`pearson_r`</b>: A `Tensor` representing the current Pearson product-moment
- correlation coefficient, the value of
- `cov(predictions, labels) / sqrt(var(predictions) * var(labels))`.
-* <b>`update_op`</b>: An operation that updates the underlying variables appropriately.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `labels` and `predictions` are of different sizes, or if
- `weights` is the wrong size, or if either `metrics_collections` or
- `updates_collections` are not a `list` or `tuple`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_sparse_average_precision_at_k.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_sparse_average_precision_at_k.md
deleted file mode 100644
index f6cbb8c90a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_sparse_average_precision_at_k.md
+++ /dev/null
@@ -1,57 +0,0 @@
-### `tf.contrib.metrics.streaming_sparse_average_precision_at_k(predictions, labels, k, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_sparse_average_precision_at_k}
-
-Computes average precision@k of predictions with respect to sparse labels.
-
-See `sparse_average_precision_at_k` for details on formula. `weights` are
-applied to the result of `sparse_average_precision_at_k`.
-
-`streaming_sparse_average_precision_at_k` creates two local variables,
-`average_precision_at_<k>/total` and `average_precision_at_<k>/max`, that
-are used to compute the frequency. This frequency is ultimately returned as
-`average_precision_at_<k>`: an idempotent operation that simply divides
-`average_precision_at_<k>/total` by `average_precision_at_<k>/max`.
-
-For estimation of the metric over a stream of data, the function creates an
-`update_op` operation that updates these variables and returns the
-`precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor`
-indicating the top `k` `predictions`. Set operations applied to `top_k` and
-`labels` calculate the true positives and false positives weighted by
-`weights`. Then `update_op` increments `true_positive_at_<k>` and
-`false_positive_at_<k>` using these values.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Float `Tensor` with shape [D1, ... DN, num_classes] where
- N >= 1. Commonly, N=1 and `predictions` has shape
- [batch size, num_classes]. The final dimension contains the logit values
- for each class. [D1, ... DN] must match `labels`.
-* <b>`labels`</b>: `int64` `Tensor` or `SparseTensor` with shape
- [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of
- target classes for the associated prediction. Commonly, N=1 and `labels`
- has shape [batch_size, num_labels]. [D1, ... DN] must match
- `predictions`. Values should be in range [0, num_classes), where
- num_classes is the last dimension of `predictions`. Values outside this
- range are ignored.
-* <b>`k`</b>: Integer, k for @k metric. This will calculate an average precision for
- range `[1,k]`, as documented above.
-* <b>`weights`</b>: `Tensor` whose rank is either 0, or n-1, where n is the rank of
- `labels`. If the latter, it must be broadcastable to `labels` (i.e., all
- dimensions must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that values should
- be added to.
-* <b>`updates_collections`</b>: An optional list of collections that updates should
- be added to.
-* <b>`name`</b>: Name of new update operation, and namespace for other dependent ops.
-
-##### Returns:
-
-
-* <b>`mean_average_precision`</b>: Scalar `float64` `Tensor` with the mean average
- precision values.
-* <b>`update`</b>: `Operation` that increments variables appropriately, and whose
- value matches `metric`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_true_negatives.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_true_negatives.md
deleted file mode 100644
index 5b9dfd33f4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.metrics.streaming_true_negatives.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.contrib.metrics.streaming_true_negatives(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_true_negatives}
-
-Sum the weights of true_negatives.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of arbitrary dimensions. Will
- be cast to `bool`.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose dimensions must match
- `predictions`. Will be cast to `bool`.
-* <b>`weights`</b>: Optional `Tensor` whose rank is either 0, or the same rank as
- `labels`, and must be broadcastable to `labels` (i.e., all dimensions
- must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value_tensor`</b>: A `Tensor` representing the current value of the metric.
-* <b>`update_op`</b>: An operation that accumulates the error from a batch of data.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `predictions` and `labels` have mismatched shapes, or if
- `weights` is not `None` and its shape doesn't match `predictions`, or if
- either `metrics_collections` or `updates_collections` are not a list or
- tuple.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.opt.MovingAverageOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.opt.MovingAverageOptimizer.md
deleted file mode 100644
index 582deec24d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.opt.MovingAverageOptimizer.md
+++ /dev/null
@@ -1,217 +0,0 @@
-Optimizer that computes a moving average of the variables.
-
-Empirically it has been found that using the moving average of the trained
-parameters of a deep network is better than using its trained parameters
-directly. This optimizer allows you to compute this moving average and swap
-the variables at save time so that any code outside of the training loop will
-use by default the averaged values instead of the original ones.
-
-Example of usage:
-
-```python
-
-# Encapsulate your favorite optimizer (here the momentum one)
-# inside the MovingAverageOptimizer.
-opt = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
-opt = tf.contrib.opt.MovingAverageOptimizer(opt)
-# Then create your model and all its variables.
-model = build_model()
-# Add the training op that optimizes using opt.
-# This needs to be called before swapping_saver().
-opt.minimize(cost, var_list)
-# Then create your saver like this:
-saver = opt.swapping_saver()
-# Pass it to your training loop.
-slim.learning.train(
-    model,
-    ...,
-    saver=saver)
-```
-
-Note that for evaluation, the normal saver should be used instead of
-swapping_saver().
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.__init__(opt, average_decay=0.9999, num_updates=None, sequential_update=True)` {#MovingAverageOptimizer.__init__}
-
-Construct a new MovingAverageOptimizer.
-
-##### Args:
-
-
-* <b>`opt`</b>: A tf.Optimizer that will be used to compute and apply gradients.
-* <b>`average_decay`</b>: Float. Decay to use to maintain the moving averages of
- trained variables. See tf.train.ExponentialMovingAverage for details.
-* <b>`num_updates`</b>: Optional count of number of updates applied to variables.
- See tf.train.ExponentialMovingAverage for details.
-* <b>`sequential_update`</b>: Bool. If False, will compute the moving average at
- the same time as the model is updated, potentially doing benign data
- races. If True, will update the moving average after gradient updates.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#MovingAverageOptimizer.apply_gradients}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#MovingAverageOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKey.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything else than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.get_name()` {#MovingAverageOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.get_slot(var, name)` {#MovingAverageOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.get_slot_names()` {#MovingAverageOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#MovingAverageOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-- - -
-
-#### `tf.contrib.opt.MovingAverageOptimizer.swapping_saver(var_list=None, name='swapping_saver', **kwargs)` {#MovingAverageOptimizer.swapping_saver}
-
-Create a saver swapping moving averages and variables.
-
-You should use this saver during training. It will save the moving averages
-of the trained parameters under the original parameter names. For
-evaluations or inference you should use a regular saver and it will
-automatically use the moving averages for the trained variables.
-
-You must call this function after all variables have been created and after
-you have called Optimizer.minimize().
-
-##### Args:
-
-
-* <b>`var_list`</b>: List of variables to save, as per `Saver()`.
- If set to None, will save all the variables that have been
- created before this call.
-* <b>`name`</b>: The name of the saver.
-* <b>`**kwargs`</b>: Keyword arguments of `Saver()`.
-
-##### Returns:
-
- A `tf.Saver` object.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If apply_gradients or minimize has not been called before.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.FusedRNNCell.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.FusedRNNCell.md
deleted file mode 100644
index d862d9ea9d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.FusedRNNCell.md
+++ /dev/null
@@ -1,47 +0,0 @@
-Abstract object representing a fused RNN cell.
-
-A fused RNN cell represents the entire RNN expanded over the time
-dimension. In effect, this represents an entire recurrent network.
-
-Unlike RNN cells which are subclasses of `rnn_cell.RNNCell`, a `FusedRNNCell`
-operates on the entire time sequence at once, by putting the loop over time
-inside the cell. This usually leads to much more efficient, but more complex
-and less flexible implementations.
-
-Every `FusedRNNCell` must implement `__call__` with the following signature.
-- - -
-
-#### `tf.contrib.rnn.FusedRNNCell.__call__(inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#FusedRNNCell.__call__}
-
-Run this fused RNN on inputs, starting from the given state.
-
-##### Args:
-
-
-* <b>`inputs`</b>: `3-D` tensor with shape `[time_len x batch_size x input_size]`
- or a list of `time_len` tensors of shape `[batch_size x input_size]`.
-* <b>`initial_state`</b>: either a tensor with shape `[batch_size x state_size]`
- or a tuple with shapes `[batch_size x s] for s in state_size`, if the
- cell takes tuples. If this is not provided, the cell is expected to
- create a zero initial state of type `dtype`.
-* <b>`dtype`</b>: The data type for the initial state and expected output. Required
- if `initial_state` is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs. An
- `int32` or `int64` vector (tensor) size `[batch_size]`, values in `[0,
- time_len)`.
- Defaults to `time_len` for each element.
-* <b>`scope`</b>: `VariableScope` or `string` for the created subgraph; defaults to
- class name.
-
-##### Returns:
-
- A pair containing:
-
- - Output: A `3-D` tensor of shape `[time_len x batch_size x output_size]`
- or a list of `time_len` tensors of shape `[batch_size x output_size]`,
- to match the type of the `inputs`.
- - Final state: Either a single `2-D` tensor, or a tuple of tensors
- matching the arity and shapes of `initial_state`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.LSTMStateTuple.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.LSTMStateTuple.__new__.md
deleted file mode 100644
index fec450ce78..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.LSTMStateTuple.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.contrib.rnn.LSTMStateTuple.__new__(_cls, c, h)` {#LSTMStateTuple.__new__}
-
-Create new instance of LSTMStateTuple(c, h)
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.static_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.static_rnn.md
deleted file mode 100644
index fb32ce3d2e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.rnn.static_rnn.md
+++ /dev/null
@@ -1,65 +0,0 @@
-### `tf.contrib.rnn.static_rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)` {#static_rnn}
-
-Creates a recurrent neural network specified by RNNCell `cell`.
-
-The simplest form of RNN network generated is:
-
-```python
- state = cell.zero_state(...)
- outputs = []
- for input_ in inputs:
- output, state = cell(input_, state)
- outputs.append(output)
- return (outputs, state)
-```
-
-However, a few other options are available:
-
-An initial state can be provided.
-If the sequence_length vector is provided, dynamic calculation is performed.
-This method of calculation does not compute the RNN steps past the maximum
-sequence length of the minibatch (thus saving computational time),
-and properly propagates the state at an example's sequence length
-to the final state output.
-
-The dynamic calculation performed is, at time `t` for batch row `b`,
-
-```python
- (output, state)(b, t) =
- (t >= sequence_length(b))
- ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
- : cell(input(b, t), state(b, t - 1))
-```
-
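-A minimal usage sketch (shapes and cell choice are illustrative):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=64)
-# A length-T list of [batch_size, input_size] tensors (here T=10).
-inputs = [tf.placeholder(tf.float32, [None, 32]) for _ in range(10)]
-outputs, state = tf.contrib.rnn.static_rnn(cell, inputs, dtype=tf.float32)
-```
-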
-##### Args:
-
-
-* <b>`cell`</b>: An instance of RNNCell.
-* <b>`inputs`</b>: A length T list of inputs, each a `Tensor` of shape
- `[batch_size, input_size]`, or a nested tuple of such elements.
-* <b>`initial_state`</b>: (optional) An initial state for the RNN.
- If `cell.state_size` is an integer, this must be
- a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
- If `cell.state_size` is a tuple, this should be a tuple of
- tensors having shapes `[batch_size, s] for s in cell.state_size`.
-* <b>`dtype`</b>: (optional) The data type for the initial state and expected output.
- Required if initial_state is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`sequence_length`</b>: Specifies the length of each sequence in inputs.
- An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
-
-##### Returns:
-
- A pair (outputs, state) where:
-
- - outputs is a length T list of outputs (one for each input), or a nested
- tuple of such elements.
- - state is the final state
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
-* <b>`ValueError`</b>: If `inputs` is `None` or an empty list, or if the input depth
- (column size) cannot be inferred from inputs via shape inference.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.training.bucket_by_sequence_length.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.training.bucket_by_sequence_length.md
deleted file mode 100644
index f2e69fbb88..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.contrib.training.bucket_by_sequence_length.md
+++ /dev/null
@@ -1,55 +0,0 @@
-### `tf.contrib.training.bucket_by_sequence_length(input_length, tensors, batch_size, bucket_boundaries, num_threads=1, capacity=32, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=True, shared_name=None, name=None)` {#bucket_by_sequence_length}
-
-Lazy bucketing of inputs according to their length.
-
-This method calls `tf.contrib.training.bucket` under the hood, after first
-subdividing the bucket boundaries into separate buckets and identifying which
-bucket the given `input_length` belongs to. See the documentation for
-`which_bucket` for details of the other arguments.
-
-##### Args:
-
-
-* <b>`input_length`</b>: `int32` scalar `Tensor`, the sequence length of tensors.
-* <b>`tensors`</b>: The list or dictionary of tensors, representing a single element,
- to bucket. Nested lists are not supported.
-* <b>`batch_size`</b>: The new batch size pulled from the queue (all queues will have
- the same size). If a list is passed in then each bucket will have a
- different batch_size.
- (python int, int32 scalar or iterable of integers of length num_buckets).
-* <b>`bucket_boundaries`</b>: int list, increasing non-negative numbers.
- The edges of the buckets to use when bucketing tensors. Two extra buckets
- are created, one for `input_length < bucket_boundaries[0]` and
- one for `input_length >= bucket_boundaries[-1]`.
-* <b>`num_threads`</b>: An integer. The number of threads enqueuing `tensors`.
-* <b>`capacity`</b>: An integer. The maximum number of minibatches in the top queue,
- and also the maximum number of elements within each bucket.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batches to be smaller if there are insufficient items left in the queues.
-* <b>`keep_input`</b>: A `bool` scalar Tensor. If provided, this tensor controls
- whether the input is added to the queue or not. If it evaluates to `True`,
- then `tensors` are added to the bucket; otherwise they are dropped. This
- tensor essentially acts as a filtering mechanism.
-* <b>`shared_name`</b>: (Optional). If set, the queues will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A tuple `(sequence_length, outputs)` where `sequence_length` is
- a 1-D `Tensor` of size `batch_size` and `outputs` is a list or dictionary
- of batched, bucketed, outputs corresponding to elements of `tensors`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `bucket_boundaries` is not a list of python integers.
-* <b>`ValueError`</b>: if `bucket_boundaries` is empty or contains non-increasing
-    values, or if `batch_size` is a list and its length doesn't equal the
-    number of buckets.
-
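-For illustration, a minimal construction sketch (not part of the original
-docs; the upstream `tokens`/`length` tensors are hypothetical stand-ins for a
-real reader pipeline):
-
-```python
-import tensorflow as tf
-
-# Hypothetical upstream element: a variable-length int64 sequence and its
-# int32 length, e.g. parsed from a TFRecord reader.
-tokens = tf.random_uniform([25], maxval=100, dtype=tf.int64)
-length = tf.shape(tokens)[0]
-
-seq_len, (padded_tokens,) = tf.contrib.training.bucket_by_sequence_length(
-    input_length=length,
-    tensors=[tokens],
-    batch_size=32,
-    bucket_boundaries=[10, 20, 40],   # 4 buckets: <10, [10,20), [20,40), >=40
-    dynamic_pad=True,                 # pad to a common length per batch
-    allow_smaller_final_batch=True)
-
-# Batches are pulled by queue runners, e.g. via
-# tf.train.start_queue_runners() inside a running session.
-```
-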
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cross.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cross.md
deleted file mode 100644
index eecf2e869b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.cross.md
+++ /dev/null
@@ -1,22 +0,0 @@
-### `tf.cross(a, b, name=None)` {#cross}
-
-Computes the pairwise cross product.
-
-`a` and `b` must be the same shape; they can either be simple 3-element vectors,
-or any shape where the innermost dimension is 3. In the latter case, each pair
-of corresponding 3-element vectors is cross-multiplied independently.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
- A tensor containing 3-element vectors.
-* <b>`b`</b>: A `Tensor`. Must have the same type as `a`.
- Another tensor, of same type and shape as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
- Pairwise cross product of the vectors in `a` and `b`.
-
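-For example, a minimal sketch (not from the original docs) of the standard
-basis identity `e_x x e_y = e_z`:
-
-```python
-import tensorflow as tf
-
-a = tf.constant([1.0, 0.0, 0.0])
-b = tf.constant([0.0, 1.0, 0.0])
-c = tf.cross(a, b)
-with tf.Session() as sess:
-    print(sess.run(c))  # [0. 0. 1.]
-```
-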
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.equal.md
deleted file mode 100644
index 332a12f725..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.equal.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.equal(x, y, name=None)` {#equal}
-
-Returns the truth value of (x == y) element-wise.
-
-*NOTE*: `Equal` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `quint8`, `qint8`, `qint32`, `string`, `bool`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
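-A short sketch (assumed usage, not from the original docs) showing the
-broadcasting behavior:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[1, 2], [3, 4]])
-y = tf.constant([1, 4])  # broadcast against every row of `x`
-with tf.Session() as sess:
-    print(sess.run(tf.equal(x, y)))  # [[ True False]
-                                     #  [False  True]]
-```
-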
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.hessians.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.hessians.md
deleted file mode 100644
index 0aea7659ce..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.hessians.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.hessians(ys, xs, name='hessians', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#hessians}
-
-Constructs the Hessian of the sum of `ys` with respect to each `x` in `xs`.
-
-`hessians()` adds ops to the graph to output the Hessian matrix of `ys`
-with respect to `xs`. It returns a list of `Tensor` of length `len(xs)`
-where each tensor is the Hessian of `sum(ys)`. This function currently
-only supports evaluating the Hessian with respect to (a list of) one-
-dimensional tensors.
-
-The Hessian is a matrix of second-order partial derivatives of a scalar
-tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details).
-
-##### Args:
-
-
-* <b>`ys`</b>: A `Tensor` or list of tensors to be differentiated.
-* <b>`xs`</b>: A `Tensor` or list of tensors to be used for differentiation.
-* <b>`name`</b>: Optional name to use for grouping all the gradient ops together.
-  Defaults to 'hessians'.
-* <b>`colocate_gradients_with_ops`</b>: See `gradients()` documentation for details.
-* <b>`gate_gradients`</b>: See `gradients()` documentation for details.
-* <b>`aggregation_method`</b>: See `gradients()` documentation for details.
-
-##### Returns:
-
-  A list of Hessian matrices of `sum(ys)`, one for each `x` in `xs`.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: if one of the operations between `xs` and `ys` does not
- have a registered gradient function.
-* <b>`ValueError`</b>: if the arguments are invalid or not supported. Currently,
- this function only supports one-dimensional `x` in `xs`.
-
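-As a sanity check, a minimal sketch (assumed usage): for the quadratic
-`sum(x**2)` the Hessian is `2 * I`:
-
-```python
-import tensorflow as tf
-
-x = tf.Variable([1.0, 2.0])          # one-dimensional, as required
-y = tf.reduce_sum(x * x)
-hess = tf.hessians(y, x)[0]          # list with one entry per `x` in `xs`
-with tf.Session() as sess:
-    sess.run(tf.global_variables_initializer())
-    print(sess.run(hess))            # [[2. 0.]
-                                     #  [0. 2.]]
-```
-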
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.crop_to_bounding_box.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.crop_to_bounding_box.md
deleted file mode 100644
index 1ca4247a9b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.crop_to_bounding_box.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width)` {#crop_to_bounding_box}
-
-Crops an image to a specified bounding box.
-
-This op cuts a rectangular part out of `image`. The top-left corner of the
-returned image is at `offset_height, offset_width` in `image`, and its
-lower-right corner is at
-`offset_height + target_height, offset_width + target_width`.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor with shape `[height, width, channels]`
-* <b>`offset_height`</b>: Vertical coordinate of the top-left corner of the result in
- the input.
-* <b>`offset_width`</b>: Horizontal coordinate of the top-left corner of the result in
- the input.
-* <b>`target_height`</b>: Height of the result.
-* <b>`target_width`</b>: Width of the result.
-
-##### Returns:
-
- 3-D tensor of image with shape `[target_height, target_width, channels]`
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `image` is incompatible with the `offset_*` or
- `target_*` arguments, or either `offset_height` or `offset_width` is
- negative, or either `target_height` or `target_width` is not positive.
-
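-A minimal shape-level sketch (assumed usage, not from the original docs):
-
-```python
-import tensorflow as tf
-
-image = tf.zeros([100, 200, 3])      # [height, width, channels]
-crop = tf.image.crop_to_bounding_box(image, offset_height=10, offset_width=40,
-                                     target_height=40, target_width=140)
-print(crop.get_shape())              # (40, 140, 3)
-```
-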
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.draw_bounding_boxes.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.draw_bounding_boxes.md
deleted file mode 100644
index fff67cd42f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.draw_bounding_boxes.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.image.draw_bounding_boxes(images, boxes, name=None)` {#draw_bounding_boxes}
-
-Draw bounding boxes on a batch of images.
-
-Outputs a copy of `images` but draws on top of the pixels zero or more bounding
-boxes specified by the locations in `boxes`. The coordinates of the each
-bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The
-bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and
-height of the underlying image.
-
-For example, if an image is 100 x 200 pixels and the bounding box is
-`[0.1, 0.2, 0.5, 0.9]`, the upper-left and lower-right corners of the
-bounding box will be `(10, 40)` and `(50, 180)` in `(y, x)` pixel coordinates.
-
-Parts of the bounding box may fall outside the image.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `float32`, `half`.
- 4-D with shape `[batch, height, width, depth]`. A batch of images.
-* <b>`boxes`</b>: A `Tensor` of type `float32`.
- 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding
- boxes.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`.
- 4-D with the same shape as `images`. The batch of input images with
- bounding boxes drawn on the images.
-
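-A minimal sketch (assumed usage) matching the 100 x 200 example above:
-
-```python
-import tensorflow as tf
-
-images = tf.zeros([1, 100, 200, 3], dtype=tf.float32)  # batch of one image
-boxes = tf.constant([[[0.1, 0.2, 0.5, 0.9]]])          # [batch, num_boxes, 4]
-drawn = tf.image.draw_bounding_boxes(images, boxes)    # same shape as images
-```
-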
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.per_image_standardization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.per_image_standardization.md
deleted file mode 100644
index 8b7b848443..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.per_image_standardization.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.image.per_image_standardization(image)` {#per_image_standardization}
-
-Linearly scales `image` to have zero mean and unit variance.
-
-This op computes `(x - mean) / adjusted_stddev`, where `mean` is the average
-of all values in image, and
-`adjusted_stddev = max(stddev, 1.0/sqrt(image.NumElements()))`.
-
-`stddev` is the standard deviation of all values in `image`. It is capped
-away from zero to protect against division by 0 when handling uniform images.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor of shape `[height, width, channels]`.
-
-##### Returns:
-
- The standardized image with same shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is incompatible with this function.
-
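-A minimal sketch (assumed usage, not from the original docs):
-
-```python
-import tensorflow as tf
-
-image = tf.random_uniform([64, 64, 3])
-standardized = tf.image.per_image_standardization(image)
-# The result has (approximately) zero mean and unit variance per image.
-```
-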
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bilinear.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bilinear.md
deleted file mode 100644
index a9580ca199..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_bilinear.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.image.resize_bilinear(images, size, align_corners=None, name=None)` {#resize_bilinear}
-
-Resize `images` to `size` using bilinear interpolation.
-
-Input images can be of different types but output images are always float.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
-    If true, rescale the input by `(new_height - 1) / (height - 1)`, which
-    exactly aligns the 4 corners of the input and resized images. If false,
-    rescale by `new_height / height`. The width dimension is treated similarly.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`. 4-D with shape
- `[batch, new_height, new_width, channels]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_images.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_images.md
deleted file mode 100644
index a4b7e8f57a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.resize_images.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.image.resize_images(images, size, method=0, align_corners=False)` {#resize_images}
-
-Resize `images` to `size` using the specified `method`.
-
-Resized images will be distorted if their original aspect ratio is not
-the same as `size`. To avoid distortions see
-[`resize_image_with_crop_or_pad`](#resize_image_with_crop_or_pad).
-
-`method` can be one of:
-
-* <b>`ResizeMethod.BILINEAR`</b>: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
-* <b>`ResizeMethod.NEAREST_NEIGHBOR`</b>: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
-* <b>`ResizeMethod.BICUBIC`</b>: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
-* <b>`ResizeMethod.AREA`</b>: Area interpolation.
-
-##### Args:
-
-
-* <b>`images`</b>: 4-D Tensor of shape `[batch, height, width, channels]` or
- 3-D Tensor of shape `[height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`method`</b>: ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
-* <b>`align_corners`</b>: bool. If true, exactly align all 4 corners of the input
-    and output. Defaults to `False`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `images` is incompatible with the
- shape arguments to this function
-* <b>`ValueError`</b>: if `size` has invalid shape or type.
-* <b>`ValueError`</b>: if an unsupported resize method is specified.
-
-##### Returns:
-
- If `images` was 4-D, a 4-D float Tensor of shape
- `[batch, new_height, new_width, channels]`.
- If `images` was 3-D, a 3-D float Tensor of shape
- `[new_height, new_width, channels]`.
-
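-A minimal sketch (assumed usage) of downscaling a batch with a non-default
-method:
-
-```python
-import tensorflow as tf
-
-images = tf.zeros([8, 100, 200, 3])
-resized = tf.image.resize_images(images, [50, 100],
-                                 method=tf.image.ResizeMethod.AREA)
-print(resized.get_shape())  # (8, 50, 100, 3)
-```
-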
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.transpose_image.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.transpose_image.md
deleted file mode 100644
index 1cc527d345..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.image.transpose_image.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.image.transpose_image(image)` {#transpose_image}
-
-Transpose an image by swapping the first and second dimension.
-
-See also `transpose()`.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor of shape `[height, width, channels]`
-
-##### Returns:
-
- A 3-D tensor of shape `[width, height, channels]`
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.is_inf.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.is_inf.md
deleted file mode 100644
index 56663d6417..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.is_inf.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.is_inf(x, name=None)` {#is_inf}
-
-Returns which elements of x are Inf.
-
-@compatibility(numpy)
-Equivalent to np.isinf
-@end_compatibility
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.lbeta.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.lbeta.md
deleted file mode 100644
index e3ee18dfb3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.lbeta.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.lbeta(x, name='lbeta')` {#lbeta}
-
-Computes `ln(|Beta(x)|)`, reducing along the last dimension.
-
-Given one-dimensional `z = [z_0,...,z_{K-1}]`, we define
-
-```Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)```
-
-And for `n + 1` dimensional `x` with shape `[N1, ..., Nn, K]`, we define
-`lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|)`. In other words,
-the last dimension is treated as the `z` vector.
-
-Note that if `z = [u, v]`, then
-`Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt`, which defines the traditional
-bivariate beta function.
-
-##### Args:
-
-
-* <b>`x`</b>: A rank `n + 1` `Tensor` with type `float` or `double`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The logarithm of `|Beta(x)|` reducing along the last dimension.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` is empty with rank one or less.
-
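-For example, a minimal sketch (assumed usage): `Beta(1, 2) = 1/2`, so
-`lbeta` returns `-log(2)`:
-
-```python
-import tensorflow as tf
-
-z = tf.constant([1.0, 2.0])
-with tf.Session() as sess:
-    print(sess.run(tf.lbeta(z)))  # ~ -0.6931 == -log(2)
-```
-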
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less_equal.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less_equal.md
deleted file mode 100644
index c8ce84b669..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.less_equal.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.less_equal(x, y, name=None)` {#less_equal}
-
-Returns the truth value of (x <= y) element-wise.
-
-*NOTE*: `LessEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.matrix_inverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.matrix_inverse.md
deleted file mode 100644
index ff49493f0c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.matrix_inverse.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.matrix_inverse(input, adjoint=None, name=None)` {#matrix_inverse}
-
-Computes the inverse of one or more square invertible matrices or their
-adjoints (conjugate transposes).
-
-The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
-form square matrices. The output is a tensor of the same shape as the input
-containing the inverse for all input submatrices `[..., :, :]`.
-
-The op uses LU decomposition with partial pivoting to compute the inverses.
-
-If a matrix is not invertible, there is no guarantee what the op does. It
-may detect the condition and raise an exception or it may simply return a
-garbage result.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
- Shape is `[..., M, M]`.
-* <b>`adjoint`</b>: An optional `bool`. Defaults to `False`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
-
- @compatibility(numpy)
- Equivalent to np.linalg.inv
- @end_compatibility
-
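-A minimal sketch (assumed usage) with a diagonal matrix, whose inverse is
-easy to verify by hand:
-
-```python
-import tensorflow as tf
-
-m = tf.constant([[2.0, 0.0],
-                 [0.0, 4.0]])
-inv = tf.matrix_inverse(m)
-with tf.Session() as sess:
-    print(sess.run(inv))  # [[0.5  0.  ]
-                          #  [0.   0.25]]
-```
-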
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.matrix_set_diag.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.matrix_set_diag.md
deleted file mode 100644
index a8f9bf6be8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.matrix_set_diag.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.matrix_set_diag(input, diagonal, name=None)` {#matrix_set_diag}
-
-Returns a batched matrix tensor with new batched diagonal values.
-
-Given `input` and `diagonal`, this operation returns a tensor with the
-same shape and values as `input`, except for the main diagonal of the
-innermost matrices. These will be overwritten by the values in `diagonal`.
-
-The output is computed as follows:
-
-Assume `input` has `k+1` dimensions `[I, J, K, ..., M, N]` and `diagonal` has
-`k` dimensions `[I, J, K, ..., min(M, N)]`. Then the output is a
-tensor of rank `k+1` with dimensions `[I, J, K, ..., M, N]` where:
-
- * `output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`.
- * `output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Rank `k+1`, where `k >= 1`.
-* <b>`diagonal`</b>: A `Tensor`. Must have the same type as `input`.
- Rank `k`, where `k >= 1`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- Rank `k+1`, with `output.shape = input.shape`.
-
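-A minimal sketch (assumed usage): writing ones onto the diagonals of a batch
-of zero matrices yields a batch of identity matrices:
-
-```python
-import tensorflow as tf
-
-input = tf.zeros([2, 3, 3])       # batch of two 3x3 matrices
-diagonal = tf.ones([2, 3])        # one diagonal per matrix
-out = tf.matrix_set_diag(input, diagonal)  # two 3x3 identity matrices
-```
-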
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.batch_normalization.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.batch_normalization.md
deleted file mode 100644
index 4ef94aeda2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.batch_normalization.md
+++ /dev/null
@@ -1,47 +0,0 @@
-### `tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)` {#batch_normalization}
-
-Batch normalization.
-
-As described in http://arxiv.org/abs/1502.03167.
-Normalizes a tensor by `mean` and `variance`, and applies (optionally) a
-`scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):
-
-\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)
-
-`mean`, `variance`, `offset` and `scale` are all expected to be of one of two
-shapes:
-
- * In all generality, they can have the same number of dimensions as the
- input `x`, with identical sizes as `x` for the dimensions that are not
- normalized over (the 'depth' dimension(s)), and dimension 1 for the
- others which are being normalized over.
- `mean` and `variance` in this case would typically be the outputs of
- `tf.nn.moments(..., keep_dims=True)` during training, or running averages
- thereof during inference.
- * In the common case where the 'depth' dimension is the last dimension in
- the input tensor `x`, they may be one dimensional tensors of the same
- size as the 'depth' dimension.
- This is the case for example for the common `[batch, depth]` layout of
- fully-connected layers, and `[batch, height, width, depth]` for
- convolutions.
- `mean` and `variance` in this case would typically be the outputs of
- `tf.nn.moments(..., keep_dims=False)` during training, or running averages
- thereof during inference.
-
-##### Args:
-
-
-* <b>`x`</b>: Input `Tensor` of arbitrary dimensionality.
-* <b>`mean`</b>: A mean `Tensor`.
-* <b>`variance`</b>: A variance `Tensor`.
-* <b>`offset`</b>: An offset `Tensor`, often denoted \\(\beta\\) in equations, or
- None. If present, will be added to the normalized tensor.
-* <b>`scale`</b>: A scale `Tensor`, often denoted \\(\gamma\\) in equations, or
- `None`. If present, the scale is applied to the normalized tensor.
-* <b>`variance_epsilon`</b>: A small float number to avoid dividing by 0.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- the normalized, scaled, offset tensor.
-
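-A minimal sketch (assumed usage) for the common `[batch, depth]` case, with
-`mean` and `variance` coming from `tf.nn.moments`:
-
-```python
-import tensorflow as tf
-
-x = tf.random_normal([32, 10])               # [batch, depth]
-mean, variance = tf.nn.moments(x, axes=[0])  # per-depth statistics
-offset = tf.zeros([10])                      # beta
-scale = tf.ones([10])                        # gamma
-y = tf.nn.batch_normalization(x, mean, variance, offset, scale,
-                              variance_epsilon=1e-3)
-```
-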
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md
deleted file mode 100644
index e57bbb03d0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.bidirectional_dynamic_rnn.md
+++ /dev/null
@@ -1,84 +0,0 @@
-### `tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, inputs, sequence_length=None, initial_state_fw=None, initial_state_bw=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)` {#bidirectional_dynamic_rnn}
-
-Creates a dynamic version of bidirectional recurrent neural network.
-
-Similar to the unidirectional case above (`rnn`) but takes input and builds
-independent forward and backward RNNs. The input_size of the forward and
-backward cells must match. The initial state for both directions is zero by
-default (but can be set optionally) and no intermediate states are ever
-returned -- the network is fully unrolled for the given length(s) of the
-sequence(s), or completely unrolled if no length is given.
-
-##### Args:
-
-
-* <b>`cell_fw`</b>: An instance of RNNCell, to be used for forward direction.
-* <b>`cell_bw`</b>: An instance of RNNCell, to be used for backward direction.
-* <b>`inputs`</b>: The RNN inputs.
- If time_major == False (default), this must be a tensor of shape:
- `[batch_size, max_time, input_size]`.
- If time_major == True, this must be a tensor of shape:
- `[max_time, batch_size, input_size]`.
-* <b>`sequence_length`</b>: An int32/int64 vector, size `[batch_size]`,
- containing the actual lengths for each of the sequences.
-* <b>`initial_state_fw`</b>: (optional) An initial state for the forward RNN.
- This must be a tensor of appropriate type and shape
- `[batch_size, cell_fw.state_size]`.
- If `cell_fw.state_size` is a tuple, this should be a tuple of
- tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
-* <b>`initial_state_bw`</b>: (optional) Same as for `initial_state_fw`, but using
- the corresponding properties of `cell_bw`.
-* <b>`dtype`</b>: (optional) The data type for the initial states and expected output.
- Required if initial_states are not provided or RNN states have a
- heterogeneous dtype.
-* <b>`parallel_iterations`</b>: (Default: 32). The number of iterations to run in
- parallel. Those operations which do not have any temporal dependency
- and can be run in parallel, will be. This parameter trades off
- time for space. Values >> 1 use more memory but take less time,
- while smaller values use less memory but computations take longer.
-* <b>`swap_memory`</b>: Transparently swap the tensors produced in forward inference
- but needed for back prop from GPU to CPU. This allows training RNNs
- which would typically not fit on a single GPU, with very minimal (or no)
- performance penalty.
-* <b>`time_major`</b>: The shape format of the `inputs` and `outputs` Tensors.
- If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
- If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
- Using `time_major = True` is a bit more efficient because it avoids
- transposes at the beginning and end of the RNN calculation. However,
- most TensorFlow data is batch-major, so by default this function
- accepts input and emits output in batch-major form.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "bidirectional_rnn"
-
-##### Returns:
-
- A tuple (outputs, output_states) where:
-
-* <b>`outputs`</b>: A tuple (output_fw, output_bw) containing the forward and
- the backward rnn output `Tensor`.
- If time_major == False (default),
- output_fw will be a `Tensor` shaped:
- `[batch_size, max_time, cell_fw.output_size]`
- and output_bw will be a `Tensor` shaped:
- `[batch_size, max_time, cell_bw.output_size]`.
- If time_major == True,
- output_fw will be a `Tensor` shaped:
- `[max_time, batch_size, cell_fw.output_size]`
- and output_bw will be a `Tensor` shaped:
- `[max_time, batch_size, cell_bw.output_size]`.
- It returns a tuple instead of a single concatenated `Tensor`, unlike
- in the `bidirectional_rnn`. If the concatenated one is preferred,
- the forward and backward outputs can be concatenated as
- `tf.concat(outputs, 2)`.
-* <b>`output_states`</b>: A tuple (output_state_fw, output_state_bw) containing
- the forward and the backward final states of bidirectional rnn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
-
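-A minimal construction sketch (assumed usage; the cell classes are taken
-from `tf.contrib.rnn` in this release):
-
-```python
-import tensorflow as tf
-
-cell_fw = tf.contrib.rnn.GRUCell(32)
-cell_bw = tf.contrib.rnn.GRUCell(32)
-inputs = tf.placeholder(tf.float32, [None, 50, 16])  # [batch, time, depth]
-
-(out_fw, out_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
-    cell_fw, cell_bw, inputs, dtype=tf.float32)
-merged = tf.concat((out_fw, out_bw), 2)  # [batch, 50, 64]
-```
-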
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.conv2d.md
deleted file mode 100644
index d40ed35657..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.conv2d.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv2d}
-
-Computes a 2-D convolution given 4-D `input` and `filter` tensors.
-
-Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
-and a filter / kernel tensor of shape
-`[filter_height, filter_width, in_channels, out_channels]`, this op
-performs the following:
-
-1. Flattens the filter to a 2-D matrix with shape
- `[filter_height * filter_width * in_channels, output_channels]`.
-2. Extracts image patches from the input tensor to form a *virtual*
- tensor of shape `[batch, out_height, out_width,
- filter_height * filter_width * in_channels]`.
-3. For each patch, right-multiplies the filter matrix and the image patch
- vector.
-
-In detail, with the default NHWC format,
-
- output[b, i, j, k] =
- sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
- filter[di, dj, q, k]
-
-Must have `strides[0] = strides[3] = 1`. For the most common case of the same
-horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`filter`</b>: A `Tensor`. Must have the same type as `input`.
-* <b>`strides`</b>: A list of `ints`.
-    1-D of length 4. The stride of the sliding window for each dimension
-    of `input`. Must be in the same order as the dimensions specified by
-    `data_format`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`use_cudnn_on_gpu`</b>: An optional `bool`. Defaults to `True`.
-* <b>`data_format`</b>: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`.
- Specify the data format of the input and output data. With the
- default format "NHWC", the data is stored in the order of:
- [batch, in_height, in_width, in_channels].
- Alternatively, the format could be "NCHW", the data storage order of:
- [batch, in_channels, in_height, in_width].
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
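-A minimal shape-level sketch (assumed usage, not from the original docs):
-
-```python
-import tensorflow as tf
-
-input = tf.random_normal([1, 28, 28, 3])   # NHWC
-filter = tf.random_normal([5, 5, 3, 16])   # [h, w, in_channels, out_channels]
-conv = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')
-print(conv.get_shape())                    # (1, 28, 28, 16)
-```
-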
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.convolution.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.convolution.md
deleted file mode 100644
index f1ed0e2f53..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.convolution.md
+++ /dev/null
@@ -1,116 +0,0 @@
-### `tf.nn.convolution(input, filter, padding, strides=None, dilation_rate=None, name=None, data_format=None)` {#convolution}
-
-Computes sums of N-D convolutions (actually cross-correlation).
-
-This also supports either output striding via the optional `strides` parameter
-or atrous convolution (also known as convolution with holes or dilated
-convolution, based on the French word "trous" meaning holes in English) via
-the optional `dilation_rate` parameter. Currently, however, output striding
-is not supported for atrous convolutions.
-
-Specifically, in the case that `data_format` does not start with "NC", given
-a rank (N+2) `input` Tensor of shape
-
- [num_batches,
- input_spatial_shape[0],
- ...,
- input_spatial_shape[N-1],
- num_input_channels],
-
-a rank (N+2) `filter` Tensor of shape
-
- [spatial_filter_shape[0],
- ...,
- spatial_filter_shape[N-1],
- num_input_channels,
- num_output_channels],
-
-an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N)
-specifying the filter upsampling/input downsampling rate, and an optional list
-of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output
-position (x[0], ..., x[N-1]):
-
- output[b, x[0], ..., x[N-1], k] =
-
- sum_{z[0], ..., z[N-1], q}
-
- filter[z[0], ..., z[N-1], q, k] *
- padded_input[b,
- x[0]*strides[0] + dilation_rate[0]*z[0],
- ...,
- x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1],
- q]
-
-where `padded_input` is obtained by zero padding the input using an effective
-spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and
-output striding `strides` as described in the
-[comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution).
-
-In the case that `data_format` does start with `"NC"`, the `input` and output
-(but not the `filter`) are simply transposed as follows:
-
- convolution(input, data_format, **kwargs) =
- tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]),
- **kwargs),
- [0, N+1] + range(1, N+1))
-
-It is required that 1 <= N <= 3.
-
-##### Args:
-
-
-* <b>`input`</b>: An N-D `Tensor` of type `T`, of shape
- `[batch_size] + input_spatial_shape + [in_channels]` if data_format does
- not start with "NC" (default), or
- `[batch_size, in_channels] + input_spatial_shape` if data_format starts
- with "NC".
-* <b>`filter`</b>: An N-D `Tensor` with the same type as `input` and shape
- `spatial_filter_shape + [in_channels, out_channels]`.
-* <b>`padding`</b>: A string, either `"VALID"` or `"SAME"`. The padding algorithm.
-* <b>`strides`</b>: Optional. Sequence of N ints >= 1. Specifies the output stride.
- Defaults to [1]*N. If any value of strides is > 1, then all values of
- dilation_rate must be 1.
-* <b>`dilation_rate`</b>: Optional. Sequence of N ints >= 1. Specifies the filter
- upsampling/input downsampling rate. In the literature, the same parameter
- is sometimes called `input stride` or `dilation`. The effective filter
- size used for the convolution will be `spatial_filter_shape +
- (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting
- (dilation_rate[i]-1) zeros between consecutive elements of the original
- filter in each spatial dimension i. If any value of dilation_rate is > 1,
- then all values of strides must be 1.
-* <b>`name`</b>: Optional name for the returned tensor.
-* <b>`data_format`</b>: A string or None. Specifies whether the channel dimension of
- the `input` and output is the last dimension (default, or if `data_format`
- does not start with "NC"), or the second dimension (if `data_format`
- starts with "NC"). For N=1, the valid values are "NWC" (default) and
- "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For
- N=3, the valid value is "NDHWC".
-
-##### Returns:
-
- A `Tensor` with the same type as `input` of shape
-
- `[batch_size] + output_spatial_shape + [out_channels]`
-
- if data_format is None or does not start with "NC", or
-
- `[batch_size, out_channels] + output_spatial_shape`
-
- if data_format starts with "NC",
- where `output_spatial_shape` depends on the value of `padding`.
-
- If padding == "SAME":
- output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])
-
- If padding == "VALID":
- output_spatial_shape[i] =
- ceil((input_spatial_shape[i] -
- (spatial_filter_shape[i]-1) * dilation_rate[i])
- / strides[i]).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filter` shape, if padding
- is other than `"VALID"` or `"SAME"`, or if data_format is invalid.
-
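-A minimal sketch (assumed usage) of an atrous (dilated) 2-D convolution;
-note that `dilation_rate > 1` requires unit strides:
-
-```python
-import tensorflow as tf
-
-input = tf.random_normal([1, 32, 32, 3])
-filter = tf.random_normal([3, 3, 3, 8])
-out = tf.nn.convolution(input, filter, padding='SAME', dilation_rate=[2, 2])
-print(out.get_shape())  # (1, 32, 32, 8) with "SAME" padding
-```
-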
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md
deleted file mode 100644
index f2ae7527e7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.dynamic_rnn.md
+++ /dev/null
@@ -1,102 +0,0 @@
-### `tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)` {#dynamic_rnn}
-
-Creates a recurrent neural network specified by RNNCell `cell`.
-
-This function is functionally identical to the function `rnn` above, but
-performs fully dynamic unrolling of `inputs`.
-
-Unlike `rnn`, the input `inputs` is not a Python list of `Tensors`, one for
-each frame. Instead, `inputs` may be a single `Tensor` where
-the maximum time is either the first or second dimension (see the parameter
-`time_major`). Alternatively, it may be a (possibly nested) tuple of
-Tensors, each of them having matching batch and time dimensions.
-The corresponding output is either a single `Tensor` having the same number
-of time steps and batch size, or a (possibly nested) tuple of such tensors,
-matching the nested structure of `cell.output_size`.
-
-The parameter `sequence_length` is optional and is used to copy-through state
-and zero-out outputs when past a batch element's sequence length. So it's more
-for correctness than performance, unlike in rnn().
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of RNNCell.
-* <b>`inputs`</b>: The RNN inputs.
-
- If `time_major == False` (default), this must be a `Tensor` of shape:
- `[batch_size, max_time, ...]`, or a nested tuple of such
- elements.
-
- If `time_major == True`, this must be a `Tensor` of shape:
- `[max_time, batch_size, ...]`, or a nested tuple of such
- elements.
-
- This may also be a (possibly nested) tuple of Tensors satisfying
- this property. The first two dimensions must match across all the inputs,
- but otherwise the ranks and other shape components may differ.
- In this case, input to `cell` at each time-step will replicate the
- structure of these tuples, except for the time dimension (from which the
- time is taken).
-
- The input to `cell` at each time step will be a `Tensor` or (possibly
- nested) tuple of Tensors each with dimensions `[batch_size, ...]`.
-
-* <b>`sequence_length`</b>: (optional) An int32/int64 vector sized `[batch_size]`.
-* <b>`initial_state`</b>: (optional) An initial state for the RNN.
- If `cell.state_size` is an integer, this must be
- a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
- If `cell.state_size` is a tuple, this should be a tuple of
- tensors having shapes `[batch_size, s] for s in cell.state_size`.
-* <b>`dtype`</b>: (optional) The data type for the initial state and expected output.
- Required if initial_state is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`parallel_iterations`</b>: (Default: 32). The number of iterations to run in
- parallel. Those operations which do not have any temporal dependency
- and can be run in parallel, will be. This parameter trades off
- time for space. Values >> 1 use more memory but take less time,
- while smaller values use less memory but computations take longer.
-* <b>`swap_memory`</b>: Transparently swap the tensors produced in forward inference
- but needed for back prop from GPU to CPU. This allows training RNNs
- which would typically not fit on a single GPU, with very minimal (or no)
- performance penalty.
-* <b>`time_major`</b>: The shape format of the `inputs` and `outputs` Tensors.
- If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
- If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
- Using `time_major = True` is a bit more efficient because it avoids
- transposes at the beginning and end of the RNN calculation. However,
- most TensorFlow data is batch-major, so by default this function
- accepts input and emits output in batch-major form.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
-
-##### Returns:
-
- A pair (outputs, state) where:
-
-
-* <b>`outputs`</b>: The RNN output `Tensor`.
-
- If time_major == False (default), this will be a `Tensor` shaped:
- `[batch_size, max_time, cell.output_size]`.
-
- If time_major == True, this will be a `Tensor` shaped:
- `[max_time, batch_size, cell.output_size]`.
-
- Note, if `cell.output_size` is a (possibly nested) tuple of integers
- or `TensorShape` objects, then `outputs` will be a tuple having the
- same structure as `cell.output_size`, containing Tensors having shapes
- corresponding to the shape data in `cell.output_size`.
-
-
-* <b>`state`</b>: The final state. If `cell.state_size` is an int, this
- will be shaped `[batch_size, cell.state_size]`. If it is a
- `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
- If it is a (possibly nested) tuple of ints or `TensorShape`, this will
- be a tuple having the corresponding shapes.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
-* <b>`ValueError`</b>: If inputs is None or an empty list.
-
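-A minimal construction sketch (assumed usage; `BasicLSTMCell` is taken from
-`tf.contrib.rnn` in this release):
-
-```python
-import tensorflow as tf
-
-cell = tf.contrib.rnn.BasicLSTMCell(64)
-inputs = tf.placeholder(tf.float32, [None, 20, 8])  # [batch, time, depth]
-seq_len = tf.placeholder(tf.int32, [None])          # per-example lengths
-
-outputs, state = tf.nn.dynamic_rnn(cell, inputs, sequence_length=seq_len,
-                                   dtype=tf.float32)
-# outputs: [batch, 20, 64]; state: an LSTMStateTuple of (c, h).
-```
-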
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.learned_unigram_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.learned_unigram_candidate_sampler.md
deleted file mode 100644
index 4f69938e59..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.learned_unigram_candidate_sampler.md
+++ /dev/null
@@ -1,53 +0,0 @@
-### `tf.nn.learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#learned_unigram_candidate_sampler}
-
-Samples a set of classes from a distribution learned during training.
-
-This operation randomly samples a tensor of sampled classes
-(`sampled_candidates`) from the range of integers `[0, range_max)`.
-
-The elements of `sampled_candidates` are drawn without replacement
-(if `unique=True`) or with replacement (if `unique=False`) from
-the base distribution.
-
-The base distribution for this operation is constructed on the fly
-during training. It is a unigram distribution over the target
-classes seen so far during training. Every integer in `[0, range_max)`
-begins with a weight of 1, and is incremented by 1 each time it is
-seen as a target class. The base distribution is not saved to checkpoints,
-so it is reset when the model is reloaded.
-
-In addition, this operation returns tensors `true_expected_count`
-and `sampled_expected_count` representing the number of times each
-of the target classes (`true_classes`) and the sampled
-classes (`sampled_candidates`) is expected to occur in an average
-tensor of sampled classes. These values correspond to `Q(y|x)`
-defined in [this
-document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-If `unique=True`, then these are post-rejection probabilities and we
-compute them approximately.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`unique`</b>: A `bool`. Determines whether all sampled classes in a batch are
- unique.
-* <b>`range_max`</b>: An `int`. The number of possible classes.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled classes.
-* <b>`true_expected_count`</b>: A tensor of type `float`. Same shape as
- `true_classes`. The expected counts under the sampling distribution
- of each of `true_classes`.
-* <b>`sampled_expected_count`</b>: A tensor of type `float`. Same shape as
- `sampled_candidates`. The expected counts under the sampling distribution
- of each of `sampled_candidates`.
-
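-A minimal sketch (assumed usage, not from the original docs):
-
-```python
-import tensorflow as tf
-
-true_classes = tf.constant([[12], [45]], dtype=tf.int64)  # [batch, num_true]
-sampled, true_expected, sampled_expected = (
-    tf.nn.learned_unigram_candidate_sampler(
-        true_classes=true_classes, num_true=1, num_sampled=8,
-        unique=True, range_max=1000))
-```
-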
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.pool.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.pool.md
deleted file mode 100644
index 98a70fde53..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.pool.md
+++ /dev/null
@@ -1,80 +0,0 @@
-### `tf.nn.pool(input, window_shape, pooling_type, padding, dilation_rate=None, strides=None, name=None, data_format=None)` {#pool}
-
-Performs an N-D pooling operation.
-
-In the case that `data_format` does not start with "NC", computes for
- 0 <= b < batch_size,
- 0 <= x[i] < output_spatial_shape[i],
- 0 <= c < num_channels:
-
- output[b, x[0], ..., x[N-1], c] =
- REDUCE_{z[0], ..., z[N-1]}
- input[b,
- x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0],
- ...
- x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1],
- c],
-
-where the reduction function REDUCE depends on the value of `pooling_type`,
-and pad_before is defined based on the value of `padding` as described in the
-[comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution).
-The reduction never includes out-of-bounds positions.
-
-In the case that `data_format` starts with `"NC"`, the `input` and output are
-simply transposed as follows:
-
- pool(input, data_format, **kwargs) =
- tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]),
- **kwargs),
- [0, N+1] + range(1, N+1))
-
-##### Args:
-
-
-* <b>`input`</b>: Tensor of rank N+2, of shape
- `[batch_size] + input_spatial_shape + [num_channels]` if data_format does
- not start with "NC" (default), or
- `[batch_size, num_channels] + input_spatial_shape` if data_format starts
- with "NC". Pooling happens over the spatial dimensions only.
-* <b>`window_shape`</b>: Sequence of N ints >= 1.
-* <b>`pooling_type`</b>: Specifies pooling operation, must be "AVG" or "MAX".
-* <b>`padding`</b>: The padding algorithm, must be "SAME" or "VALID".
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`dilation_rate`</b>: Optional. Dilation rate. List of N ints >= 1.
- Defaults to [1]*N. If any value of dilation_rate is > 1, then all values
- of strides must be 1.
-* <b>`strides`</b>: Optional. Sequence of N ints >= 1. Defaults to [1]*N.
- If any value of strides is > 1, then all values of dilation_rate must be
- 1.
-* <b>`name`</b>: Optional. Name of the op.
-* <b>`data_format`</b>: A string or None. Specifies whether the channel dimension of
- the `input` and output is the last dimension (default, or if `data_format`
- does not start with "NC"), or the second dimension (if `data_format`
- starts with "NC"). For N=1, the valid values are "NWC" (default) and
- "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For
- N=3, the valid value is "NDHWC".
-
-##### Returns:
-
- Tensor of rank N+2, of shape
- [batch_size] + output_spatial_shape + [num_channels]
-
- if data_format is None or does not start with "NC", or
-
- [batch_size, num_channels] + output_spatial_shape
-
- if data_format starts with "NC",
- where `output_spatial_shape` depends on the value of padding:
-
- If padding = "SAME":
- output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])
- If padding = "VALID":
- output_spatial_shape[i] =
- ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i])
- / strides[i]).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if arguments are invalid.
-
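-A minimal sketch (assumed usage): 2x2 max pooling with stride 2 halves each
-spatial dimension:
-
-```python
-import tensorflow as tf
-
-input = tf.random_normal([1, 32, 32, 3])
-out = tf.nn.pool(input, window_shape=[2, 2], pooling_type='MAX',
-                 padding='VALID', strides=[2, 2])
-print(out.get_shape())  # (1, 16, 16, 3)
-```
-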
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.quantized_conv2d.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.quantized_conv2d.md
deleted file mode 100644
index 0c9fd8f1db..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.quantized_conv2d.md
+++ /dev/null
@@ -1,39 +0,0 @@
-### `tf.nn.quantized_conv2d(input, filter, min_input, max_input, min_filter, max_filter, strides, padding, out_type=None, name=None)` {#quantized_conv2d}
-
-Computes a 2D convolution given quantized 4D input and filter tensors.
-
-The inputs are quantized tensors where the lowest quantized value maps to the
-real number given by the associated minimum, and the highest to the maximum.
-The quantized output must be interpreted the same way, by taking the returned
-minimum and maximum values into account.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
-* <b>`filter`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
- filter's input_depth dimension must match input's depth dimensions.
-* <b>`min_input`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized input value represents.
-* <b>`max_input`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized input value represents.
-* <b>`min_filter`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized filter value represents.
-* <b>`max_filter`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized filter value represents.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- tensor.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`out_type`</b>: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`. Defaults to `tf.qint32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, min_output, max_output).
-
-* <b>`output`</b>: A `Tensor` of type `out_type`.
-* <b>`min_output`</b>: A `Tensor` of type `float32`. The float value that the lowest quantized output value represents.
-* <b>`max_output`</b>: A `Tensor` of type `float32`. The float value that the highest quantized output value represents.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.relu6.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.relu6.md
deleted file mode 100644
index 9695e557eb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.relu6.md
+++ /dev/null
@@ -1,15 +0,0 @@
-### `tf.nn.relu6(features, name=None)` {#relu6}
-
-Computes Rectified Linear 6: `min(max(features, 0), 6)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,
- `int16`, or `int8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `features`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sufficient_statistics.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sufficient_statistics.md
deleted file mode 100644
index 84aa616331..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.sufficient_statistics.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.nn.sufficient_statistics(x, axes, shift=None, keep_dims=False, name=None)` {#sufficient_statistics}
-
-Calculate the sufficient statistics for the mean and variance of `x`.
-
-These sufficient statistics are computed using the one-pass algorithm on
-an input that's optionally shifted. See:
-https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`.
-* <b>`axes`</b>: Array of ints. Axes along which to compute mean and variance.
-* <b>`shift`</b>: A `Tensor` containing the value by which to shift the data for
- numerical stability, or `None` if no shift is to be performed. A shift
- close to the true mean provides the most numerically stable results.
-* <b>`keep_dims`</b>: produce statistics with the same dimensionality as the input.
-* <b>`name`</b>: Name used to scope the operations that compute the sufficient stats.
-
-##### Returns:
-
- Four `Tensor` objects of the same type as `x`:
-
- * the count (number of elements to average over).
- * the (possibly shifted) sum of the elements in the array.
- * the (possibly shifted) sum of squares of the elements in the array.
-  * the shift by which the mean must be corrected, or `None` if `shift` is `None`.
-
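-A minimal sketch (assumed usage), feeding the statistics into
-`tf.nn.normalize_moments` to recover the mean and variance:
-
-```python
-import tensorflow as tf
-
-x = tf.random_normal([32, 10])
-counts, mean_ss, var_ss, shift = tf.nn.sufficient_statistics(
-    x, axes=[0], shift=None)
-mean, variance = tf.nn.normalize_moments(counts, mean_ss, var_ss, shift)
-```
-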
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.weighted_cross_entropy_with_logits.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.weighted_cross_entropy_with_logits.md
deleted file mode 100644
index 12593f8412..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.nn.weighted_cross_entropy_with_logits.md
+++ /dev/null
@@ -1,52 +0,0 @@
-### `tf.nn.weighted_cross_entropy_with_logits(targets, logits, pos_weight, name=None)` {#weighted_cross_entropy_with_logits}
-
-Computes a weighted cross entropy.
-
-This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`
-allows one to trade off recall and precision by up- or down-weighting the
-cost of a positive error relative to a negative error.
-
-The usual cross-entropy cost is defined as:
-
- targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
-
-The argument `pos_weight` is used as a multiplier for the positive targets:
-
- targets * -log(sigmoid(logits)) * pos_weight +
- (1 - targets) * -log(1 - sigmoid(logits))
-
-For brevity, let `x = logits`, `z = targets`, `q = pos_weight`.
-The loss is:
-
- qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
- = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
- = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
-    = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
- = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
- = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))
-
-Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow,
-the implementation uses
-
- (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))
-
-`logits` and `targets` must have the same type and shape.
-
-##### Args:
-
-
-* <b>`targets`</b>: A `Tensor` of the same type and shape as `logits`.
-* <b>`logits`</b>: A `Tensor` of type `float32` or `float64`.
-* <b>`pos_weight`</b>: A coefficient to use on the positive examples.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same shape as `logits` with the componentwise
- weighted logistic losses.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `logits` and `targets` do not have the same shape.
-
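-A minimal sketch (assumed usage) that weights positive errors twice as
-heavily as negative ones:
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([0.5, -1.0, 2.0])
-targets = tf.constant([1.0, 0.0, 1.0])
-loss = tf.nn.weighted_cross_entropy_with_logits(targets=targets,
-                                                logits=logits,
-                                                pos_weight=2.0)
-```
-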
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.no_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.no_regularizer.md
deleted file mode 100644
index cb55675641..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.no_regularizer.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.no_regularizer(_)` {#no_regularizer}
-
-Use this function to prevent regularization of variables.
-
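-A minimal sketch (assumed usage), opting a scope's variables out of
-regularization:
-
-```python
-import tensorflow as tf
-
-with tf.variable_scope('embeddings', regularizer=tf.no_regularizer):
-    w = tf.get_variable('w', [1000, 64])  # no regularization loss is added
-```
-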
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.ones_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.ones_initializer.md
deleted file mode 100644
index 871e73ba25..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.ones_initializer.md
+++ /dev/null
@@ -1,15 +0,0 @@
-Initializer that generates tensors initialized to 1.
-- - -
-
-#### `tf.ones_initializer.__call__(shape, dtype=None, partition_info=None)` {#ones_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.ones_initializer.__init__(dtype=tf.float32)` {#ones_initializer.__init__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.python_io.TFRecordOptions.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.python_io.TFRecordOptions.md
deleted file mode 100644
index 3c05efe834..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.python_io.TFRecordOptions.md
+++ /dev/null
@@ -1,15 +0,0 @@
-Options used for manipulating TFRecord files.
-- - -
-
-#### `tf.python_io.TFRecordOptions.__init__(compression_type)` {#TFRecordOptions.__init__}
-
-
-
-
-- - -
-
-#### `tf.python_io.TFRecordOptions.get_compression_type_string(cls, options)` {#TFRecordOptions.get_compression_type_string}
-
-
-
-
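-A minimal sketch (assumed usage; the file path is hypothetical):
-
-```python
-import tensorflow as tf
-
-options = tf.python_io.TFRecordOptions(
-    compression_type=tf.python_io.TFRecordCompressionType.GZIP)
-with tf.python_io.TFRecordWriter('/tmp/data.tfrecord', options=options) as w:
-    w.write(b'serialized example bytes')
-```
-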
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.quantized_concat.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.quantized_concat.md
deleted file mode 100644
index 0bb94d727d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.quantized_concat.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.quantized_concat(concat_dim, values, input_mins, input_maxes, name=None)` {#quantized_concat}
-
-Concatenates quantized tensors along one dimension.
-
-##### Args:
-
-
-* <b>`concat_dim`</b>: A `Tensor` of type `int32`.
- 0-D. The dimension along which to concatenate. Must be in the
- range [0, rank(values)).
-* <b>`values`</b>: A list of at least 2 `Tensor` objects of the same type.
- The `N` Tensors to concatenate. Their ranks and types must match,
- and their sizes must match in all dimensions except `concat_dim`.
-* <b>`input_mins`</b>: A list with the same number of `Tensor` objects as `values` of `Tensor` objects of type `float32`.
- The minimum scalar values for each of the input tensors.
-* <b>`input_maxes`</b>: A list with the same number of `Tensor` objects as `values` of `Tensor` objects of type `float32`.
- The maximum scalar values for each of the input tensors.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, output_min, output_max).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `values`. A `Tensor` with the concatenation of values stacked along the
- `concat_dim` dimension. This tensor's shape matches that of `values` except
- in `concat_dim` where it has the sum of the sizes.
-* <b>`output_min`</b>: A `Tensor` of type `float32`. The float value that the minimum quantized output value represents.
-* <b>`output_max`</b>: A `Tensor` of type `float32`. The float value that the maximum quantized output value represents.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_prod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_prod.md
deleted file mode 100644
index 89810f8459..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reduce_prod.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.reduce_prod(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_prod}
-
-Computes the product of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.prod
-@end_compatibility
-
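-For example, a minimal sketch (assumed usage, not from the original docs):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[1., 2.],
-                 [3., 4.]])
-total = tf.reduce_prod(x)         # 24.0
-cols = tf.reduce_prod(x, axis=0)  # [3., 8.]
-rows = tf.reduce_prod(x, axis=1)  # [2., 12.]
-```
-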
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reset_default_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reset_default_graph.md
deleted file mode 100644
index ae5a906a0d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reset_default_graph.md
+++ /dev/null
@@ -1,10 +0,0 @@
-### `tf.reset_default_graph()` {#reset_default_graph}
-
-Clears the default graph stack and resets the global default graph.
-
-NOTE: The default graph is a property of the current thread. This
-function applies only to the current thread. Calling this function while
-a `tf.Session` or `tf.InteractiveSession` is active will result in undefined
-behavior. Using any previously created `tf.Operation` or `tf.Tensor` objects
-after calling this function will result in undefined behavior.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reverse.md
deleted file mode 100644
index d040ecf92a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.reverse.md
+++ /dev/null
@@ -1,64 +0,0 @@
-### `tf.reverse(tensor, axis, name=None)` {#reverse}
-
-Reverses specific dimensions of a tensor.
-
-NOTE: `tf.reverse` has now changed behavior in preparation for 1.0.
-`tf.reverse_v2` is currently an alias that will be deprecated before TF 1.0.
-
-Given a `tensor` and an `int32` tensor `axis` representing the set of
-dimensions of `tensor` to reverse, this operation reverses each dimension
-`i` for which there exists `j` such that `axis[j] == i`.
-
-`tensor` can have up to 8 dimensions. `axis` may contain zero or more
-entries. If an index is specified more than once, an `InvalidArgument`
-error is raised.
-
-For example:
-
-```prettyprint
-# tensor 't' is [[[[ 0, 1, 2, 3],
-# [ 4, 5, 6, 7],
-# [ 8, 9, 10, 11]],
-# [[12, 13, 14, 15],
-# [16, 17, 18, 19],
-# [20, 21, 22, 23]]]]
-# tensor 't' shape is [1, 2, 3, 4]
-
-# 'axis' is [3] (or 'axis' is [-1])
-reverse(t, axis) ==> [[[[ 3,  2,  1,  0],
-                        [ 7,  6,  5,  4],
-                        [11, 10,  9,  8]],
-                       [[15, 14, 13, 12],
-                        [19, 18, 17, 16],
-                        [23, 22, 21, 20]]]]
-
-# 'axis' is [1] (or 'axis' is [-3])
-reverse(t, axis) ==> [[[[12, 13, 14, 15],
-                        [16, 17, 18, 19],
-                        [20, 21, 22, 23]],
-                       [[ 0,  1,  2,  3],
-                        [ 4,  5,  6,  7],
-                        [ 8,  9, 10, 11]]]]
-
-# 'axis' is [2] (or 'axis' is [-2])
-reverse(t, axis) ==> [[[[ 8,  9, 10, 11],
-                        [ 4,  5,  6,  7],
-                        [ 0,  1,  2,  3]],
-                       [[20, 21, 22, 23],
-                        [16, 17, 18, 19],
-                        [12, 13, 14, 15]]]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int32`, `int64`, `bool`, `half`, `float32`, `float64`, `complex64`, `complex128`.
- Up to 8-D.
-* <b>`axis`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 1-D. The indices of the dimensions to reverse.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`. The same shape as `tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.round.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.round.md
deleted file mode 100644
index 1693dbe61f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.round.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.round(x, name=None)` {#round}
-
-Rounds the values of a tensor to the nearest integer, element-wise.
-
-Rounds half to even, also known as banker's rounding. If you want to round
-according to the current system rounding mode, use `tf::cint`.
-For example:
-
-```python
-# 'a' is [0.9, 2.5, 2.3, 1.5, -4.5]
-tf.round(a) ==> [ 1.0, 2.0, 2.0, 2.0, -4.0 ]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32` or `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of same shape and type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.rsqrt.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.rsqrt.md
deleted file mode 100644
index 5f76fcd593..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.rsqrt.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.rsqrt(x, name=None)` {#rsqrt}
-
-Computes the reciprocal of the square root of `x` element-wise.
-
-I.e., \\(y = 1 / \sqrt{x}\\).
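-
-For example (a minimal sketch; values worked by hand):
-
-```python
-# 'x' is [4., 16., 25.]
-tf.rsqrt(x)  # ==> [0.5, 0.25, 0.2]
-```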
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_add.md
deleted file mode 100644
index a8f8b7a9b0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_add.md
+++ /dev/null
@@ -1,46 +0,0 @@
-### `tf.scatter_add(ref, indices, updates, use_locking=None, name=None)` {#scatter_add}
-
-Adds sparse updates to a variable reference.
-
-This operation computes
-
- # Scalar indices
- ref[indices, ...] += updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] += updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-Duplicate entries are handled correctly: if multiple `indices` reference
-the same location, their contributions add.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterAdd.png" alt>
-</div>
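-
-A minimal sketch (the variable contents are illustrative):
-
-```python
-ref = tf.Variable([1, 2, 3, 4])
-update = tf.scatter_add(ref, [0, 2], [10, 20])
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(update))  # ==> [11, 2, 23, 4]
-```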
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of updated values to add to `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the addition will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_div.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_div.md
deleted file mode 100644
index ecd8e8b890..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.scatter_div.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.scatter_div(ref, indices, updates, use_locking=None, name=None)` {#scatter_div}
-
-Divides a variable reference by sparse updates.
-
-This operation computes
-
- # Scalar indices
- ref[indices, ...] /= updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] /= updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-Duplicate entries are handled correctly: if multiple `indices` reference
-the same location, their contributions divide.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of values that `ref` is divided by.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the operation will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sequence_mask.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sequence_mask.md
deleted file mode 100644
index 7c0144f2f4..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sequence_mask.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.sequence_mask(lengths, maxlen=None, dtype=tf.bool, name=None)` {#sequence_mask}
-
-Return a mask tensor representing the first N positions of each row.
-
-Example:
-
-```python
-tf.sequence_mask([1, 3, 2], 5) =
- [[True, False, False, False, False],
- [True, True, True, False, False],
- [True, True, False, False, False]]
-```
-
-##### Args:
-
-
-* <b>`lengths`</b>: 1D integer tensor, all its values < maxlen.
-* <b>`maxlen`</b>: scalar integer tensor, maximum length of each row. Default: use
- maximum over lengths.
-* <b>`dtype`</b>: output type of the resulting tensor.
-* <b>`name`</b>: name of the op.
-
-##### Returns:
-
- A 2D mask tensor, as shown in the example above, cast to specified dtype.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the arguments have invalid rank.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.set_random_seed.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.set_random_seed.md
deleted file mode 100644
index d8d3abc5eb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.set_random_seed.md
+++ /dev/null
@@ -1,98 +0,0 @@
-### `tf.set_random_seed(seed)` {#set_random_seed}
-
-Sets the graph-level random seed.
-
-Operations that rely on a random seed actually derive it from two seeds:
-the graph-level and operation-level seeds. This sets the graph-level seed.
-
-Its interactions with operation-level seeds are as follows:
-
- 1. If neither the graph-level nor the operation seed is set:
- A random seed is used for this op.
- 2. If the graph-level seed is set, but the operation seed is not:
- The system deterministically picks an operation seed in conjunction
- with the graph-level seed so that it gets a unique random sequence.
- 3. If the graph-level seed is not set, but the operation seed is set:
- A default graph-level seed and the specified operation seed are used to
- determine the random sequence.
- 4. If both the graph-level and the operation seed are set:
- Both seeds are used in conjunction to determine the random sequence.
-
-To illustrate the user-visible effects, consider these examples:
-
-To generate different sequences across sessions, set neither
-graph-level nor op-level seeds:
-
-```python
-a = tf.random_uniform([1])
-b = tf.random_normal([1])
-
-print("Session 1")
-with tf.Session() as sess1:
- print(sess1.run(a)) # generates 'A1'
- print(sess1.run(a)) # generates 'A2'
- print(sess1.run(b)) # generates 'B1'
- print(sess1.run(b)) # generates 'B2'
-
-print("Session 2")
-with tf.Session() as sess2:
- print(sess2.run(a)) # generates 'A3'
- print(sess2.run(a)) # generates 'A4'
- print(sess2.run(b)) # generates 'B3'
- print(sess2.run(b)) # generates 'B4'
-```
-
-To generate the same repeatable sequence for an op across sessions, set the
-seed for the op:
-
-```python
-a = tf.random_uniform([1], seed=1)
-b = tf.random_normal([1])
-
-# Repeatedly running this block with the same graph will generate the same
-# sequence of values for 'a', but different sequences of values for 'b'.
-print("Session 1")
-with tf.Session() as sess1:
- print(sess1.run(a)) # generates 'A1'
- print(sess1.run(a)) # generates 'A2'
- print(sess1.run(b)) # generates 'B1'
- print(sess1.run(b)) # generates 'B2'
-
-print("Session 2")
-with tf.Session() as sess2:
- print(sess2.run(a)) # generates 'A1'
- print(sess2.run(a)) # generates 'A2'
- print(sess2.run(b)) # generates 'B3'
- print(sess2.run(b)) # generates 'B4'
-```
-
-To make the random sequences generated by all ops be repeatable across
-sessions, set a graph-level seed:
-
-```python
-tf.set_random_seed(1234)
-a = tf.random_uniform([1])
-b = tf.random_normal([1])
-
-# Repeatedly running this block with the same graph will generate the same
-# sequences of 'a' and 'b'.
-print("Session 1")
-with tf.Session() as sess1:
- print(sess1.run(a)) # generates 'A1'
- print(sess1.run(a)) # generates 'A2'
- print(sess1.run(b)) # generates 'B1'
- print(sess1.run(b)) # generates 'B2'
-
-print("Session 2")
-with tf.Session() as sess2:
- print(sess2.run(a)) # generates 'A1'
- print(sess2.run(a)) # generates 'A2'
- print(sess2.run(b)) # generates 'B1'
- print(sess2.run(b)) # generates 'B2'
-```
-
-##### Args:
-
-
-* <b>`seed`</b>: integer.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_fill_empty_rows.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_fill_empty_rows.md
deleted file mode 100644
index 3ea1697f3d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_fill_empty_rows.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.sparse_fill_empty_rows(sp_input, default_value, name=None)` {#sparse_fill_empty_rows}
-
-Fills empty rows in the input 2-D `SparseTensor` with a default value.
-
-This op adds entries with the specified `default_value` at index
-`[row, 0]` for any row in the input that does not already have a value.
-
-For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:
-
- [0, 1]: a
- [0, 3]: b
- [2, 0]: c
- [3, 1]: d
-
-Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:
-
- [0, 1]: a
- [0, 3]: b
- [1, 0]: default_value
- [2, 0]: c
- [3, 1]: d
- [4, 0]: default_value
-
-Note that the input may have empty columns at the end, with no effect on
-this op.
-
-The output `SparseTensor` will be in row-major order and will have the
-same shape as the input.
-
-This op also returns an indicator vector such that
-
- empty_row_indicator[i] = True iff row i was an empty row.
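-
-A minimal sketch of the example above (the `SparseTensor` is built
-positionally, and "z" is an illustrative default value):
-
-```python
-sp_input = tf.SparseTensor([[0, 1], [0, 3], [2, 0], [3, 1]],
-                           ["a", "b", "c", "d"], [5, 6])
-filled, empty_row_indicator = tf.sparse_fill_empty_rows(sp_input, "z")
-# empty_row_indicator ==> [False, True, False, False, True]
-```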
-
-##### Args:
-
-
-* <b>`sp_input`</b>: A `SparseTensor` with shape `[N, M]`.
-* <b>`default_value`</b>: The value to fill for empty rows, with the same type as
-  `sp_input`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
-
-* <b>`sp_ordered_output`</b>: A `SparseTensor` with shape `[N, M]`, and with all empty
- rows filled in with `default_value`.
-* <b>`empty_row_indicator`</b>: A bool vector of length `N` indicating whether each
- input row was empty.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_reorder.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_reorder.md
deleted file mode 100644
index 1e7b8fd857..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_reorder.md
+++ /dev/null
@@ -1,41 +0,0 @@
-### `tf.sparse_reorder(sp_input, name=None)` {#sparse_reorder}
-
-Reorders a `SparseTensor` into the canonical, row-major ordering.
-
-Note that by convention, all sparse ops preserve the canonical ordering
-along increasing dimension number. The only time ordering can be violated
-is during manual manipulation of the indices and values to add entries.
-
-Reordering does not affect the shape of the `SparseTensor`.
-
-For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:
-
- [0, 3]: b
- [0, 1]: a
- [3, 1]: d
- [2, 0]: c
-
-then the output will be a `SparseTensor` of shape `[4, 5]` and
-`indices` / `values`:
-
- [0, 1]: a
- [0, 3]: b
- [2, 0]: c
- [3, 1]: d
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A `SparseTensor` with the same shape and non-empty values, but in
- canonical ordering.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_retain.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_retain.md
deleted file mode 100644
index dcaa303627..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_retain.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.sparse_retain(sp_input, to_retain)` {#sparse_retain}
-
-Retains specified non-empty values within a `SparseTensor`.
-
-For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values:
-
- [0, 1]: a
- [0, 3]: b
- [2, 0]: c
- [3, 1]: d
-
-and `to_retain = [True, False, False, True]`, then the output will
-be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:
-
- [0, 1]: a
- [3, 1]: d
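-
-A minimal sketch of this example in code:
-
-```python
-sp_input = tf.SparseTensor([[0, 1], [0, 3], [2, 0], [3, 1]],
-                           ["a", "b", "c", "d"], [4, 5])
-retained = tf.sparse_retain(sp_input, [True, False, False, True])
-# retained keeps only the entries at [0, 1] and [3, 1].
-```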
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor` with `N` non-empty elements.
-* <b>`to_retain`</b>: A bool vector of length `N` with `M` true values.
-
-##### Returns:
-
- A `SparseTensor` with the same shape as the input and `M` non-empty
- elements corresponding to the true positions in `to_retain`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_segment_sqrt_n.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_segment_sqrt_n.md
deleted file mode 100644
index 83ae3d67ec..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_segment_sqrt_n.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.sparse_segment_sqrt_n(data, indices, segment_ids, name=None)` {#sparse_segment_sqrt_n}
-
-Computes the sum along sparse segments of a tensor divided by the sqrt of N.
-
-N is the size of the segment being reduced.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor. Has same rank as `segment_ids`.
-* <b>`segment_ids`</b>: A `Tensor` of type `int32`.
- A 1-D tensor. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_to_dense.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_to_dense.md
deleted file mode 100644
index d4df5a9183..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.sparse_to_dense.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0, validate_indices=True, name=None)` {#sparse_to_dense}
-
-Converts a sparse representation into a dense tensor.
-
-Builds an array `dense` with shape `output_shape` such that
-
-```python
-# If sparse_indices is scalar
-dense[i] = (i == sparse_indices ? sparse_values : default_value)
-
-# If sparse_indices is a vector, then for each i
-dense[sparse_indices[i]] = sparse_values[i]
-
-# If sparse_indices is an n by d matrix, then for each i in [0, n)
-dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]
-```
-
-All other values in `dense` are set to `default_value`. If `sparse_values`
-is a scalar, all sparse indices are set to this single value.
-
-Indices should be sorted in lexicographic order, and indices must not
-contain any repeats. If `validate_indices` is True, these properties
-are checked during execution.
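-
-For example (a minimal sketch; the output is worked by hand):
-
-```python
-tf.sparse_to_dense([[0, 0], [1, 2]], [3, 4], [5, 6])
-# ==> [[5, 0, 0, 0],
-#      [0, 0, 6, 0],
-#      [0, 0, 0, 0]]
-```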
-
-##### Args:
-
-
-* <b>`sparse_indices`</b>: A 0-D, 1-D, or 2-D `Tensor` of type `int32` or `int64`.
- `sparse_indices[i]` contains the complete index where `sparse_values[i]`
- will be placed.
-* <b>`output_shape`</b>: A 1-D `Tensor` of the same type as `sparse_indices`. Shape
- of the dense output tensor.
-* <b>`sparse_values`</b>: A 0-D or 1-D `Tensor`. Values corresponding to each row of
- `sparse_indices`, or a scalar value to be used for all sparse indices.
-* <b>`default_value`</b>: A 0-D `Tensor` of the same type as `sparse_values`. Value
- to set for indices not specified in `sparse_indices`. Defaults to zero.
-* <b>`validate_indices`</b>: A boolean value. If True, indices are checked to make
- sure they are sorted in lexicographic order and that there are no repeats.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Dense `Tensor` of shape `output_shape`. Has the same type as
- `sparse_values`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.strided_slice.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.strided_slice.md
deleted file mode 100644
index 25abd415c3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.strided_slice.md
+++ /dev/null
@@ -1,86 +0,0 @@
-### `tf.strided_slice(input_, begin, end, strides=None, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, var=None, name=None)` {#strided_slice}
-
-Extracts a strided slice from a tensor.
-
-To a first order, this operation extracts a slice of size `end - begin`
-from a tensor `input`
-starting at the location specified by `begin`. The slice continues by adding
-`stride` to the `begin` index until all dimensions are not less than `end`.
-Note that components of stride can be negative, which causes a reverse
-slice.
-
-This operation can be thought of as an encoding of a numpy style sliced
-range. Given a python slice input[<spec0>, <spec1>, ..., <specn>],
-this function will be called as follows.
-
-`begin`, `end`, and `strides` will all be of length n. In general, n is
-not the same as the dimensionality of `input`.
-
-For the ith spec,
-`begin_mask`, `end_mask`, `ellipsis_mask`, `new_axis_mask`,
-and `shrink_axis_mask` will have the ith bit corresponding to
-the ith spec.
-
-If the ith bit of `begin_mask` is non-zero, `begin[i]` is ignored and
-the fullest possible range in that dimension is used instead.
-`end_mask` works analogously, except with the end range.
-
-`foo[5:,:,:3]` on a 7x8x9 tensor is equivalent to `foo[5:7,0:8,0:3]`.
-`foo[::-1]` reverses a tensor with shape 8.
-
-
-If the ith bit of `ellipsis_mask` is set, as many unspecified dimensions
-as needed will be inserted between other dimensions. Only one
-non-zero bit is allowed in `ellipsis_mask`.
-
-For example `foo[3:5,...,4:5]` on a shape 10x3x3x10 tensor is
-equivalent to `foo[3:5,:,:,4:5]` and
-`foo[3:5,...]` is equivalent to `foo[3:5,:,:,:]`.
-
-If the ith bit of `new_axis_mask` is one, then a `begin`,
-`end`, and `stride` are ignored and a new length 1 dimension is
-added at this point in the output tensor.
-
-For example, `foo[3:5,4]` on a 10x8 tensor produces a shape 2 tensor (its
-encoding has `shrink_axis_mask` being 1<<1 == 2), whereas `foo[3:5,4:5]`
-produces a shape 2x1 tensor.
-
-If the ith bit of `shrink_axis_mask` is one, then `begin[i]`,
-`end[i]`, and `stride[i]` are used to do a slice in the appropriate
-dimension, but the output tensor will be reduced in dimensionality
-by one. This is only valid if the ith slice selects exactly one element.
-
-NOTE: `begin` and `end` are zero-indexed.
-`strides` entries must be non-zero.
-
-
-```python
-# 'input' is [[[1, 1, 1], [2, 2, 2]],
-# [[3, 3, 3], [4, 4, 4]],
-# [[5, 5, 5], [6, 6, 6]]]
-tf.strided_slice(input, [1, 0, 0], [2, 1, 3], [1, 1, 1]) ==> [[[3, 3, 3]]]
-tf.strided_slice(input, [1, 0, 0], [2, 2, 3], [1, 1, 1]) ==> [[[3, 3, 3],
- [4, 4, 4]]]
-tf.strided_slice(input, [1, 1, 0], [2, -1, 3], [1, -1, 1]) ==>[[[4, 4, 4],
- [3, 3, 3]]]
-```
-
-##### Args:
-
-
-* <b>`input_`</b>: A `Tensor`.
-* <b>`begin`</b>: An `int32` or `int64` `Tensor`.
-* <b>`end`</b>: An `int32` or `int64` `Tensor`.
-* <b>`strides`</b>: An `int32` or `int64` `Tensor`.
-* <b>`begin_mask`</b>: An `int32` mask.
-* <b>`end_mask`</b>: An `int32` mask.
-* <b>`ellipsis_mask`</b>: An `int32` mask.
-* <b>`new_axis_mask`</b>: An `int32` mask.
-* <b>`shrink_axis_mask`</b>: An `int32` mask.
-* <b>`var`</b>: The variable corresponding to `input_` or None
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.subtract.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.subtract.md
deleted file mode 100644
index 93a00899c9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.subtract.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.subtract(x, y, name=None)` {#subtract}
-
-Returns x - y element-wise.
-
-*NOTE*: `tf.subtract` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.summary.histogram.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.summary.histogram.md
deleted file mode 100644
index 19df48fd3f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.summary.histogram.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### `tf.summary.histogram(name, values, collections=None)` {#histogram}
-
-Outputs a `Summary` protocol buffer with a histogram.
-
-The generated
-[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
-has one summary value containing a histogram for `values`.
-
-This op reports an `InvalidArgument` error if any value is not finite.
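-
-A minimal usage sketch (the name and values are illustrative):
-
-```python
-activations = tf.random_normal([100])
-hist = tf.summary.histogram("activations", activations)
-```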
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as a series name in
- TensorBoard.
-* <b>`values`</b>: A real numeric `Tensor`. Any shape. Values to use to
- build the histogram.
-* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
- added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.summary.merge.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.summary.merge.md
deleted file mode 100644
index 5a7bd8a0f5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.summary.merge.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.summary.merge(inputs, collections=None, name=None)` {#merge}
-
-Merges summaries.
-
-This op creates a
-[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
-protocol buffer that contains the union of all the values in the input
-summaries.
-
-When the Op is run, it reports an `InvalidArgument` error if multiple values
-in the summaries to merge use the same tag.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of `string` `Tensor` objects containing serialized `Summary`
- protocol buffers.
-* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
- added to these collections. Defaults to `[]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer resulting from the merging.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.tensordot.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.tensordot.md
deleted file mode 100644
index 76c811e213..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.tensordot.md
+++ /dev/null
@@ -1,54 +0,0 @@
-### `tf.tensordot(a, b, axes, name=None)` {#tensordot}
-
-Tensor contraction of a and b along specified axes.
-
-Tensordot (also known as tensor contraction) sums the product of elements
-from `a` and `b` over the indices specified by `a_axes` and `b_axes`.
-The lists `a_axes` and `b_axes` specify those pairs of axes along which to
-contract the tensors. The axis `a_axes[i]` of `a` must have the same dimension
-as axis `b_axes[i]` of `b` for all `i` in `range(0, len(a_axes))`. The lists
-`a_axes` and `b_axes` must have identical length and consist of unique
-integers that specify valid axes for each of the tensors.
-
-This operation corresponds to `numpy.tensordot(a, b, axes)`.
-
-Example 1: When `a` and `b` are matrices (order 2), the case `axes = 1`
-is equivalent to matrix multiplication.
-
-Example 2: When `a` and `b` are matrices (order 2), the case
-`axes = [[1], [0]]` is equivalent to matrix multiplication.
-
-Example 3: Suppose that \\(a_{ijk}\\) and \\(b_{lmn}\\) represent two
-tensors of order 3. Then, `contract(a, b, [0], [2])` is the order 4 tensor
-\\(c_{jklm}\\) whose entry
-corresponding to the indices \\((j,k,l,m)\\) is given by:
-
-\\( c_{jklm} = \sum_i a_{ijk} b_{lmi} \\).
-
-In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`.
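-
-A minimal sketch of Examples 1 and 2 (shapes shown in comments):
-
-```python
-a = tf.ones([2, 3])
-b = tf.ones([3, 4])
-c = tf.tensordot(a, b, 1)            # matrix multiply; shape [2, 4]
-d = tf.tensordot(a, b, [[1], [0]])   # same contraction, explicit axes
-```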
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` of type `float32` or `float64`.
-* <b>`b`</b>: `Tensor` with the same type as `a`.
-* <b>`axes`</b>: Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k].
- If axes is a scalar, sum over the last N axes of a and the first N axes
- of b in order.
- If axes is a list or `Tensor` the first and second row contain the set of
- unique integers specifying axes along which the contraction is computed,
- for `a` and `b`, respectively. The number of axes for `a` and `b` must
- be equal.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `a`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shapes of `a`, `b`, and `axes` are incompatible.
-* <b>`IndexError`</b>: If the values in axes exceed the rank of the corresponding
- tensor.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md
deleted file mode 100644
index f25ccc018a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.AdagradDAOptimizer.md
+++ /dev/null
@@ -1,194 +0,0 @@
-Adagrad Dual Averaging algorithm for sparse linear models.
-
-See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf).
-
-This optimizer takes care of regularization of unseen features in a
-mini-batch by updating them when they are seen with a closed-form update
-rule that is equivalent to having updated them on every mini-batch.
-
-AdagradDA is typically used when there is a need for large sparsity in the
-trained model. This optimizer only guarantees sparsity for linear models. Be
-careful when using AdagradDA for deep networks as it will require careful
-initialization of the gradient accumulators for it to train.
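-
-A minimal usage sketch (the learning rate and regularization strength are
-illustrative, and `loss` is assumed to be defined by the model):
-
-```python
-global_step = tf.Variable(0, trainable=False)
-opt = tf.train.AdagradDAOptimizer(
-    learning_rate=0.1,
-    global_step=global_step,
-    l1_regularization_strength=0.01)
-train_op = opt.minimize(loss, global_step=global_step)
-```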
-- - -
-
-#### `tf.train.AdagradDAOptimizer.__init__(learning_rate, global_step, initial_gradient_squared_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='AdagradDA')` {#AdagradDAOptimizer.__init__}
-
-Construct a new AdagradDA optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`global_step`</b>: A `Tensor` containing the current training step number.
-* <b>`initial_gradient_squared_accumulator_value`</b>: A floating point value.
- Starting value for the accumulators, must be positive.
-* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "AdagradDA".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `initial_gradient_squared_accumulator_value` is
- invalid.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdagradDAOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Default to the
- name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdagradDAOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
-  under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything else than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.get_name()` {#AdagradDAOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.get_slot(var, name)` {#AdagradDAOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.get_slot_names()` {#AdagradDAOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdagradDAOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Scaffold.get_or_default.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Scaffold.get_or_default.md
deleted file mode 100644
index ecb8dc31eb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.Scaffold.get_or_default.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.train.Scaffold.get_or_default(arg_name, collection_key, default_constructor)` {#Scaffold.get_or_default}
-
-Get from cache or create a default operation.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.SessionRunArgs.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.SessionRunArgs.md
deleted file mode 100644
index 85695d22be..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.SessionRunArgs.md
+++ /dev/null
@@ -1,64 +0,0 @@
-Represents arguments to be added to a `Session.run()` call.
-
-Args:
-  fetches: Exactly like the 'fetches' argument to `Session.run()`.
- Can be a single tensor or op, a list of 'fetches' or a dictionary
- of fetches. For example:
- fetches = global_step_tensor
- fetches = [train_op, summary_op, global_step_tensor]
- fetches = {'step': global_step_tensor, 'summ': summary_op}
- Note that this can recurse as expected:
- fetches = {'step': global_step_tensor,
- 'ops': [train_op, check_nan_op]}
-  feed_dict: Exactly like the `feed_dict` argument to `Session.run()`
- options: Exactly like the `options` argument to `Session.run()`, i.e., a
- config_pb2.RunOptions proto.
-- - -
-
-#### `tf.train.SessionRunArgs.__getnewargs__()` {#SessionRunArgs.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.__getstate__()` {#SessionRunArgs.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.__new__(cls, fetches, feed_dict=None, options=None)` {#SessionRunArgs.__new__}
-
-
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.__repr__()` {#SessionRunArgs.__repr__}
-
-Return a nicely formatted representation string.
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.feed_dict` {#SessionRunArgs.feed_dict}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.fetches` {#SessionRunArgs.fetches}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.options` {#SessionRunArgs.options}
-
-Alias for field number 2
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.SummarySaverHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.SummarySaverHook.md
deleted file mode 100644
index 2d09da7b0c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.SummarySaverHook.md
+++ /dev/null
@@ -1,79 +0,0 @@
-Saves summaries every N steps.
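-
-A minimal usage sketch (the output directory and save interval are
-illustrative):
-
-```python
-hook = tf.train.SummarySaverHook(
-    save_steps=100,
-    output_dir="/tmp/train",
-    summary_op=tf.summary.merge_all())
-```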
-- - -
-
-#### `tf.train.SummarySaverHook.__init__(save_steps=None, save_secs=None, output_dir=None, summary_writer=None, scaffold=None, summary_op=None)` {#SummarySaverHook.__init__}
-
-Initializes a `SummarySaver` monitor.
-
-##### Args:
-
-
-* <b>`save_steps`</b>: `int`, save summaries every N steps. Exactly one of
- `save_secs` and `save_steps` should be set.
-* <b>`save_secs`</b>: `int`, save summaries every N seconds.
-* <b>`output_dir`</b>: `string`, the directory to save the summaries to. Only used
- if no `summary_writer` is supplied.
-* <b>`summary_writer`</b>: `SummaryWriter`. If `None` and an `output_dir` was passed,
- one will be created accordingly.
-* <b>`scaffold`</b>: `Scaffold` to get summary_op if it's not provided.
-* <b>`summary_op`</b>: `Tensor` of type `string` containing the serialized `Summary`
- protocol buffer or a list of `Tensor`. They are most likely an output
- by TF summary methods like `tf.summary.scalar` or
- `tf.summary.merge_all`. It can be passed in as one tensor; if more
- than one, they must be passed in as a list.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: Exactly one of scaffold or summary_op should be set.
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.after_create_session(session, coord)` {#SummarySaverHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.after_run(run_context, run_values)` {#SummarySaverHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.before_run(run_context)` {#SummarySaverHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.begin()` {#SummarySaverHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.end(session=None)` {#SummarySaverHook.end}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.batch.md
deleted file mode 100644
index 965f4f2eef..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.batch.md
+++ /dev/null
@@ -1,81 +0,0 @@
-### `tf.train.batch(tensors, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#batch}
-
-Creates batches of tensors in `tensors`.
-
-The argument `tensors` can be a list or a dictionary of tensors.
-The value returned by the function will be of the same type
-as `tensors`.
-
-This function is implemented using a queue. A `QueueRunner` for the
-queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-If `enqueue_many` is `False`, `tensors` is assumed to represent a single
-example. An input tensor with shape `[x, y, z]` will be output as a tensor
-with shape `[batch_size, x, y, z]`.
-
-If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of
-examples, where the first dimension is indexed by example, and all members of
-`tensors` should have the same size in the first dimension. If an input
-tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x,
-y, z]`. The `capacity` argument controls the how long the prefetching is
-allowed to grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception, however, if this operation is used in your main thread
-you are responsible for catching this yourself.
-
-*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
-(i) the `shapes` argument is passed, or (ii) all of the tensors in
-`tensors` must have fully-defined shapes. `ValueError` will be
-raised if neither of these conditions holds.
-
-If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
-tensors is known, but individual dimensions may have shape `None`.
-In this case, for each enqueue the dimensions with value `None`
-may have a variable length; upon dequeue, the output tensors will be padded
-on the right to the maximum shape of the tensors in the current minibatch.
-For numbers, this padding takes value 0. For strings, this padding is
-the empty string. See `PaddingFIFOQueue` for more info.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queue is closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape` method, will have a first `Dimension` value of `None`, and
-operations that depend on fixed batch_size would fail.
-
-Note: if `num_epochs` is not `None`, this function creates local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
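-
-A minimal usage sketch (assuming `image` and `label` tensors produced by an
-input pipeline, e.g. a reader and decoder for a single example):
-
-```python
-images, labels = tf.train.batch(
-    [image, label], batch_size=32, num_threads=4, capacity=1000)
-```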
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same types as `tensors` (except if
- the input is a list of one element, then it returns a tensor, not a list).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.do_quantize_training_on_graphdef.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.do_quantize_training_on_graphdef.md
deleted file mode 100644
index 6fbb908133..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.do_quantize_training_on_graphdef.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.train.do_quantize_training_on_graphdef(input_graph, num_bits)` {#do_quantize_training_on_graphdef}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.import_meta_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.import_meta_graph.md
deleted file mode 100644
index d0fa7f551e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.train.import_meta_graph.md
+++ /dev/null
@@ -1,70 +0,0 @@
-### `tf.train.import_meta_graph(meta_graph_or_file, clear_devices=False, import_scope=None, **kwargs)` {#import_meta_graph}
-
-Recreates a Graph saved in a `MetaGraphDef` proto.
-
-This function takes a `MetaGraphDef` protocol buffer as input. If
-the argument is a file containing a `MetaGraphDef` protocol buffer,
-it constructs a protocol buffer from the file content. The function
-then adds all the nodes from the `graph_def` field to the
-current graph, recreates all the collections, and returns a saver
-constructed from the `saver_def` field.
-
-In combination with `export_meta_graph()`, this function can be used to
-
-* Serialize a graph along with other Python objects such as `QueueRunner`,
- `Variable` into a `MetaGraphDef`.
-
-* Restart training from a saved graph and checkpoints.
-
-* Run inference from a saved graph and checkpoints.
-
-```Python
-...
-# Create a saver.
-saver = tf.train.Saver(...variables...)
-# Remember the training_op we want to run by adding it to a collection.
-tf.add_to_collection('train_op', train_op)
-sess = tf.Session()
-for step in xrange(1000000):
- sess.run(train_op)
- if step % 1000 == 0:
- # Saves checkpoint, which by default also exports a meta_graph
- # named 'my-model-global_step.meta'.
- saver.save(sess, 'my-model', global_step=step)
-```
-
-Later we can continue training from this saved `meta_graph` without building
-the model from scratch.
-
-```Python
-with tf.Session() as sess:
- new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')
- new_saver.restore(sess, 'my-save-dir/my-model-10000')
- # tf.get_collection() returns a list. In this example we only want the
- # first one.
- train_op = tf.get_collection('train_op')[0]
- for step in xrange(1000000):
- sess.run(train_op)
-```
-
-NOTE: Restarting training from saved `meta_graph` only works if the
-device assignments have not changed.
-
-##### Args:
-
-
-* <b>`meta_graph_or_file`</b>: `MetaGraphDef` protocol buffer or filename (including
- the path) containing a `MetaGraphDef`.
-* <b>`clear_devices`</b>: Whether or not to clear the device field for an `Operation`
- or `Tensor` during import.
-* <b>`import_scope`</b>: Optional `string`. Name scope to add. Only used when
- initializing from protocol buffer.
-* <b>`**kwargs`</b>: Optional keyed arguments.
-
-##### Returns:
-
- A saver constructed from `saver_def` in `MetaGraphDef` or None.
-
- A None value is returned if no variables exist in the `MetaGraphDef`
- (i.e., there are no variables to restore).
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.truncatemod.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.truncatemod.md
deleted file mode 100644
index c75108fc55..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.truncatemod.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.truncatemod(x, y, name=None)` {#truncatemod}
-
-Returns element-wise remainder of division. This emulates C semantics in
-that the result is consistent with a truncating divide, i.e.
-`truncatediv(x, y) * y + truncatemod(x, y) = x`.
-
-*NOTE*: `tf.truncatemod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
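-
-For example (a minimal sketch; values worked by hand, with the sign of the
-result following `x` as in C):
-
-```python
-# 'x' is [7, -7]
-tf.truncatemod(x, 3)  # ==> [1, -1]
-```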
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.variable_axis_size_partitioner.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.variable_axis_size_partitioner.md
deleted file mode 100644
index 5d8822e83c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.variable_axis_size_partitioner.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.variable_axis_size_partitioner(max_shard_bytes, axis=0, bytes_per_string_element=16, max_shards=None)` {#variable_axis_size_partitioner}
-
-Get a partitioner for VariableScope to keep shards below `max_shard_bytes`.
-
-This partitioner will shard a Variable along one axis, attempting to keep
-the maximum shard size below `max_shard_bytes`. In practice, this is not
-always possible when sharding along only one axis. When this happens,
-this axis is sharded as much as possible (i.e., every dimension becomes
-a separate shard).
-
-If the partitioner hits the `max_shards` limit, then each shard may end up
-larger than `max_shard_bytes`. By default `max_shards` equals `None` and no
-limit on the number of shards is enforced.
-
-One reasonable value for `max_shard_bytes` is `(64 << 20) - 1`, or almost
-`64MB`, to keep below the protobuf byte limit.
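-
-A minimal usage sketch (the scope name and variable shape are illustrative):
-
-```python
-partitioner = tf.variable_axis_size_partitioner((64 << 20) - 1)
-with tf.variable_scope("embedding", partitioner=partitioner):
-  weights = tf.get_variable("weights", shape=[100000, 128])
-```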
-
-##### Args:
-
-
-* <b>`max_shard_bytes`</b>: The maximum size any given shard is allowed to be.
-* <b>`axis`</b>: The axis to partition along. Default: outermost axis.
-* <b>`bytes_per_string_element`</b>: If the `Variable` is of type string, this provides
- an estimate of how large each scalar in the `Variable` is.
-* <b>`max_shards`</b>: An `int`, the maximum number of shards to create, taking
-  precedence over `max_shard_bytes`.
-
-##### Returns:
-
- A partition function usable as the `partitioner` argument to
- `variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If any of the byte counts are non-positive.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.zeta.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.zeta.md
deleted file mode 100644
index ed66237d38..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard8/tf.zeta.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.zeta(x, q, name=None)` {#zeta}
-
-Compute the Hurwitz zeta function \\(\zeta(x, q)\\).
-
-The Hurwitz zeta function is defined as:
-
-```
-\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`q`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.DeviceSpec.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.DeviceSpec.md
deleted file mode 100644
index 2355a91e54..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.DeviceSpec.md
+++ /dev/null
@@ -1,147 +0,0 @@
-Represents a (possibly partial) specification for a TensorFlow device.
-
-`DeviceSpec`s are used throughout TensorFlow to describe where state is stored
-and computations occur. Using `DeviceSpec` allows you to parse device spec
-strings to verify their validity, merge them or compose them programmatically.
-
-Example:
-
-```python
-# Place the operations on device "GPU:0" in the "ps" job.
-device_spec = DeviceSpec(job="ps", device_type="GPU", device_index=0)
-with tf.device(device_spec):
- # Both my_var and squared_var will be placed on /job:ps/device:GPU:0.
- my_var = tf.Variable(..., name="my_variable")
- squared_var = tf.square(my_var)
-```
-
-If a `DeviceSpec` is partially specified, it will be merged with other
-`DeviceSpec`s according to the scope in which it is defined. `DeviceSpec`
-components defined in inner scopes take precedence over those defined in
-outer scopes.
-
-```python
-with tf.device(DeviceSpec(job="train")):
-  with tf.device(DeviceSpec(job="ps", device_type="GPU", device_index=0)):
-    # Nodes created here will be assigned to /job:ps/device:GPU:0.
-    with tf.device(DeviceSpec(device_type="GPU", device_index=1)):
-      # Nodes created here will be assigned to /job:train/device:GPU:1.
-```
-
-A `DeviceSpec` consists of 5 components -- each of
-which is optionally specified:
-
-* Job: The job name.
-* Replica: The replica index.
-* Task: The task index.
-* Device type: The device type string (e.g. "CPU" or "GPU").
-* Device index: The device index.
-- - -
-
-#### `tf.DeviceSpec.__init__(job=None, replica=None, task=None, device_type=None, device_index=None)` {#DeviceSpec.__init__}
-
-Create a new `DeviceSpec` object.
-
-##### Args:
-
-
-* <b>`job`</b>: string. Optional job name.
-* <b>`replica`</b>: int. Optional replica index.
-* <b>`task`</b>: int. Optional task index.
-* <b>`device_type`</b>: Optional device type string (e.g. "CPU" or "GPU")
-* <b>`device_index`</b>: int. Optional device index. If left unspecified, the
-    spec matches any device index.
-
-
-- - -
-
-#### `tf.DeviceSpec.from_string(spec)` {#DeviceSpec.from_string}
-
-Construct a `DeviceSpec` from a string.
-
-##### Args:
-
-
-* <b>`spec`</b>: a string of the form
-    /job:<name>/replica:<id>/task:<id>/device:CPU:<id>
-    or
-    /job:<name>/replica:<id>/task:<id>/device:GPU:<id>
-    (CPU and GPU are mutually exclusive).
-    All entries are optional.
-
-##### Returns:
-
- A DeviceSpec.
-
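-For example (a sketch, using the spec format described above):
-
-```python
-spec = tf.DeviceSpec.from_string("/job:ps/replica:0/task:1/device:GPU:0")
-# spec.job == "ps", spec.replica == 0, spec.task == 1,
-# spec.device_type == "GPU", spec.device_index == 0
-```
-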
-
-- - -
-
-#### `tf.DeviceSpec.job` {#DeviceSpec.job}
-
-
-
-
-- - -
-
-#### `tf.DeviceSpec.merge_from(dev)` {#DeviceSpec.merge_from}
-
-Merge the properties of "dev" into this `DeviceSpec`.
-
-##### Args:
-
-
-* <b>`dev`</b>: a `DeviceSpec`.
-
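-For example, merging a partial spec into another (a sketch; `merge_from`
-mutates the receiver in place):
-
-```python
-base = tf.DeviceSpec.from_string("/job:train/replica:0")
-base.merge_from(tf.DeviceSpec(device_type="GPU", device_index=1))
-base.to_string()  # ==> "/job:train/replica:0/device:GPU:1"
-```
-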
-
-- - -
-
-#### `tf.DeviceSpec.parse_from_string(spec)` {#DeviceSpec.parse_from_string}
-
-Parse a `DeviceSpec` name into its components.
-
-##### Args:
-
-
-* <b>`spec`</b>: a string of the form
-    /job:<name>/replica:<id>/task:<id>/device:CPU:<id>
-    or
-    /job:<name>/replica:<id>/task:<id>/device:GPU:<id>
-    (CPU and GPU are mutually exclusive).
-    All entries are optional.
-
-##### Returns:
-
- The `DeviceSpec`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the spec was not valid.
-
-
-- - -
-
-#### `tf.DeviceSpec.replica` {#DeviceSpec.replica}
-
-
-
-
-- - -
-
-#### `tf.DeviceSpec.task` {#DeviceSpec.task}
-
-
-
-
-- - -
-
-#### `tf.DeviceSpec.to_string()` {#DeviceSpec.to_string}
-
-Return a string representation of this `DeviceSpec`.
-
-##### Returns:
-
- a string of the form
- /job:<name>/replica:<id>/task:<id>/device:<device_type>:<id>.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.FixedLenFeature.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.FixedLenFeature.__new__.md
deleted file mode 100644
index f7838d1884..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.FixedLenFeature.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.FixedLenFeature.__new__(_cls, shape, dtype, default_value=None)` {#FixedLenFeature.__new__}
-
-Create new instance of FixedLenFeature(shape, dtype, default_value)
-
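-For example (a sketch; `serialized` is assumed to be a batch of serialized
-`tf.train.Example` protos):
-
-```python
-features = {
-    "age": tf.FixedLenFeature(shape=[], dtype=tf.int64, default_value=-1),
-}
-parsed = tf.parse_example(serialized, features)
-```
-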
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.NotDifferentiable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.NotDifferentiable.md
deleted file mode 100644
index c77655a1d3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.NotDifferentiable.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.NotDifferentiable(op_type)` {#NotDifferentiable}
-
-Specifies that ops of type `op_type` are not differentiable.
-
-This function should *not* be used for operations that have a
-well-defined gradient that is not yet implemented.
-
-This function is only used when defining a new op type. It may be
-used for ops such as `tf.size()` that are not differentiable. For
-example:
-
-```python
-tf.NotDifferentiable("Size")
-```
-
-The gradient computed for `op_type` will then propagate zeros.
-
-For ops that have a well-defined gradient but are not yet implemented,
-no declaration should be made, and an error *must* be thrown if
-an attempt to request its gradient is made.
-
-##### Args:
-
-
-* <b>`op_type`</b>: The string type of an operation. This corresponds to the
- `OpDef.name` field for the proto that defines the operation.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `op_type` is not a string.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Session.reset.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Session.reset.md
deleted file mode 100644
index 4c47c02264..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.Session.reset.md
+++ /dev/null
@@ -1,29 +0,0 @@
-#### `tf.Session.reset(target, containers=None, config=None)` {#Session.reset}
-
-Resets resource containers on `target`, and closes all connected sessions.
-
-A resource container is distributed across all workers in the
-same cluster as `target`. When a resource container on `target`
-is reset, resources associated with that container will be cleared.
-In particular, all Variables in the container will become undefined:
-they lose their values and shapes.
-
-NOTE:
-(i) reset() is currently only implemented for distributed sessions.
-(ii) Any sessions on the master named by `target` will be closed.
-
-If no resource containers are provided, all containers are reset.
-
-##### Args:
-
-
-* <b>`target`</b>: The execution engine to connect to.
-* <b>`containers`</b>: A list of resource container name strings, or `None`
-    if all the containers are to be reset.
-* <b>`config`</b>: (Optional.) Protocol buffer with configuration options.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- resetting containers.
-
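-For example (a sketch; the gRPC target is a placeholder):
-
-```python
-# Clear the "shared_queues" container on every worker in the cluster,
-# closing any sessions connected to the master at this target.
-tf.Session.reset("grpc://localhost:2222", containers=["shared_queues"])
-```
-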
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.assign_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.assign_add.md
deleted file mode 100644
index 60e2875407..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.assign_add.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.assign_add(ref, value, use_locking=None, name=None)` {#assign_add}
-
-Update `ref` by adding `value` to it.
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types:
- `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`,
- `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`value`</b>: A `Tensor`. Must have the same type as `ref`.
- The value to be added to the variable.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the addition will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as "ref". Returned as a convenience for operations that want
- to use the new value after the variable has been updated.
-
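-For example (a sketch, using the TF 1.x variable API):
-
-```python
-v = tf.Variable(10, dtype=tf.int32)
-incremented = tf.assign_add(v, 5)
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(incremented))  # ==> 15
-```
-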
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.clip_by_average_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.clip_by_average_norm.md
deleted file mode 100644
index 4598e183d8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.clip_by_average_norm.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.clip_by_average_norm(t, clip_norm, name=None)` {#clip_by_average_norm}
-
-Clips tensor values to a maximum average L2-norm.
-
-Given a tensor `t`, and a maximum clip value `clip_norm`, this operation
-normalizes `t` so that its average L2-norm is less than or equal to
-`clip_norm`. Specifically, if the average L2-norm is already less than or
-equal to `clip_norm`, then `t` is not modified. If the average L2-norm is
-greater than `clip_norm`, then this operation returns a tensor of the same
-type and shape as `t` with its values set to:
-
-`t * clip_norm / l2norm_avg(t)`
-
-In this case, the average L2-norm of the output tensor is `clip_norm`.
-
-This operation is typically used to clip gradients before applying them with
-an optimizer.
-
-##### Args:
-
-
-* <b>`t`</b>: A `Tensor`.
-* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A clipped `Tensor`.
-
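-For example (a sketch; the average L2-norm here is the L2-norm divided by
-the number of elements of `t`):
-
-```python
-t = tf.constant([[3.0, 4.0]])  # L2-norm 5, 2 elements => average norm 2.5
-clipped = tf.clip_by_average_norm(t, clip_norm=1.0)
-# clipped == t * 1.0 / 2.5 == [[1.2, 1.6]]
-```
-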
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.entropy.entropy_shannon.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.entropy.entropy_shannon.md
deleted file mode 100644
index 489d94783d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.entropy.entropy_shannon.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.contrib.bayesflow.entropy.entropy_shannon(p, z=None, n=None, seed=None, form=None, name='entropy_shannon')` {#entropy_shannon}
-
-Monte Carlo or deterministic computation of Shannon's entropy.
-
-Depending on the kwarg `form`, this `Op` returns either the analytic entropy
-of the distribution `p`, or the sampled entropy:
-
-```
--n^{-1} sum_{i=1}^n p.log_prob(z_i), where z_i ~ p,
- \approx - E_p[ Log[p(Z)] ]
- = Entropy[p]
-```
-
-The user supplies either a `Tensor` of samples `z`, or the number of samples
-to draw, `n`.
-
-##### Args:
-
-
-* <b>`p`</b>: `tf.contrib.distributions.Distribution`
-* <b>`z`</b>: `Tensor` of samples from `p`, produced by `p.sample(n)` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`form`</b>: Either `ELBOForms.analytic_entropy` (use formula for entropy of `q`)
- or `ELBOForms.sample` (sample estimate of entropy), or `ELBOForms.default`
- (attempt analytic entropy, fallback on sample).
- Default value is `ELBOForms.default`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with same `dtype` as `p`, and shape equal to `p.batch_shape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `form` not handled by this function.
-* <b>`ValueError`</b>: If `form` is `ELBOForms.analytic_entropy` and `n` was provided.
-
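-For example (a sketch; the analytic entropy of a unit normal is
-`0.5 * log(2 * pi * e) ~ 1.4189` nats):
-
-```python
-ds = tf.contrib.distributions
-entropy = tf.contrib.bayesflow.entropy
-
-p = ds.Normal(loc=0., scale=1.)
-h = entropy.entropy_shannon(p)  # uses the analytic entropy when available
-h_mc = entropy.entropy_shannon(
-    p, n=1000, form=entropy.ELBOForms.sample)  # sampled estimate
-```
-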
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace.md
deleted file mode 100644
index 10f7a67c63..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace(log_f, log_p, sampling_dist_q, z=None, n=None, seed=None, name='expectation_importance_sampler_logspace')` {#expectation_importance_sampler_logspace}
-
-Importance sampling with a positive function, in log-space.
-
-With `p(z) := exp{log_p(z)}`, and `f(z) = exp{log_f(z)}`, this `Op`
-returns
-
-```
-Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ], z_i ~ q,
-\approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]
-= Log[E_p[f(Z)]]
-```
-
-This integral is done in log-space with max-subtraction to better handle the
-often extreme values that `f(z) p(z) / q(z)` can take on.
-
-In contrast to `expectation_importance_sampler`, this `Op` returns values in
-log-space.
-
-
-The user supplies either a `Tensor` of samples `z`, or the number of samples
-to draw, `n`.
-
-##### Args:
-
-
-* <b>`log_f`</b>: Callable mapping samples from `sampling_dist_q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_f` works "just like" `sampling_dist_q.log_prob`.
-* <b>`log_p`</b>: Callable mapping samples from `sampling_dist_q` to `Tensors` with
- shape broadcastable to `q.batch_shape`.
- For example, `log_p` works "just like" `q.log_prob`.
-* <b>`sampling_dist_q`</b>: The sampling distribution.
- `tf.contrib.distributions.Distribution`.
- `float64` `dtype` recommended.
- `log_p` and `q` should be supported on the same set.
-* <b>`z`</b>: `Tensor` of samples from `q`, produced by `q.sample(n)` for some `n`.
-* <b>`n`</b>: Integer `Tensor`. Number of samples to generate if `z` is not provided.
-* <b>`seed`</b>: Python integer to seed the random number generator.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- Logarithm of the importance sampling estimate. `Tensor` with `shape` equal
- to batch shape of `q`, and `dtype` = `q.dtype`.
-
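-For example, estimating `Log[E_p[Z**2]]` for a standard normal `p` with a
-wider sampling distribution `q` (a sketch):
-
-```python
-ds = tf.contrib.distributions
-mc = tf.contrib.bayesflow.monte_carlo
-
-p = ds.Normal(loc=0., scale=1.)
-q = ds.Normal(loc=0., scale=2.)
-log_e = mc.expectation_importance_sampler_logspace(
-    log_f=lambda z: tf.log(tf.square(z)),  # f(z) = z**2, so E_p[f(Z)] = 1
-    log_p=p.log_prob,
-    sampling_dist_q=q,
-    n=10000)
-# log_e ~ Log[1] = 0
-```
-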
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.stochastic_tensor.get_current_value_type.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.stochastic_tensor.get_current_value_type.md
deleted file mode 100644
index 98bd7241bb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.stochastic_tensor.get_current_value_type.md
+++ /dev/null
@@ -1,4 +0,0 @@
-### `tf.contrib.bayesflow.stochastic_tensor.get_current_value_type()` {#get_current_value_type}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.variational_inference.register_prior.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.variational_inference.register_prior.md
deleted file mode 100644
index 45059c3199..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.bayesflow.variational_inference.register_prior.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.contrib.bayesflow.variational_inference.register_prior(variational, prior)` {#register_prior}
-
-Associate a variational `StochasticTensor` with a `Distribution` prior.
-
-This is a helper function used in conjunction with `elbo` that allows users
-to specify the mapping between variational distributions and their priors
-without having to pass in `variational_with_prior` explicitly.
-
-##### Args:
-
-
-* <b>`variational`</b>: `StochasticTensor` q(Z). Approximating distribution.
-* <b>`prior`</b>: `Distribution` p(Z). Prior distribution.
-
-##### Returns:
-
- None
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if variational is not a `StochasticTensor` or `prior` is not
- a `Distribution`.
-
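-For example (a sketch; `mu` and `sigma` stand in for trainable variational
-parameter tensors):
-
-```python
-ds = tf.contrib.distributions
-st = tf.contrib.bayesflow.stochastic_tensor
-vi = tf.contrib.bayesflow.variational_inference
-
-with st.value_type(st.SampleValue()):
-  q_z = st.StochasticTensor(ds.Normal(loc=mu, scale=sigma))
-vi.register_prior(q_z, ds.Normal(loc=0., scale=1.))
-```
-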
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.BetaWithSoftplusConcentration.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.BetaWithSoftplusConcentration.md
deleted file mode 100644
index e01f84653b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.BetaWithSoftplusConcentration.md
+++ /dev/null
@@ -1,597 +0,0 @@
-Beta with softplus transform of `concentration1` and `concentration0`.
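-
-For example (a sketch): the softplus transform lets the underlying
-parameters range over all reals while keeping the effective concentrations
-positive:
-
-```python
-ds = tf.contrib.distributions
-# softplus(-1.) ~ 0.31 and softplus(2.) ~ 2.13 are the effective
-# concentration1 and concentration0.
-dist = ds.BetaWithSoftplusConcentration(concentration1=-1., concentration0=2.)
-```
-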
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.__init__(concentration1, concentration0, validate_args=False, allow_nan_stats=True, name='BetaWithSoftplusConcentration')` {#BetaWithSoftplusConcentration.__init__}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.allow_nan_stats` {#BetaWithSoftplusConcentration.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.batch_shape` {#BetaWithSoftplusConcentration.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.batch_shape_tensor(name='batch_shape_tensor')` {#BetaWithSoftplusConcentration.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.cdf(value, name='cdf')` {#BetaWithSoftplusConcentration.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.concentration0` {#BetaWithSoftplusConcentration.concentration0}
-
-Concentration parameter associated with a `0` outcome.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.concentration1` {#BetaWithSoftplusConcentration.concentration1}
-
-Concentration parameter associated with a `1` outcome.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.copy(**override_parameters_kwargs)` {#BetaWithSoftplusConcentration.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.covariance(name='covariance')` {#BetaWithSoftplusConcentration.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.dtype` {#BetaWithSoftplusConcentration.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.entropy(name='entropy')` {#BetaWithSoftplusConcentration.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.event_shape` {#BetaWithSoftplusConcentration.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.event_shape_tensor(name='event_shape_tensor')` {#BetaWithSoftplusConcentration.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.is_continuous` {#BetaWithSoftplusConcentration.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.is_scalar_batch(name='is_scalar_batch')` {#BetaWithSoftplusConcentration.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.is_scalar_event(name='is_scalar_event')` {#BetaWithSoftplusConcentration.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.log_cdf(value, name='log_cdf')` {#BetaWithSoftplusConcentration.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.log_prob(value, name='log_prob')` {#BetaWithSoftplusConcentration.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.log_survival_function(value, name='log_survival_function')` {#BetaWithSoftplusConcentration.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.mean(name='mean')` {#BetaWithSoftplusConcentration.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.mode(name='mode')` {#BetaWithSoftplusConcentration.mode}
-
-Mode.
-
-Additional documentation from `Beta`:
-
-Note: The mode is undefined when `concentration1 <= 1` or
-`concentration0 <= 1`. If `self.allow_nan_stats` is `True`, `NaN`
-is used for undefined modes. If `self.allow_nan_stats` is `False` an
-exception is raised when one or more modes are undefined.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.name` {#BetaWithSoftplusConcentration.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#BetaWithSoftplusConcentration.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.param_static_shapes(cls, sample_shape)` {#BetaWithSoftplusConcentration.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.parameters` {#BetaWithSoftplusConcentration.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.prob(value, name='prob')` {#BetaWithSoftplusConcentration.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Beta`:
-
-Note: `x` must have dtype `self.dtype` and be in
-`[0, 1]`. It must have a shape compatible with `self.batch_shape()`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.reparameterization_type` {#BetaWithSoftplusConcentration.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.sample(sample_shape=(), seed=None, name='sample')` {#BetaWithSoftplusConcentration.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.stddev(name='stddev')` {#BetaWithSoftplusConcentration.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.survival_function(value, name='survival_function')` {#BetaWithSoftplusConcentration.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.total_concentration` {#BetaWithSoftplusConcentration.total_concentration}
-
-Sum of concentration parameters.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.validate_args` {#BetaWithSoftplusConcentration.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.BetaWithSoftplusConcentration.variance(name='variance')` {#BetaWithSoftplusConcentration.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.MultivariateNormalTriL.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.MultivariateNormalTriL.md
deleted file mode 100644
index 4bd0c96189..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.MultivariateNormalTriL.md
+++ /dev/null
@@ -1,750 +0,0 @@
-The multivariate normal distribution on `R^k`.
-
-The Multivariate Normal distribution is defined over `R^k` and parameterized
-by a (batch of) length-`k` `loc` vector (aka "mu") and a (batch of) `k x k`
-`scale` matrix; `covariance = scale @ scale.T` where `@` denotes
-matrix-multiplication.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(x; loc, scale) = exp(-0.5 ||y||**2) / Z,
-y = inv(scale) @ (x - loc),
-Z = (2 pi)**(0.5 k) |det(scale)|,
-```
-
-where:
-
-* `loc` is a vector in `R^k`,
-* `scale` is a linear operator in `R^{k x k}`, `cov = scale @ scale.T`,
-* `Z` denotes the normalization constant, and,
-* `||y||**2` denotes the squared Euclidean norm of `y`.
-
-A (non-batch) `scale` matrix is:
-
-```none
-scale = scale_tril
-```
-
-where `scale_tril` is a lower-triangular `k x k` matrix with non-zero diagonal,
-i.e., `tf.diag_part(scale_tril) != 0`.
-
-Additional leading dimensions (if any) will index batches.
-
-The MultivariateNormal distribution is a member of the [location-scale
-family](https://en.wikipedia.org/wiki/Location-scale_family), i.e., it can be
-constructed as,
-
-```none
-X ~ MultivariateNormal(loc=0, scale=1) # Identity scale, zero shift.
-Y = scale @ X + loc
-```
-
-Trainable (batch) lower-triangular matrices can be created with
-`ds.matrix_diag_transform()` and/or `ds.fill_lower_triangular()`.
-
-#### Examples
-
-```python
-ds = tf.contrib.distributions
-
-# Initialize a single 3-variate Gaussian.
-mu = [1., 2, 3]
-cov = [[ 0.36, 0.12, 0.06],
- [ 0.12, 0.29, -0.13],
- [ 0.06, -0.13, 0.26]]
-scale = tf.cholesky(cov)
-# ==> [[ 0.6, 0. , 0. ],
-# [ 0.2, 0.5, 0. ],
-# [ 0.1, -0.3, 0.4]])
-mvn = ds.MultivariateNormalTriL(
- loc=mu,
- scale_tril=scale)
-
-mvn.mean().eval()
-# ==> [1., 2, 3]
-
-# Covariance agrees with cholesky(cov) parameterization.
-mvn.covariance().eval()
-# ==> [[ 0.36, 0.12, 0.06],
-# [ 0.12, 0.29, -0.13],
-# [ 0.06, -0.13, 0.26]]
-
-# Compute the pdf of an observation in `R^3`; return a scalar.
-mvn.prob([-1., 0, 1]).eval() # shape: []
-
-# Initialize a 2-batch of 3-variate Gaussians.
-mu = [[1., 2, 3],
- [11, 22, 33]] # shape: [2, 3]
-tril = ... # shape: [2, 3, 3], lower triangular, non-zero diagonal.
-mvn = ds.MultivariateNormalTriL(
- loc=mu,
- scale_tril=tril)
-
-# Compute the pdf of two `R^3` observations; return a length-2 vector.
-x = [[-0.9, 0, 0.1],
- [-10, 0, 9]] # shape: [2, 3]
-mvn.prob(x).eval() # shape: [2]
-
-```
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.__init__(loc=None, scale_tril=None, validate_args=False, allow_nan_stats=True, name='MultivariateNormalTriL')` {#MultivariateNormalTriL.__init__}
-
-Construct Multivariate Normal distribution on `R^k`.
-
-The `batch_shape` is the broadcast shape between `loc` and `scale`
-arguments.
-
-The `event_shape` is given by the last dimension of `loc` or the last
-dimension of the matrix implied by `scale`.
-
-Recall that `covariance = scale @ scale.T`. A (non-batch) `scale` matrix is:
-
-```none
-scale = scale_tril
-```
-
-where `scale_tril` is a lower-triangular `k x k` matrix with non-zero
-diagonal, i.e., `tf.diag_part(scale_tril) != 0`.
-
-Additional leading dimensions (if any) will index batches.
-
-##### Args:
-
-
-* <b>`loc`</b>: Floating-point `Tensor`. If this is set to `None`, `loc` is
- implicitly `0`. When specified, may have shape `[B1, ..., Bb, k]` where
- `b >= 0` and `k` is the event size.
-* <b>`scale_tril`</b>: Floating-point, lower-triangular `Tensor` with non-zero
- diagonal elements. `scale_tril` has shape `[B1, ..., Bb, k, k]` where
- `b >= 0` and `k` is the event size.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`,
- statistics (e.g., mean, mode, variance) use the value "`NaN`" to
- indicate the result is undefined. When `False`, an exception is raised
- if one or more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if neither `loc` nor `scale_tril` are specified.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.allow_nan_stats` {#MultivariateNormalTriL.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.batch_shape` {#MultivariateNormalTriL.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.batch_shape_tensor(name='batch_shape_tensor')` {#MultivariateNormalTriL.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.bijector` {#MultivariateNormalTriL.bijector}
-
-Function transforming x => y.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.cdf(value, name='cdf')` {#MultivariateNormalTriL.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.copy(**override_parameters_kwargs)` {#MultivariateNormalTriL.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.covariance(name='covariance')` {#MultivariateNormalTriL.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.det_covariance(name='det_covariance')` {#MultivariateNormalTriL.det_covariance}
-
-Determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.distribution` {#MultivariateNormalTriL.distribution}
-
-Base distribution, p(x).
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.dtype` {#MultivariateNormalTriL.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.entropy(name='entropy')` {#MultivariateNormalTriL.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.event_shape` {#MultivariateNormalTriL.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.event_shape_tensor(name='event_shape_tensor')` {#MultivariateNormalTriL.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.is_continuous` {#MultivariateNormalTriL.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.is_scalar_batch(name='is_scalar_batch')` {#MultivariateNormalTriL.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.is_scalar_event(name='is_scalar_event')` {#MultivariateNormalTriL.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.loc` {#MultivariateNormalTriL.loc}
-
-The `loc` `Tensor` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.log_cdf(value, name='log_cdf')` {#MultivariateNormalTriL.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.log_det_covariance(name='log_det_covariance')` {#MultivariateNormalTriL.log_det_covariance}
-
-Log of determinant of covariance matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.log_prob(value, name='log_prob')` {#MultivariateNormalTriL.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.log_survival_function(value, name='log_survival_function')` {#MultivariateNormalTriL.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.mean(name='mean')` {#MultivariateNormalTriL.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.mode(name='mode')` {#MultivariateNormalTriL.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.name` {#MultivariateNormalTriL.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#MultivariateNormalTriL.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.param_static_shapes(cls, sample_shape)` {#MultivariateNormalTriL.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.parameters` {#MultivariateNormalTriL.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.prob(value, name='prob')` {#MultivariateNormalTriL.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `MultivariateNormalLinearOperator`:
-
-`value` is a batch vector with compatible shape if `value` is a `Tensor` whose
-shape can be broadcast up to either:
-
-```python
-self.batch_shape + self.event_shape
-```
-
-or
-
-```python
-[M1, ..., Mm] + self.batch_shape + self.event_shape
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.reparameterization_type` {#MultivariateNormalTriL.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.sample(sample_shape=(), seed=None, name='sample')` {#MultivariateNormalTriL.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.scale` {#MultivariateNormalTriL.scale}
-
-The `scale` `LinearOperator` in `Y = scale @ X + loc`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.stddev(name='stddev')` {#MultivariateNormalTriL.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.survival_function(value, name='survival_function')` {#MultivariateNormalTriL.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.validate_args` {#MultivariateNormalTriL.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.MultivariateNormalTriL.variance(name='variance')` {#MultivariateNormalTriL.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.Poisson.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.Poisson.md
deleted file mode 100644
index 1a52643a32..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.Poisson.md
+++ /dev/null
@@ -1,613 +0,0 @@
-Poisson distribution.
-
-The Poisson distribution is parameterized by an event `rate` parameter.
-
-#### Mathematical Details
-
-The probability mass function (pmf) is,
-
-```none
-pmf(k; lambda, k >= 0) = (lambda^k / k!) / Z
-Z = exp(lambda).
-```
-
-where `rate = lambda` and `Z` is the normalizing constant.
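-
-For example (a sketch; evaluating the op requires the TF 1.x `Session` API):
-
-```python
-ds = tf.contrib.distributions
-poisson = ds.Poisson(rate=3.)
-pmf = poisson.prob(2.)  # ==> exp(-3) * 3**2 / 2! ~ 0.224
-```
-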
-- - -
-
-#### `tf.contrib.distributions.Poisson.__init__(rate, validate_args=False, allow_nan_stats=True, name='Poisson')` {#Poisson.__init__}
-
-Initialize a batch of Poisson distributions.
-
-##### Args:
-
-
-* <b>`rate`</b>: Floating point tensor, the rate parameter of the
- distribution(s). `rate` must be positive.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.allow_nan_stats` {#Poisson.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.batch_shape` {#Poisson.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.batch_shape_tensor(name='batch_shape_tensor')` {#Poisson.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.cdf(value, name='cdf')` {#Poisson.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-
-Additional documentation from `Poisson`:
-
-Note that the input value must be a non-negative floating point tensor with
-dtype `dtype` and whose shape can be broadcast with `self.rate`. `x` is only
-legal if it is non-negative and its components are equal to integer values.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.copy(**override_parameters_kwargs)` {#Poisson.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.covariance(name='covariance')` {#Poisson.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.dtype` {#Poisson.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.entropy(name='entropy')` {#Poisson.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.event_shape` {#Poisson.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.event_shape_tensor(name='event_shape_tensor')` {#Poisson.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.is_continuous` {#Poisson.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.is_scalar_batch(name='is_scalar_batch')` {#Poisson.is_scalar_batch}
-
-Indicates that `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.is_scalar_event(name='is_scalar_event')` {#Poisson.is_scalar_event}
-
-Indicates that `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.log_cdf(value, name='log_cdf')` {#Poisson.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-
-Additional documentation from `Poisson`:
-
-Note that the input `value` must be a non-negative floating point tensor with
-dtype `dtype` whose shape can be broadcast with `self.rate`, and whose
-components must be integer-valued.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.log_prob(value, name='log_prob')` {#Poisson.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Poisson`:
-
-Note that the input `value` must be a non-negative floating point tensor with
-dtype `dtype` whose shape can be broadcast with `self.rate`, and whose
-components must be integer-valued.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.log_survival_function(value, name='log_survival_function')` {#Poisson.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.mean(name='mean')` {#Poisson.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.mode(name='mode')` {#Poisson.mode}
-
-Mode.
-
-Additional documentation from `Poisson`:
-
-Note: when `rate` is an integer, there are actually two modes: `rate`
-and `rate - 1`. In this case we return the larger, i.e., `rate`.
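-
-For instance (a minimal sketch):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Poisson(rate=4.)
-# The pmf is equal at 3 and 4; the larger mode is returned.
-mode = dist.mode()  # ==> 4.0
-```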
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.name` {#Poisson.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#Poisson.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.param_static_shapes(cls, sample_shape)` {#Poisson.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.parameters` {#Poisson.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.prob(value, name='prob')` {#Poisson.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-
-Additional documentation from `Poisson`:
-
-Note that the input `value` must be a non-negative floating point tensor with
-dtype `dtype` whose shape can be broadcast with `self.rate`, and whose
-components must be integer-valued.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
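-
-A minimal usage sketch:
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Poisson(rate=3.)
-x = tf.constant([0., 1., 2.])  # non-negative, integer-valued components
-probs = dist.prob(x)           # shape [3]
-```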
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.rate` {#Poisson.rate}
-
-Rate parameter.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.reparameterization_type` {#Poisson.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.sample(sample_shape=(), seed=None, name='sample')` {#Poisson.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
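-
-For example (a sketch of the shape semantics):
-
-```python
-import tensorflow as tf
-
-dist = tf.contrib.distributions.Poisson(rate=[3., 7.])  # batch_shape [2]
-samples = dist.sample([10, 5], seed=42)
-# samples.shape == [10, 5, 2]: sample_shape + batch_shape (+ event_shape).
-```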
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.stddev(name='stddev')` {#Poisson.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.survival_function(value, name='survival_function')` {#Poisson.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.validate_args` {#Poisson.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.Poisson.variance(name='variance')` {#Poisson.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.WishartFull.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.WishartFull.md
deleted file mode 100644
index eefa558fca..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.WishartFull.md
+++ /dev/null
@@ -1,669 +0,0 @@
-The matrix Wishart distribution on positive definite matrices.
-
-This distribution is defined by a scalar degrees of freedom `df` and a
-symmetric, positive definite scale matrix.
-
-Evaluation of the pdf, determinant, and sampling are all `O(k^3)` operations
-where `(k, k)` is the event space shape.
-
-#### Mathematical Details
-
-The probability density function (pdf) is,
-
-```none
-pdf(X; df, scale) = det(X)**(0.5 (df-k-1)) exp(-0.5 tr[inv(scale) X]) / Z
-Z = 2**(0.5 df k) |det(scale)|**(0.5 df) Gamma_k(0.5 df)
-```
-
-where:
-* `df >= k` denotes the degrees of freedom,
-* `scale` is a symmetric, positive definite, `k x k` matrix,
-* `Z` is the normalizing constant, and,
-* `Gamma_k` is the [multivariate Gamma function](
- https://en.wikipedia.org/wiki/Multivariate_gamma_function).
-
-#### Examples
-
-```python
-# Initialize a single 3x3 Wishart with a full scale matrix and 5
-# degrees of freedom. (*)
-df = 5
-scale = ... # Shape is [3, 3]; positive definite.
-dist = tf.contrib.distributions.WishartFull(df=df, scale=scale)
-
-# Evaluate this on an observation in R^{3x3}, returning a scalar.
-x = ... # A 3x3 positive definite matrix.
-dist.prob(x) # Shape is [], a scalar.
-
-# Evaluate this on two observations, each in R^{3x3}, returning a length-two
-# Tensor.
-x = [x0, x1] # Shape is [2, 3, 3].
-dist.prob(x) # Shape is [2].
-
-# Initialize two 3x3 Wisharts with full scale matrices.
-df = [5, 4]
-scale = ... # Shape is [2, 3, 3].
-dist = tf.contrib.distributions.WishartFull(df=df, scale=scale)
-
-# Evaluate this on four observations.
-x = [[x0, x1], [x2, x3]] # Shape is [2, 2, 3, 3]; xi is positive definite.
-dist.prob(x) # Shape is [2, 2].
-
-# (*) - To efficiently create a trainable covariance matrix, see the example
-# in tf.contrib.distributions.matrix_diag_transform.
-```
-- - -
-
-#### `tf.contrib.distributions.WishartFull.__init__(df, scale, cholesky_input_output_matrices=False, validate_args=False, allow_nan_stats=True, name='WishartFull')` {#WishartFull.__init__}
-
-Construct Wishart distributions.
-
-##### Args:
-
-
-* <b>`df`</b>: `float` or `double` `Tensor`. Degrees of freedom, must be greater
- than or equal to the dimension of the scale matrix.
-* <b>`scale`</b>: `float` or `double` `Tensor`. The symmetric positive definite
- scale matrix of the distribution.
-* <b>`cholesky_input_output_matrices`</b>: Python `bool`. Any function whose
- input or output is a matrix assumes the input is Cholesky factored and
- returns a Cholesky factored matrix. For example, when
- `cholesky_input_output_matrices=True`, `log_prob` takes a Cholesky factor
- as input and `sample_n` returns a Cholesky factor.
-* <b>`validate_args`</b>: Python `bool`, default `False`. When `True` distribution
- parameters are checked for validity despite possibly degrading runtime
- performance. When `False` invalid inputs may silently render incorrect
- outputs.
-* <b>`allow_nan_stats`</b>: Python `bool`, default `True`. When `True`, statistics
- (e.g., mean, mode, variance) use the value "`NaN`" to indicate the
- result is undefined. When `False`, an exception is raised if one or
- more of the statistic's batch members are undefined.
-* <b>`name`</b>: Python `str` name prefixed to Ops created by this class.
-
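-A minimal sketch of the `cholesky_input_output_matrices` flag (the matrices
-here are illustrative):
-
-```python
-import tensorflow as tf
-ds = tf.contrib.distributions
-
-scale = tf.eye(3)  # symmetric positive definite
-dist = ds.WishartFull(df=5., scale=scale,
-                      cholesky_input_output_matrices=True)
-x = 2. * tf.eye(3)  # a positive definite observation
-lp = dist.log_prob(tf.cholesky(x))  # input is a Cholesky factor
-chol_sample = dist.sample()         # output is a Cholesky factor
-```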
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.allow_nan_stats` {#WishartFull.allow_nan_stats}
-
-Python `bool` describing behavior when a stat is undefined.
-
-Stats return +/- infinity when it makes sense. E.g., the variance of a
-Cauchy distribution is infinity. However, sometimes the statistic is
-undefined, e.g., if a distribution's pdf does not achieve a maximum within
-the support of the distribution, the mode is undefined. If the mean is
-undefined, then by definition the variance is undefined. E.g. the mean for
-Student's T for df = 1 is undefined (no clear way to say it is either + or -
-infinity), so the variance = E[(X - mean)**2] is also undefined.
-
-##### Returns:
-
-
-* <b>`allow_nan_stats`</b>: Python `bool`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.batch_shape` {#WishartFull.batch_shape}
-
-Shape of a single sample from a single event index as a `TensorShape`.
-
-May be partially defined or unknown.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.batch_shape_tensor(name='batch_shape_tensor')` {#WishartFull.batch_shape_tensor}
-
-Shape of a single sample from a single event index as a 1-D `Tensor`.
-
-The batch dimensions are indexes into independent, non-identical
-parameterizations of this distribution.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`batch_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.cdf(value, name='cdf')` {#WishartFull.cdf}
-
-Cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-cdf(x) := P[X <= x]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`cdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.cholesky_input_output_matrices` {#WishartFull.cholesky_input_output_matrices}
-
-Boolean indicating if `Tensor` input/outputs are Cholesky factorized.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.copy(**override_parameters_kwargs)` {#WishartFull.copy}
-
-Creates a deep copy of the distribution.
-
-Note: the copy distribution may continue to depend on the original
-initialization arguments.
-
-##### Args:
-
-
-* <b>`**override_parameters_kwargs`</b>: String/value dictionary of initialization
- arguments to override with new values.
-
-##### Returns:
-
-
-* <b>`distribution`</b>: A new instance of `type(self)` initialized from the union
- of self.parameters and override_parameters_kwargs, i.e.,
- `dict(self.parameters, **override_parameters_kwargs)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.covariance(name='covariance')` {#WishartFull.covariance}
-
-Covariance.
-
-Covariance is (possibly) defined only for non-scalar-event distributions.
-
-For example, for a length-`k`, vector-valued distribution, it is calculated
-as,
-
-```none
-Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]
-```
-
-where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E`
-denotes expectation.
-
-Alternatively, for non-vector, multivariate distributions (e.g.,
-matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices
-under some vectorization of the events, i.e.,
-
-```none
-Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]
-```
-
-where `Cov` is a (batch of) `k' x k'` matrices,
-`0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function
-mapping indices of this distribution's event dimensions to indices of a
-length-`k'` vector.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`covariance`</b>: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']`
- where the first `n` dimensions are batch coordinates and
- `k' = reduce_prod(self.event_shape)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.df` {#WishartFull.df}
-
-Wishart distribution degree(s) of freedom.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.dimension` {#WishartFull.dimension}
-
-Dimension of underlying vector space. The `p` in `R^(p*p)`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.dtype` {#WishartFull.dtype}
-
-The `DType` of `Tensor`s handled by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.entropy(name='entropy')` {#WishartFull.entropy}
-
-Shannon entropy in nats.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.event_shape` {#WishartFull.event_shape}
-
-Shape of a single sample from a single batch as a `TensorShape`.
-
-May be partially defined or unknown.
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `TensorShape`, possibly unknown.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.event_shape_tensor(name='event_shape_tensor')` {#WishartFull.event_shape_tensor}
-
-Shape of a single sample from a single batch as a 1-D int32 `Tensor`.
-
-##### Args:
-
-
-* <b>`name`</b>: name to give to the op
-
-##### Returns:
-
-
-* <b>`event_shape`</b>: `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.is_continuous` {#WishartFull.is_continuous}
-
-
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.is_scalar_batch(name='is_scalar_batch')` {#WishartFull.is_scalar_batch}
-
-Indicates whether `batch_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_batch`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.is_scalar_event(name='is_scalar_event')` {#WishartFull.is_scalar_event}
-
-Indicates whether `event_shape == []`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`is_scalar_event`</b>: `bool` scalar `Tensor`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.log_cdf(value, name='log_cdf')` {#WishartFull.log_cdf}
-
-Log cumulative distribution function.
-
-Given random variable `X`, the cumulative distribution function `cdf` is:
-
-```
-log_cdf(x) := Log[ P[X <= x] ]
-```
-
-Often, a numerical approximation can be used for `log_cdf(x)` that yields
-a more accurate answer than simply taking the logarithm of the `cdf` when
-`x << -1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`logcdf`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.log_normalization(name='log_normalization')` {#WishartFull.log_normalization}
-
-Computes the log normalizing constant, log(Z).
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.log_prob(value, name='log_prob')` {#WishartFull.log_prob}
-
-Log probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`log_prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.log_survival_function(value, name='log_survival_function')` {#WishartFull.log_survival_function}
-
-Log survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-log_survival_function(x) = Log[ P[X > x] ]
- = Log[ 1 - P[X <= x] ]
- = Log[ 1 - cdf(x) ]
-```
-
-Typically, different numerical approximations can be used for the log
-survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.mean(name='mean')` {#WishartFull.mean}
-
-Mean.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.mean_log_det(name='mean_log_det')` {#WishartFull.mean_log_det}
-
-Computes E[log(det(X))] under this Wishart distribution.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.mode(name='mode')` {#WishartFull.mode}
-
-Mode.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.name` {#WishartFull.name}
-
-Name prepended to all ops created by this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.param_shapes(cls, sample_shape, name='DistributionParamShapes')` {#WishartFull.param_shapes}
-
-Shapes of parameters given the desired shape of a call to `sample()`.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`.
-
-Subclasses should override class method `_param_shapes`.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `Tensor` or python list/tuple. Desired shape of a call to
- `sample()`.
-* <b>`name`</b>: name to prepend ops with.
-
-##### Returns:
-
- `dict` of parameter name to `Tensor` shapes.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.param_static_shapes(cls, sample_shape)` {#WishartFull.param_static_shapes}
-
-param_shapes with static (i.e. `TensorShape`) shapes.
-
-This is a class method that describes what key/value arguments are required
-to instantiate the given `Distribution` so that a particular shape is
-returned for that instance's call to `sample()`. Assumes that the sample's
-shape is known statically.
-
-Subclasses should override class method `_param_shapes` to return
-constant-valued tensors when constant values are fed.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: `TensorShape` or python list/tuple. Desired shape of a call
- to `sample()`.
-
-##### Returns:
-
- `dict` of parameter name to `TensorShape`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sample_shape` is a `TensorShape` and is not fully defined.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.parameters` {#WishartFull.parameters}
-
-Dictionary of parameters used to instantiate this `Distribution`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.prob(value, name='prob')` {#WishartFull.prob}
-
-Probability density/mass function (depending on `is_continuous`).
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`prob`</b>: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with
- values of type `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.reparameterization_type` {#WishartFull.reparameterization_type}
-
-Describes how samples from the distribution are reparameterized.
-
-Currently this is one of the static instances
-`distributions.FULLY_REPARAMETERIZED`
-or `distributions.NOT_REPARAMETERIZED`.
-
-##### Returns:
-
- An instance of `ReparameterizationType`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.sample(sample_shape=(), seed=None, name='sample')` {#WishartFull.sample}
-
-Generate samples of the specified shape.
-
-Note that a call to `sample()` without arguments will generate a single
-sample.
-
-##### Args:
-
-
-* <b>`sample_shape`</b>: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
-* <b>`seed`</b>: Python integer seed for RNG
-* <b>`name`</b>: name to give to the op.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` with prepended dimensions `sample_shape`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.scale()` {#WishartFull.scale}
-
-Wishart distribution scale matrix.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.scale_operator_pd` {#WishartFull.scale_operator_pd}
-
-Wishart distribution scale matrix as an OperatorPD.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.stddev(name='stddev')` {#WishartFull.stddev}
-
-Standard deviation.
-
-Standard deviation is defined as,
-
-```none
-stddev = E[(X - E[X])**2]**0.5
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `stddev.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`stddev`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.survival_function(value, name='survival_function')` {#WishartFull.survival_function}
-
-Survival function.
-
-Given random variable `X`, the survival function is defined:
-
-```
-survival_function(x) = P[X > x]
- = 1 - P[X <= x]
- = 1 - cdf(x).
-```
-
-##### Args:
-
-
-* <b>`value`</b>: `float` or `double` `Tensor`.
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
- `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type
- `self.dtype`.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.validate_args` {#WishartFull.validate_args}
-
-Python `bool` indicating possibly expensive checks are enabled.
-
-
-- - -
-
-#### `tf.contrib.distributions.WishartFull.variance(name='variance')` {#WishartFull.variance}
-
-Variance.
-
-Variance is defined as,
-
-```none
-Var = E[(X - E[X])**2]
-```
-
-where `X` is the random variable associated with this distribution, `E`
-denotes expectation, and `Var.shape = batch_shape + event_shape`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name to give this op.
-
-##### Returns:
-
-
-* <b>`variance`</b>: Floating-point `Tensor` with shape identical to
- `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.softplus_inverse.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.softplus_inverse.md
deleted file mode 100644
index 6f97b1f959..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.distributions.softplus_inverse.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.contrib.distributions.softplus_inverse(x, name=None)` {#softplus_inverse}
-
-Computes the inverse softplus, i.e., x = softplus_inverse(softplus(x)).
-
-Mathematically this op is equivalent to:
-
-```none
-softplus_inverse = log(exp(x) - 1.)
-```
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor`. Non-negative (not enforced), floating-point.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `Tensor`. Has the same type/shape as input `x`.
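-
-A quick numerical sanity check (sketch):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([0.5, 1.0, 2.0])
-y = tf.contrib.distributions.softplus_inverse(tf.nn.softplus(x))
-# y recovers x up to floating-point error.
-```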
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.ffmpeg.decode_audio.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.ffmpeg.decode_audio.md
deleted file mode 100644
index 64aab3cffb..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.ffmpeg.decode_audio.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.contrib.ffmpeg.decode_audio(contents, file_format=None, samples_per_second=None, channel_count=None)` {#decode_audio}
-
-Create an op that decodes the contents of an audio file.
-
-Note that ffmpeg is free to select the "best" audio track from an mp4.
-https://trac.ffmpeg.org/wiki/Map
-
-##### Args:
-
-
-* <b>`contents`</b>: The binary contents of the audio file to decode. This is a
- scalar.
-* <b>`file_format`</b>: A string specifying which format the contents will conform
- to. This can be mp3, mp4, ogg, or wav.
-* <b>`samples_per_second`</b>: The number of samples per second that is assumed.
- In some cases, resampling will occur to generate the correct sample
- rate.
-* <b>`channel_count`</b>: The number of channels that should be created from the
- audio contents. If the contents have more than this number, then
- some channels will be merged or dropped. If contents has fewer than
- this, then additional channels will be created from the existing ones.
-
-##### Returns:
-
- A rank 2 tensor that has time along dimension 0 and channels along
- dimension 1. Dimension 0 will be `samples_per_second * length` wide, and
- dimension 1 will be `channel_count` wide. If ffmpeg fails to decode the
- audio then an empty tensor will be returned.
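-
-A hypothetical usage sketch (the file path is an assumption):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import ffmpeg
-
-binary = tf.read_file('/tmp/audio.mp3')  # assumed path
-waveform = ffmpeg.decode_audio(
-    binary, file_format='mp3', samples_per_second=44100, channel_count=2)
-# waveform: float Tensor of shape [samples, 2].
-```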
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.assign_from_checkpoint_fn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.assign_from_checkpoint_fn.md
deleted file mode 100644
index e4d183b990..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.assign_from_checkpoint_fn.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.contrib.framework.assign_from_checkpoint_fn(model_path, var_list, ignore_missing_vars=False, reshape_variables=False)` {#assign_from_checkpoint_fn}
-
-Returns a function that assigns specific variables from a checkpoint.
-
-##### Args:
-
-
-* <b>`model_path`</b>: The full path to the model checkpoint. To get latest checkpoint
- use `model_path = tf.train.latest_checkpoint(checkpoint_dir)`
-* <b>`var_list`</b>: A list of `Variable` objects or a dictionary mapping names in
- the checkpoint to the corresponding variables to initialize. If empty or
- None, it returns `no_op(), None`.
-* <b>`ignore_missing_vars`</b>: Boolean; if True, variables missing from the
- checkpoint are ignored with a warning instead of causing a failure.
-* <b>`reshape_variables`</b>: Boolean; if True, variables whose shape differs
- from the one stored in the checkpoint but which have the same number of
- elements are automatically reshaped.
-
-##### Returns:
-
- A function that takes a single argument, a `tf.Session`, that applies the
- assignment operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the checkpoint specified at `model_path` is missing one of
- the variables in `var_list`.
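-
-Typical usage (a sketch; the checkpoint path is an assumption and
-`get_variables_to_restore` is the usual companion helper):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import framework
-
-var_list = framework.get_variables_to_restore()
-init_fn = framework.assign_from_checkpoint_fn(
-    '/tmp/model.ckpt', var_list, ignore_missing_vars=True)
-with tf.Session() as sess:
-  init_fn(sess)
-```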
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.get_or_create_global_step.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.get_or_create_global_step.md
deleted file mode 100644
index bd9e41ee62..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.get_or_create_global_step.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.framework.get_or_create_global_step(graph=None)` {#get_or_create_global_step}
-
-Returns, and creates if necessary, the global step variable.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph in which to create the global step. If missing, use default
- graph.
-
-##### Returns:
-
- the tensor representing the global step variable.
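-
-For example:
-
-```python
-import tensorflow as tf
-
-global_step = tf.contrib.framework.get_or_create_global_step()
-```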
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.is_tensor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.is_tensor.md
deleted file mode 100644
index 9db3544e7e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.is_tensor.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.contrib.framework.is_tensor(x)` {#is_tensor}
-
-Check for tensor types.
-
-Check whether an object is a tensor. Equivalent to
-`isinstance(x, (tf.Tensor, tf.SparseTensor, tf.Variable))`.
-
-##### Args:
-
-
-* <b>`x`</b>: A Python object to check.
-
-##### Returns:
-
- `True` if `x` is a tensor, `False` if not.
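-
-For example:
-
-```python
-import tensorflow as tf
-
-tf.contrib.framework.is_tensor(tf.constant(1.0))  # ==> True
-tf.contrib.framework.is_tensor([1.0, 2.0])        # ==> False (plain list)
-```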
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.model_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.model_variable.md
deleted file mode 100644
index daa96911d9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.model_variable.md
+++ /dev/null
@@ -1,34 +0,0 @@
-### `tf.contrib.framework.model_variable(*args, **kwargs)` {#model_variable}
-
-Gets an existing model variable with these parameters or creates a new one.
-
-##### Args:
-
-
-* <b>`name`</b>: the name of the new or existing variable.
-* <b>`shape`</b>: shape of the new or existing variable.
-* <b>`dtype`</b>: type of the new or existing variable (defaults to `DT_FLOAT`).
-* <b>`initializer`</b>: initializer for the variable if one is created.
-* <b>`regularizer`</b>: a (Tensor -> Tensor or None) function; the result of
- applying it on a newly created variable will be added to the collection
- GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
-* <b>`trainable`</b>: If `True` also add the variable to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`collections`</b>: A list of collection names to which the Variable will be added.
- Note that the variable is always also added to the
- `GraphKeys.GLOBAL_VARIABLES` and `GraphKeys.MODEL_VARIABLES` collections.
-* <b>`caching_device`</b>: Optional device string or function describing where the
- Variable should be cached for reading. Defaults to the Variable's
- device.
-* <b>`device`</b>: Optional device to place the variable. It can be a string or a
- function that is called to get the device for the variable.
-* <b>`partitioner`</b>: Optional callable that accepts a fully defined `TensorShape`
- and dtype of the `Variable` to be created, and returns a list of
- partitions for each axis (currently only one axis can be partitioned).
-* <b>`custom_getter`</b>: Callable that allows overwriting the internal
- get_variable method and has to have the same signature.
-
-##### Returns:
-
- The created or existing variable.
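-
-A minimal sketch (shape and regularizer strength are illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import framework, layers
-
-weights = framework.model_variable(
-    'weights', shape=[784, 10],
-    initializer=tf.truncated_normal_initializer(stddev=0.1),
-    regularizer=layers.l2_regularizer(0.05))
-```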
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.remove_squeezable_dimensions.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.remove_squeezable_dimensions.md
deleted file mode 100644
index b444be2e1c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.framework.remove_squeezable_dimensions.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.framework.remove_squeezable_dimensions(predictions, labels, name=None)` {#remove_squeezable_dimensions}
-
-Squeeze last dim if ranks of `predictions` and `labels` differ by 1.
-
-This will use static shape if available. Otherwise, it will add graph
-operations, which could result in a performance hit.
-
-##### Args:
-
-
-* <b>`predictions`</b>: Predicted values, a `Tensor` of arbitrary dimensions.
-* <b>`labels`</b>: Label values, a `Tensor` whose dimensions match `predictions`.
-* <b>`name`</b>: Name of the op.
-
-##### Returns:
-
- Tuple of `predictions` and `labels`, possibly with last dim squeezed.
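-
-For example (a sketch):
-
-```python
-import tensorflow as tf
-
-predictions = tf.ones([8, 1])
-labels = tf.ones([8])
-predictions, labels = tf.contrib.framework.remove_squeezable_dimensions(
-    predictions, labels)
-# predictions now has shape [8], matching labels.
-```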
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.connect.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.connect.md
deleted file mode 100644
index 29d5633ef1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.connect.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### `tf.contrib.graph_editor.connect(sgv0, sgv1, disconnect_first=False)` {#connect}
-
-Connect the outputs of sgv0 to the inputs of sgv1.
-
-##### Args:
-
-
-* <b>`sgv0`</b>: the first subgraph, whose outputs are connected. This argument is
- converted to a subgraph using the same rules as the function
- subgraph.make_view.
- Note that sgv0 is modified in place.
-* <b>`sgv1`</b>: the second subgraph, whose inputs are connected. This argument is
- converted to a subgraph using the same rules as the function
- subgraph.make_view.
- Note that sgv1 is modified in place.
-* <b>`disconnect_first`</b>: if True the current outputs of sgv0 are disconnected.
-
-##### Returns:
-
- A tuple `(sgv0, sgv1)` of the now connected subgraphs.
-
-##### Raises:
-
-
-* <b>`StandardError`</b>: if sgv0 or sgv1 cannot be converted to a SubGraphView using
- the same rules as the function subgraph.make_view.
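-
-A hypothetical sketch (the scope names are assumptions; `make_view_from_scope`
-is documented below):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-g = tf.get_default_graph()
-sgv0 = ge.make_view_from_scope('encoder', g)  # assumed scope name
-sgv1 = ge.make_view_from_scope('decoder', g)  # assumed scope name
-ge.connect(sgv0, sgv1, disconnect_first=True)
-```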
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_generating_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_generating_ops.md
deleted file mode 100644
index ac42fc9272..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_generating_ops.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.contrib.graph_editor.get_generating_ops(ts)` {#get_generating_ops}
-
-Return all the generating ops of the tensors in `ts`.
-
-##### Args:
-
-
-* <b>`ts`</b>: a list of `tf.Tensor`
-
-##### Returns:
-
- A list of all the generating `tf.Operation` of the tensors in `ts`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `ts` cannot be converted to a list of `tf.Tensor`.
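-
-For example:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-t = tf.constant(1.0) + tf.constant(2.0)
-ge.get_generating_ops([t])  # ==> [t.op], the add operation
-```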
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_walks_union_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_walks_union_ops.md
deleted file mode 100644
index af6fa7c093..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_walks_union_ops.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.contrib.graph_editor.get_walks_union_ops(forward_seed_ops, backward_seed_ops, forward_inclusive=True, backward_inclusive=True, within_ops=None, control_inputs=False, control_outputs=None, control_ios=None)` {#get_walks_union_ops}
-
-Return the union of a forward and a backward walk.
-
-##### Args:
-
-
-* <b>`forward_seed_ops`</b>: an iterable of operations from which the forward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the consumers of those tensors.
-* <b>`backward_seed_ops`</b>: an iterable of operations from which the backward graph
- walk starts. If a list of tensors is given instead, the seed_ops are set
- to be the generators of those tensors.
-* <b>`forward_inclusive`</b>: if True the given forward_seed_ops are also part of the
- resulting set.
-* <b>`backward_inclusive`</b>: if True the given backward_seed_ops are also part of the
- resulting set.
-* <b>`within_ops`</b>: restrict the search within those operations. If within_ops is
- None, the search is done within the whole graph.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of util.ControlOutputs or None. If not None,
- control outputs are enabled.
-* <b>`control_ios`</b>: An instance of util.ControlOutputs or None. If not None, both
- control inputs and control outputs are enabled. This is equivalent to
- setting control_inputs to True and control_outputs to the
- util.ControlOutputs instance.
-
-##### Returns:
-
- A Python set of all the tf.Operation in the union of a forward and a
- backward walk.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if forward_seed_ops or backward_seed_ops or within_ops cannot be
- converted to a list of tf.Operation.
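-
-A minimal sketch on a toy graph:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-a = tf.constant(1.0, name='a')
-b = tf.identity(a, name='b')
-c = tf.identity(b, name='c')
-# Union of the forward walk from a.op and the backward walk from c.op.
-ops = ge.get_walks_union_ops([a.op], [c.op])
-```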
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_within_boundary_ops.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_within_boundary_ops.md
deleted file mode 100644
index d49459205b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.get_within_boundary_ops.md
+++ /dev/null
@@ -1,32 +0,0 @@
-### `tf.contrib.graph_editor.get_within_boundary_ops(ops, seed_ops, boundary_ops=(), inclusive=True, control_inputs=False, control_outputs=None, control_ios=None)` {#get_within_boundary_ops}
-
-Return all the `tf.Operation` within the given boundary.
-
-##### Args:
-
-
-* <b>`ops`</b>: an object convertible to a list of `tf.Operation`. Those ops define the
- set in which to perform the operation (if a `tf.Graph` is given, it
- will be converted to the list of all its operations).
-* <b>`seed_ops`</b>: the operations from which to start expanding.
-* <b>`boundary_ops`</b>: the ops forming the boundary.
-* <b>`inclusive`</b>: if `True`, the result will also include the boundary ops.
-* <b>`control_inputs`</b>: A boolean indicating whether control inputs are enabled.
-* <b>`control_outputs`</b>: An instance of `util.ControlOutputs` or `None`. If not
- `None`, control outputs are enabled.
-* <b>`control_ios`</b>: An instance of `util.ControlOutputs` or `None`. If not
- `None`, both control inputs and control outputs are enabled. This is
- equivalent to setting control_inputs to True and control_outputs to
- the `util.ControlOutputs` instance.
-
-##### Returns:
-
- All the `tf.Operation` surrounding the given ops.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `ops` or `seed_ops` cannot be converted to a list of
- `tf.Operation`.
-* <b>`ValueError`</b>: if the boundary is intersecting with the seeds.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.make_view_from_scope.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.make_view_from_scope.md
deleted file mode 100644
index 5d1fde9416..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.make_view_from_scope.md
+++ /dev/null
@@ -1,14 +0,0 @@
-### `tf.contrib.graph_editor.make_view_from_scope(scope, graph)` {#make_view_from_scope}
-
-Make a subgraph from a name scope.
-
-##### Args:
-
-
-* <b>`scope`</b>: the name of the scope.
-* <b>`graph`</b>: the `tf.Graph`.
-
-##### Returns:
-
- A subgraph view representing the given scope.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.reroute_ts.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.reroute_ts.md
deleted file mode 100644
index c3a5132331..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.graph_editor.reroute_ts.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.graph_editor.reroute_ts(ts0, ts1, can_modify=None, cannot_modify=None)` {#reroute_ts}
-
-For each pair of tensors `(t0, t1)`, replace the end of `t1` by the end of `t0`.
-
-```none
-B0 B1     B0 B1
-|  |  =>  |/
-A0 A1     A0 A1
-```
-
-The ends of the tensors in `ts1` are left dangling.
-
-##### Args:
-
-
-* <b>`ts0`</b>: an object convertible to a list of `tf.Tensor`.
-* <b>`ts1`</b>: an object convertible to a list of `tf.Tensor`.
-* <b>`can_modify`</b>: iterable of operations which can be modified. Any operation
- outside of `can_modify` will be left untouched by this function.
-* <b>`cannot_modify`</b>: iterable of operations which cannot be modified. Any
- operation within cannot_modify will be left untouched by this function.
-
-##### Returns:
-
- The number of individual modifications made by the function.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if ts0 or ts1 cannot be converted to a list of tf.Tensor.
-* <b>`TypeError`</b>: if can_modify or cannot_modify is not None and cannot be
- converted to a list of tf.Operation.
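-
-A minimal sketch:
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import graph_editor as ge
-
-t0 = tf.constant(1.0)
-t1 = tf.constant(2.0)
-out = tf.identity(t1)      # out currently reads t1
-ge.reroute_ts([t0], [t1])  # consumers of t1 now read t0; t1 is left dangling
-```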
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.create_feature_spec_for_parsing.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.create_feature_spec_for_parsing.md
deleted file mode 100644
index 898cecc117..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.create_feature_spec_for_parsing.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.contrib.layers.create_feature_spec_for_parsing(feature_columns)` {#create_feature_spec_for_parsing}
-
-Helper that prepares features config from input feature_columns.
-
-The returned feature config can be used as arg 'features' in tf.parse_example.
-
-Typical usage example:
-
-```python
-# Define features and transformations
-feature_a = sparse_column_with_vocabulary_file(...)
-feature_b = real_valued_column(...)
-feature_c_bucketized = bucketized_column(real_valued_column("feature_c"), ...)
-feature_a_x_feature_c = crossed_column(
- columns=[feature_a, feature_c_bucketized], ...)
-
-feature_columns = set(
- [feature_b, feature_c_bucketized, feature_a_x_feature_c])
-batch_examples = tf.parse_example(
- serialized=serialized_examples,
- features=create_feature_spec_for_parsing(feature_columns))
-```
-
-For the above example, create_feature_spec_for_parsing would return the dict:
-
-```python
-{
-    "feature_a": parsing_ops.VarLenFeature(tf.string),
-    "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
-    "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32)
-}
-```
-
-##### Args:
-
-
-* <b>`feature_columns`</b>: An iterable containing all the feature columns. All items
- should be instances of classes derived from _FeatureColumn, unless
- feature_columns is a dict -- in which case, this should be true of all
- values in the dict.
-
-##### Returns:
-
- A dict mapping feature keys to FixedLenFeature or VarLenFeature values.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.joint_weighted_sum_from_feature_columns.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.joint_weighted_sum_from_feature_columns.md
deleted file mode 100644
index ccb2e6a606..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.joint_weighted_sum_from_feature_columns.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### `tf.contrib.layers.joint_weighted_sum_from_feature_columns(columns_to_tensors, feature_columns, num_outputs, weight_collections=None, trainable=True, scope=None)` {#joint_weighted_sum_from_feature_columns}
-
-A restricted linear prediction builder based on FeatureColumns.
-
-As long as all feature columns are unweighted sparse columns, this computes
-the prediction of a linear model which stores all weights in a single
-variable.
-
-##### Args:
-
-
-* <b>`columns_to_tensors`</b>: A mapping from feature column to tensors. A
- 'string' key means a base (untransformed) feature. A FeatureColumn key
- means the feature has already been transformed by the input pipeline.
- For example, `inflow` may have handled transformations.
-* <b>`feature_columns`</b>: A set containing all the feature columns. All items in the
- set should be instances of classes derived from FeatureColumn.
-* <b>`num_outputs`</b>: An integer specifying number of outputs. Default value is 1.
-* <b>`weight_collections`</b>: List of graph collections to which weights are added.
-* <b>`trainable`</b>: If `True` also add variables to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
-* <b>`scope`</b>: Optional scope for variable_scope.
-
-##### Returns:
-
- A tuple containing:
-
- * A Tensor which represents predictions of a linear model.
- * A list of Variables storing the weights.
- * A Variable which is used for bias.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if FeatureColumn cannot be used for linear predictions.
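-
-A hedged sketch (the column and input tensor are illustrative):
-
-```python
-import tensorflow as tf
-from tensorflow.contrib import layers
-
-country = layers.sparse_column_with_hash_bucket('country', hash_bucket_size=5)
-features = {'country': tf.SparseTensor(
-    indices=[[0, 0], [1, 0]], values=['US', 'CA'], dense_shape=[2, 1])}
-predictions, weight_vars, bias = (
-    layers.joint_weighted_sum_from_feature_columns(
-        features, [country], num_outputs=1))
-```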
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.l1_regularizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.l1_regularizer.md
deleted file mode 100644
index edf410d1da..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.l1_regularizer.md
+++ /dev/null
@@ -1,21 +0,0 @@
-### `tf.contrib.layers.l1_regularizer(scale, scope=None)` {#l1_regularizer}
-
-Returns a function that can be used to apply L1 regularization to weights.
-
-L1 regularization encourages sparsity.
-
-##### Args:
-
-
-* <b>`scale`</b>: A scalar multiplier `Tensor`. 0.0 disables the regularizer.
-* <b>`scope`</b>: An optional scope name.
-
-##### Returns:
-
- A function with signature `l1(weights)` that applies L1 regularization.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If scale is negative or if scale is not a float.
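-
-For example:
-
-```python
-import tensorflow as tf
-
-l1 = tf.contrib.layers.l1_regularizer(scale=0.01)
-weights = tf.get_variable('weights', shape=[10], regularizer=l1)
-penalty = l1(weights)  # or let the graph collection gather it
-```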
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.xavier_initializer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.xavier_initializer.md
deleted file mode 100644
index 55631e4b05..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.layers.xavier_initializer.md
+++ /dev/null
@@ -1,29 +0,0 @@
-### `tf.contrib.layers.xavier_initializer(uniform=True, seed=None, dtype=tf.float32)` {#xavier_initializer}
-
-Returns an initializer performing "Xavier" initialization for weights.
-
-This function implements the weight initialization from:
-
-Xavier Glorot and Yoshua Bengio (2010):
- Understanding the difficulty of training deep feedforward neural
- networks. International conference on artificial intelligence and
- statistics.
-
-This initializer is designed to keep the scale of the gradients roughly the
-same in all layers. For a uniform distribution this ends up being the range
-`x = sqrt(6. / (in + out)); [-x, x]`, and for a normal distribution a standard
-deviation of `sqrt(3. / (in + out))` is used.
-
-##### Args:
-
-
-* <b>`uniform`</b>: Whether to use uniform or normal distributed random initialization.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`dtype`</b>: The data type. Only floating point types are supported.
-
-##### Returns:
-
- An initializer for a weight matrix.
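-
-For example:
-
-```python
-import tensorflow as tf
-
-init = tf.contrib.layers.xavier_initializer(uniform=True)
-w = tf.get_variable('w', shape=[784, 256], initializer=init)
-```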
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.DNNLinearCombinedRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.DNNLinearCombinedRegressor.md
deleted file mode 100644
index dea9500a81..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.DNNLinearCombinedRegressor.md
+++ /dev/null
@@ -1,408 +0,0 @@
-A regressor for TensorFlow Linear and DNN joined training models.
-
-Example:
-
-```python
-sparse_feature_a = sparse_column_with_hash_bucket(...)
-sparse_feature_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_x_sparse_feature_b = crossed_column(...)
-
-sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
- ...)
-sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
- ...)
-
-estimator = DNNLinearCombinedRegressor(
- # common settings
- weight_column_name=weight_column_name,
- # wide settings
- linear_feature_columns=[sparse_feature_a_x_sparse_feature_b],
- linear_optimizer=tf.train.FtrlOptimizer(...),
- # deep settings
- dnn_feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- dnn_hidden_units=[1000, 500, 100],
- dnn_optimizer=tf.train.ProximalAdagradOptimizer(...))
-
-# To apply L1 and L2 regularization, you can set optimizers as follows:
-tf.train.ProximalAdagradOptimizer(
- learning_rate=0.1,
- l1_regularization_strength=0.001,
- l2_regularization_strength=0.001)
-# The same applies to FtrlOptimizer.
-
-# Input builders
-def input_fn_train(): # returns x, y
- ...
-def input_fn_eval(): # returns x, y
- ...
-estimator.train(input_fn_train)
-estimator.evaluate(input_fn_eval)
-estimator.predict(x)
-```
-
-Input of `fit`, `train`, and `evaluate` should have the following features,
- otherwise there will be a `KeyError`:
- if `weight_column_name` is not `None`, a feature with
- `key=weight_column_name` whose value is a `Tensor`.
- for each `column` in `dnn_feature_columns` + `linear_feature_columns`:
- - if `column` is a `SparseColumn`, a feature with `key=column.name`
- whose `value` is a `SparseTensor`.
- - if `column` is a `WeightedSparseColumn`, two features: the first with
- `key` the id column name, the second with `key` the weight column name.
- Both features' `value` must be a `SparseTensor`.
- - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
- whose `value` is a `Tensor`.
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.__init__(model_dir=None, weight_column_name=None, linear_feature_columns=None, linear_optimizer=None, _joint_linear_weights=False, dnn_feature_columns=None, dnn_optimizer=None, dnn_hidden_units=None, dnn_activation_fn=relu, dnn_dropout=None, gradient_clip_norm=None, enable_centered_bias=False, label_dimension=1, config=None, feature_engineering_fn=None, embedding_lr_multipliers=None, input_layer_min_slice_size=None)` {#DNNLinearCombinedRegressor.__init__}
-
-Initializes a DNNLinearCombinedRegressor instance.
-
-##### Args:
-
-
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
- also be used to load checkpoints from the directory into an estimator
- to continue training a previously saved model.
-* <b>`weight_column_name`</b>: A string defining the feature column name that
- represents weights. It is used to down-weight or boost examples during
- training, and is multiplied by the loss of the example.
-* <b>`linear_feature_columns`</b>: An iterable containing all the feature columns
- used by linear part of the model. All items in the set must be
- instances of classes derived from `FeatureColumn`.
-* <b>`linear_optimizer`</b>: An instance of `tf.Optimizer` used to apply gradients to
- the linear part of the model. If `None`, will use a FTRL optimizer.
- _joint_linear_weights: If True a single (possibly partitioned) variable
- will be used to store the linear model weights. It's faster, but
- requires that all columns are sparse and have the 'sum' combiner.
-
-* <b>`dnn_feature_columns`</b>: An iterable containing all the feature columns used
- by deep part of the model. All items in the set must be instances of
- classes derived from `FeatureColumn`.
-* <b>`dnn_optimizer`</b>: An instance of `tf.Optimizer` used to apply gradients to
- the deep part of the model. If `None`, will use an Adagrad optimizer.
-* <b>`dnn_hidden_units`</b>: List of hidden units per layer. All layers are fully
- connected.
-* <b>`dnn_activation_fn`</b>: Activation function applied to each layer. If None,
- will use `tf.nn.relu`.
-* <b>`dnn_dropout`</b>: When not None, the probability we will drop out
- a given coordinate.
-* <b>`gradient_clip_norm`</b>: A float > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- tf.clip_by_global_norm for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`label_dimension`</b>: Number of regression targets per example. This is the
- size of the last dimension of the labels and logits `Tensor` objects
- (typically, these have shape `[batch_size, label_dimension]`).
-* <b>`config`</b>: RunConfig object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-* <b>`embedding_lr_multipliers`</b>: Optional. A dictionary from `EmbeddingColumn` to
- a `float` multiplier. Multiplier will be used to multiply with
- learning rate for the embedding variables.
-* <b>`input_layer_min_slice_size`</b>: Optional. The min slice size of input layer
- partitions. If not provided, will use the default of 64M.
-
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both linear_feature_columns and dnn_features_columns are
- empty at the same time.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.__repr__()` {#DNNLinearCombinedRegressor.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.config` {#DNNLinearCombinedRegressor.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=None, steps=None, metrics=None, name=None, checkpoint_path=None, hooks=None)` {#DNNLinearCombinedRegressor.evaluate}
-
-See evaluable.Evaluable.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#DNNLinearCombinedRegressor.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#DNNLinearCombinedRegressor.export_savedmodel}
-
-Exports inference graph as a SavedModel into given dir.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.fit(*args, **kwargs)` {#DNNLinearCombinedRegressor.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
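-
-A hedged sketch of the conversion (constructor arguments are elided as `...`
-in the style of these docs; `my_x` and `my_y` are illustrative NumPy arrays,
-not part of this API):
-
-```python
-est = tf.contrib.learn.SKCompat(
-    tf.contrib.learn.DNNLinearCombinedRegressor(...))
-# SKCompat exposes the Scikit Learn style x/y/batch_size arguments;
-# the bare estimator now only accepts input_fn.
-est.fit(x=my_x, y=my_y, batch_size=128, steps=1000)
-```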
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.get_params(deep=True)` {#DNNLinearCombinedRegressor.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.get_variable_names()` {#DNNLinearCombinedRegressor.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.get_variable_value(name)` {#DNNLinearCombinedRegressor.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.model_dir` {#DNNLinearCombinedRegressor.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.partial_fit(*args, **kwargs)` {#DNNLinearCombinedRegressor.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. It can be used to
-implement either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model takes a long time
-to converge and you want to split training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
- iterator that returns array of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.predict(*args, **kwargs)` {#DNNLinearCombinedRegressor.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_scores, or set `outputs` argument.
-
-By default, returns predicted scores. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_scores` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, names of the outputs to predict.
-  If `None`, returns scores.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
- If `outputs` is set, returns a dict of predictions.
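-
-A hedged usage sketch following the deprecation advice above
-(`input_fn_predict` and `process` are assumptions, not part of these docs):
-
-```python
-# Stream one predicted score per example; the input_fn must terminate
-# (e.g. use num_epochs=1) for the iterable to terminate.
-for score in estimator.predict_scores(input_fn=input_fn_predict,
-                                      as_iterable=True):
-  process(score)
-```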
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.predict_scores(*args, **kwargs)` {#DNNLinearCombinedRegressor.predict_scores}
-
-Returns predicted scores for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNLinearCombinedRegressor.set_params(**params)` {#DNNLinearCombinedRegressor.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The latter have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.DNNRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.DNNRegressor.md
deleted file mode 100644
index 017234aa0c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.DNNRegressor.md
+++ /dev/null
@@ -1,393 +0,0 @@
-A regressor for TensorFlow DNN models.
-
-Example:
-
-```python
-sparse_feature_a = sparse_column_with_hash_bucket(...)
-sparse_feature_b = sparse_column_with_hash_bucket(...)
-
-sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a,
- ...)
-sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b,
- ...)
-
-estimator = DNNRegressor(
-    feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- hidden_units=[1024, 512, 256])
-
-# Or estimator using the ProximalAdagradOptimizer optimizer with
-# regularization.
-estimator = DNNRegressor(
-    feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
- hidden_units=[1024, 512, 256],
- optimizer=tf.train.ProximalAdagradOptimizer(
- learning_rate=0.1,
- l1_regularization_strength=0.001
- ))
-
-# Input builders
-def input_fn_train():  # returns x, y
-  pass
-estimator.fit(input_fn=input_fn_train)
-
-def input_fn_eval():  # returns x, y
-  pass
-estimator.evaluate(input_fn=input_fn_eval)
-estimator.predict(x=x)
-```
-
-Input of `fit` and `evaluate` should have the following features
-  (a sketch follows this list), otherwise there will be a `KeyError`:
-
-* if `weight_column_name` is not `None`, a feature with
- `key=weight_column_name` whose value is a `Tensor`.
-* for each `column` in `feature_columns`:
- - if `column` is a `SparseColumn`, a feature with `key=column.name`
- whose `value` is a `SparseTensor`.
- - if `column` is a `WeightedSparseColumn`, two features: the first with
- `key` the id column name, the second with `key` the weight column name.
- Both features' `value` must be a `SparseTensor`.
- - if `column` is a `RealValuedColumn`, a feature with `key=column.name`
- whose `value` is a `Tensor`.
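-
-A minimal sketch of an `input_fn` producing such a feature dict (the feature
-names, shapes and values below are assumptions for illustration only):
-
-```python
-def input_fn_train():
-  features = {
-      # SparseColumn -> SparseTensor keyed by the column name.
-      'sparse_feature_a': tf.SparseTensor(indices=[[0, 0]], values=['token'],
-                                          dense_shape=[1, 1]),
-      # Optional per-example weights keyed by weight_column_name.
-      'example_weights': tf.constant([[1.0]]),
-  }
-  labels = tf.constant([[0.5]])
-  return features, labels
-```
-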
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.__init__(hidden_units, feature_columns, model_dir=None, weight_column_name=None, optimizer=None, activation_fn=relu, dropout=None, gradient_clip_norm=None, enable_centered_bias=False, config=None, feature_engineering_fn=None, label_dimension=1, embedding_lr_multipliers=None, input_layer_min_slice_size=None)` {#DNNRegressor.__init__}
-
-Initializes a `DNNRegressor` instance.
-
-##### Args:
-
-
-* <b>`hidden_units`</b>: List of hidden units per layer. All layers are fully
- connected. Ex. `[64, 32]` means first layer has 64 nodes and second one
- has 32.
-* <b>`feature_columns`</b>: An iterable containing all the feature columns used by
- the model. All items in the set should be instances of classes derived
- from `FeatureColumn`.
-* <b>`model_dir`</b>: Directory to save model parameters, graph, etc. This can
-  also be used to load checkpoints from the directory into an estimator to
-  continue training a previously saved model.
-* <b>`weight_column_name`</b>: A string defining feature column name representing
-  weights. It is used to down-weight or boost examples during training. It
-  will be multiplied by the loss of the example.
-* <b>`optimizer`</b>: An instance of `tf.Optimizer` used to train the model. If
- `None`, will use an Adagrad optimizer.
-* <b>`activation_fn`</b>: Activation function applied to each layer. If `None`, will
- use `tf.nn.relu`.
-* <b>`dropout`</b>: When not `None`, the probability we will drop out a given
- coordinate.
-* <b>`gradient_clip_norm`</b>: A `float` > 0. If provided, gradients are clipped
- to their global norm with this clipping ratio. See
- `tf.clip_by_global_norm` for more details.
-* <b>`enable_centered_bias`</b>: A bool. If True, estimator will learn a centered
- bias variable for each class. Rest of the model structure learns the
- residual after centered bias.
-* <b>`config`</b>: `RunConfig` object to configure the runtime settings.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-* <b>`label_dimension`</b>: Number of regression targets per example. This is the
- size of the last dimension of the labels and logits `Tensor` objects
- (typically, these have shape `[batch_size, label_dimension]`).
-* <b>`embedding_lr_multipliers`</b>: Optional. A dictionary from `EmbeddingColumn` to
- a `float` multiplier. Multiplier will be used to multiply with
- learning rate for the embedding variables.
-* <b>`input_layer_min_slice_size`</b>: Optional. The min slice size of input layer
- partitions. If not provided, will use the default of 64M.
-
-##### Returns:
-
- A `DNNRegressor` estimator.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.__repr__()` {#DNNRegressor.__repr__}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.config` {#DNNRegressor.config}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.evaluate(x=None, y=None, input_fn=None, feed_fn=None, batch_size=None, steps=None, metrics=None, name=None, checkpoint_path=None, hooks=None)` {#DNNRegressor.evaluate}
-
-See evaluable.Evaluable.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.export(export_dir, input_fn=None, input_feature_key=None, use_deprecated_input_fn=True, signature_fn=None, default_batch_size=1, exports_to_keep=None)` {#DNNRegressor.export}
-
-See BaseEstimator.export.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.export_savedmodel(export_dir_base, serving_input_fn, default_output_alternative_key=None, assets_extra=None, as_text=False, checkpoint_path=None)` {#DNNRegressor.export_savedmodel}
-
-Exports the inference graph as a `SavedModel` into the given directory.
-
-##### Args:
-
-
-* <b>`export_dir_base`</b>: A string containing a directory to write the exported
- graph and checkpoints.
-* <b>`serving_input_fn`</b>: A function that takes no argument and
- returns an `InputFnOps`.
-* <b>`default_output_alternative_key`</b>: the name of the head to serve when none is
- specified. Not needed for single-headed models.
-* <b>`assets_extra`</b>: A dict specifying how to populate the assets.extra directory
- within the exported SavedModel. Each key should give the destination
- path (including the filename) relative to the assets.extra directory.
- The corresponding value gives the full path of the source file to be
- copied. For example, the simple case of copying a single file without
- renaming it is specified as
- `{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
-* <b>`as_text`</b>: whether to write the SavedModel proto in text format.
-* <b>`checkpoint_path`</b>: The checkpoint path to export. If None (the default),
- the most recent checkpoint found within the model directory is chosen.
-
-##### Returns:
-
- The string path to the exported directory.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if an unrecognized export_type is requested.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.fit(*args, **kwargs)` {#DNNRegressor.fit}
-
-See `Trainable`. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` or `y` are not `None` while `input_fn` is not `None`.
-* <b>`ValueError`</b>: If both `steps` and `max_steps` are not `None`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.get_params(deep=True)` {#DNNRegressor.get_params}
-
-Get parameters for this estimator.
-
-##### Args:
-
-
-* <b>`deep`</b>: boolean, optional
-
- If `True`, will return the parameters for this estimator and
- contained subobjects that are estimators.
-
-##### Returns:
-
- params : mapping of string to any
- Parameter names mapped to their values.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.get_variable_names()` {#DNNRegressor.get_variable_names}
-
-Returns list of all variable names in this model.
-
-##### Returns:
-
- List of names.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.get_variable_value(name)` {#DNNRegressor.get_variable_value}
-
-Returns value of the variable given by name.
-
-##### Args:
-
-
-* <b>`name`</b>: string, name of the tensor.
-
-##### Returns:
-
- Numpy array - value of the tensor.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.model_dir` {#DNNRegressor.model_dir}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.partial_fit(*args, **kwargs)` {#DNNRegressor.partial_fit}
-
-Incremental fit on a batch of samples. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-12-01.
-Instructions for updating:
-Estimator is decoupled from Scikit Learn interface by moving into
-separate class SKCompat. Arguments x, y and batch_size are only
-available in the SKCompat class, Estimator will only accept input_fn.
-
-##### Example conversion:
-
- est = Estimator(...) -> est = SKCompat(Estimator(...))
-
-This method is expected to be called several times consecutively
-on different or the same chunks of the dataset. It can be used to
-implement either iterative training or out-of-core/online training.
-
-This is especially useful when the whole dataset is too big to
-fit in memory at the same time, or when the model takes a long time
-to converge and you want to split training into subparts.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...]. Can be iterator that
- returns arrays of features. The training input samples for fitting the
- model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
- iterator that returns array of labels. The training label values
- (class labels in classification, real numbers in regression). If set,
- `input_fn` must be `None`.
-* <b>`input_fn`</b>: Input function. If set, `x`, `y`, and `batch_size` must be
- `None`.
-* <b>`steps`</b>: Number of steps for which to train model. If `None`, train forever.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-
-##### Returns:
-
- `self`, for chaining.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If at least one of `x` and `y` is provided, and `input_fn` is
- provided.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.predict(*args, **kwargs)` {#DNNRegressor.predict}
-
-Returns predictions for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2017-03-01.
-Instructions for updating:
-Please switch to predict_scores, or set `outputs` argument.
-
-By default, returns predicted scores. But this default will be dropped
-soon. Users should either pass `outputs`, or call `predict_scores` method.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`outputs`</b>: list of `str`, names of the outputs to predict.
-  If `None`, returns scores.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
- If `outputs` is set, returns a dict of predictions.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.predict_scores(*args, **kwargs)` {#DNNRegressor.predict_scores}
-
-Returns predicted scores for given features. (deprecated arguments)
-
-SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-15.
-Instructions for updating:
-The default behavior of predict() is changing. The default value for
-as_iterable will change to True, and then the flag will be removed
-altogether. The behavior of this flag is described below.
-
-##### Args:
-
-
-* <b>`x`</b>: features.
-* <b>`input_fn`</b>: Input function. If set, x must be None.
-* <b>`batch_size`</b>: Override default batch size.
-* <b>`as_iterable`</b>: If True, return an iterable which keeps yielding predictions
- for each example until inputs are exhausted. Note: The inputs must
- terminate if you want the iterable to terminate (e.g. be sure to pass
- num_epochs=1 if you are using something like read_batch_features).
-
-##### Returns:
-
- Numpy array of predicted scores (or an iterable of predicted scores if
- as_iterable is True). If `label_dimension == 1`, the shape of the output
- is `[batch_size]`, otherwise the shape is `[batch_size, label_dimension]`.
-
-
-- - -
-
-#### `tf.contrib.learn.DNNRegressor.set_params(**params)` {#DNNRegressor.set_params}
-
-Set the parameters of this estimator.
-
-The method works on simple estimators as well as on nested objects
-(such as pipelines). The latter have parameters of the form
-``<component>__<parameter>`` so that it's possible to update each
-component of a nested object.
-
-##### Args:
-
-
-* <b>`**params`</b>: Parameters.
-
-##### Returns:
-
- self
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If params contain invalid names.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.InputFnOps.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.InputFnOps.__new__.md
deleted file mode 100644
index 147c6e6ed8..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.InputFnOps.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.contrib.learn.InputFnOps.__new__(_cls, features, labels, default_inputs)` {#InputFnOps.__new__}
-
-Create new instance of InputFnOps(features, labels, default_inputs)
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.LogisticRegressor.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.LogisticRegressor.md
deleted file mode 100644
index 3b420913ed..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.LogisticRegressor.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.contrib.learn.LogisticRegressor(model_fn, thresholds=None, model_dir=None, config=None, feature_engineering_fn=None)` {#LogisticRegressor}
-
-Builds a logistic regression Estimator for binary classification.
-
-This method provides a basic Estimator with some additional metrics for custom
-binary classification models, including AUC, precision/recall and accuracy.
-
-Example:
-
-```python
- # See tf.contrib.learn.Estimator(...) for details on model_fn structure
- def my_model_fn(...):
- pass
-
- estimator = LogisticRegressor(model_fn=my_model_fn)
-
- # Input builders
-  def input_fn_train():
-    pass
-
- estimator.fit(input_fn=input_fn_train)
- estimator.predict(x=x)
-```
-
-##### Args:
-
-
-* <b>`model_fn`</b>: Model function with the signature:
- `(features, labels, mode) -> (predictions, loss, train_op)`.
- Expects the returned predictions to be probabilities in [0.0, 1.0].
-* <b>`thresholds`</b>: List of floating point thresholds to use for accuracy,
- precision, and recall metrics. If `None`, defaults to `[0.5]`.
-* <b>`model_dir`</b>: Directory to save model parameters, graphs, etc. This can also
-  be used to load checkpoints from the directory into an estimator to
-  continue training a previously saved model.
-* <b>`config`</b>: A RunConfig configuration object.
-* <b>`feature_engineering_fn`</b>: Feature engineering function. Takes features and
- labels which are the output of `input_fn` and
- returns features and labels which will be fed
- into the model.
-
-##### Returns:
-
- A `tf.contrib.learn.Estimator` instance.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.Trainable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.Trainable.md
deleted file mode 100644
index 944903d1f0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.Trainable.md
+++ /dev/null
@@ -1,45 +0,0 @@
-Interface for objects that are trainable by, e.g., `Experiment`.
-- - -
-
-#### `tf.contrib.learn.Trainable.fit(x=None, y=None, input_fn=None, steps=None, batch_size=None, monitors=None, max_steps=None)` {#Trainable.fit}
-
-Trains a model given training data: features `x` and labels `y`.
-
-##### Args:
-
-
-* <b>`x`</b>: Matrix of shape [n_samples, n_features...] or the dictionary of Matrices.
- Can be iterator that returns arrays of features or dictionary of arrays of features.
- The training input samples for fitting the model. If set, `input_fn` must be `None`.
-* <b>`y`</b>: Vector or matrix [n_samples] or [n_samples, n_outputs] or the dictionary of same.
- Can be iterator that returns array of labels or dictionary of array of labels.
- The training label values (class labels in classification, real numbers in regression).
- If set, `input_fn` must be `None`. Note: For classification, label values must
- be integers representing the class index (i.e. values from 0 to
- n_classes-1).
-* <b>`input_fn`</b>: Input function returning a tuple of:
- features - `Tensor` or dictionary of string feature name to `Tensor`.
- labels - `Tensor` or dictionary of `Tensor` with labels.
- If input_fn is set, `x`, `y`, and `batch_size` must be `None`.
-* <b>`steps`</b>: Number of steps for which to train the model. If `None`, train
-  forever. `steps` works incrementally: calling `fit(steps=10)` twice trains
-  for 20 steps in total. If you do not want this incremental behavior, set
-  `max_steps` instead (see the sketch after this section). If set, `max_steps`
-  must be `None`.
-* <b>`batch_size`</b>: minibatch size to use on the input, defaults to first
- dimension of `x`. Must be `None` if `input_fn` is provided.
-* <b>`monitors`</b>: List of `BaseMonitor` subclass instances. Used for callbacks
- inside the training loop.
-* <b>`max_steps`</b>: Number of total steps for which to train model. If `None`,
- train forever. If set, `steps` must be `None`.
-
-  Two calls to `fit(steps=100)` mean 200 training
-  iterations. On the other hand, two calls to `fit(max_steps=100)` mean
-  that the second call will not do any iterations, since the first call
-  already did all 100 steps.
-
-##### Returns:
-
- `self`, for chaining.
-
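-A sketch of the `steps` vs `max_steps` semantics referenced above (the
-`estimator` and `input_fn_train` names are assumptions):
-
-```python
-# Incremental: 10 + 10 = 20 training steps in total.
-estimator.fit(input_fn=input_fn_train, steps=10)
-estimator.fit(input_fn=input_fn_train, steps=10)
-
-# Absolute: the second call is a no-op, the global step is already at 100.
-estimator.fit(input_fn=input_fn_train, max_steps=100)
-estimator.fit(input_fn=input_fn_train, max_steps=100)
-```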
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.infer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.infer.md
deleted file mode 100644
index 32268ffd16..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.infer.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.learn.infer(*args, **kwargs)` {#infer}
-
-Restore graph from `restore_checkpoint_path` and run `output_dict` tensors. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-02-15.
-Instructions for updating:
-graph_actions.py will be deleted. Use tf.train.* utilities instead. You can use learn/estimators/estimator.py as an example.
-
-If `restore_checkpoint_path` is supplied, restore from checkpoint. Otherwise,
-init all variables.
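-
-A hedged usage sketch (tensor and path names below are illustrative, not from
-these docs):
-
-```python
-results = tf.contrib.learn.infer(
-    restore_checkpoint_path='/tmp/my_model/model.ckpt-1000',
-    output_dict={'probabilities': probabilities_tensor},
-    feed_dict={input_placeholder: input_batch})
-probs = results['probabilities']
-```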
-
-##### Args:
-
-
-* <b>`restore_checkpoint_path`</b>: A string containing the path to a checkpoint to
- restore.
-* <b>`output_dict`</b>: A `dict` mapping string names to `Tensor` objects to run.
- Tensors must all be from the same graph.
-* <b>`feed_dict`</b>: `dict` object mapping `Tensor` objects to input values to feed.
-
-##### Returns:
-
- Dict of values read from `output_dict` tensors. Keys are the same as
- `output_dict`, values are the results read from the corresponding `Tensor`
- in `output_dict`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `output_dict` or `feed_dict` is None or empty.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.monitors.LoggingTrainable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.monitors.LoggingTrainable.md
deleted file mode 100644
index 1d94d6e1f3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.learn.monitors.LoggingTrainable.md
+++ /dev/null
@@ -1,184 +0,0 @@
-Writes trainable variable values to the log every N steps.
-
-Writes the tensors in trainable variables every `every_n` steps,
-starting with the `first_n`-th step.
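-
-A minimal sketch of attaching this monitor to training (the `estimator` and
-`input_fn_train` names are assumptions):
-
-```python
-monitor = tf.contrib.learn.monitors.LoggingTrainable(scope='dnn', every_n=50)
-estimator.fit(input_fn=input_fn_train, steps=1000, monitors=[monitor])
-```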
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.__init__(scope=None, every_n=100, first_n=1)` {#LoggingTrainable.__init__}
-
-Initializes LoggingTrainable monitor.
-
-##### Args:
-
-
-* <b>`scope`</b>: An optional string to match variable names using re.match.
-* <b>`every_n`</b>: Print every N steps.
-* <b>`first_n`</b>: Print first N steps.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.begin(max_steps=None)` {#LoggingTrainable.begin}
-
-Called at the beginning of training.
-
-When called, the default graph is the one we are executing.
-
-##### Args:
-
-
-* <b>`max_steps`</b>: `int`, the maximum global step this training will run until.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun a run.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.end(session=None)` {#LoggingTrainable.end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.epoch_begin(epoch)` {#LoggingTrainable.epoch_begin}
-
-Begin epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've already begun an epoch, or `epoch` < 0.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.epoch_end(epoch)` {#LoggingTrainable.epoch_end}
-
-End epoch.
-
-##### Args:
-
-
-* <b>`epoch`</b>: `int`, the epoch number.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if we've not begun an epoch, or `epoch` number does not match.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.every_n_post_step(step, session)` {#LoggingTrainable.every_n_post_step}
-
-Callback after a step is finished or `end()` is called.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`session`</b>: `Session` object.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.every_n_step_begin(step)` {#LoggingTrainable.every_n_step_begin}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.every_n_step_end(step, outputs)` {#LoggingTrainable.every_n_step_end}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.post_step(step, session)` {#LoggingTrainable.post_step}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.run_on_all_workers` {#LoggingTrainable.run_on_all_workers}
-
-
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.set_estimator(estimator)` {#LoggingTrainable.set_estimator}
-
-A setter called automatically by the target estimator.
-
-If the estimator is locked, this method does nothing.
-
-##### Args:
-
-
-* <b>`estimator`</b>: the estimator that this monitor monitors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the estimator is None.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.step_begin(step)` {#LoggingTrainable.step_begin}
-
-Overrides `BaseMonitor.step_begin`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-
-##### Returns:
-
- A `list`, the result of every_n_step_begin, if that was called this step,
- or an empty list otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if called more than once during a step.
-
-
-- - -
-
-#### `tf.contrib.learn.monitors.LoggingTrainable.step_end(step, output)` {#LoggingTrainable.step_end}
-
-Overrides `BaseMonitor.step_end`.
-
-When overriding this method, you must call the super implementation.
-
-##### Args:
-
-
-* <b>`step`</b>: `int`, the current value of the global step.
-* <b>`output`</b>: `dict` mapping `string` values representing tensor names to
- the value resulted from running these tensors. Values may be either
- scalars, for scalar tensors, or Numpy `array`, for non-scalar tensors.
-
-##### Returns:
-
- `bool`, the result of every_n_step_end, if that was called this step,
- or `False` otherwise.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.legacy_seq2seq.embedding_tied_rnn_seq2seq.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.legacy_seq2seq.embedding_tied_rnn_seq2seq.md
deleted file mode 100644
index 2d628aa16a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.legacy_seq2seq.embedding_tied_rnn_seq2seq.md
+++ /dev/null
@@ -1,53 +0,0 @@
-### `tf.contrib.legacy_seq2seq.embedding_tied_rnn_seq2seq(encoder_inputs, decoder_inputs, cell, num_symbols, embedding_size, num_decoder_symbols=None, output_projection=None, feed_previous=False, dtype=None, scope=None)` {#embedding_tied_rnn_seq2seq}
-
-Embedding RNN sequence-to-sequence model with tied (shared) parameters.
-
-This model first embeds encoder_inputs by a newly created embedding (of shape
-[num_symbols x input_size]). Then it runs an RNN to encode embedded
-encoder_inputs into a state vector. Next, it embeds decoder_inputs using
-the same embedding. Then it runs RNN decoder, initialized with the last
-encoder state, on embedded decoder_inputs. The decoder output is over symbols
-from 0 to num_decoder_symbols - 1 if num_decoder_symbols is provided;
-otherwise it is over symbols from 0 to num_symbols - 1.
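-
-A hedged construction sketch (the sequence length, cell choice and sizes are
-assumptions for illustration):
-
-```python
-cell = tf.contrib.rnn.GRUCell(num_units=64)
-encoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(8)]
-decoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(8)]
-outputs, state = tf.contrib.legacy_seq2seq.embedding_tied_rnn_seq2seq(
-    encoder_inputs, decoder_inputs, cell,
-    num_symbols=10000, embedding_size=64)
-```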
-
-##### Args:
-
-
-* <b>`encoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`decoder_inputs`</b>: A list of 1D int32 Tensors of shape [batch_size].
-* <b>`cell`</b>: core_rnn_cell.RNNCell defining the cell function and size.
-* <b>`num_symbols`</b>: Integer; number of symbols for both encoder and decoder.
-* <b>`embedding_size`</b>: Integer, the length of the embedding vector for each symbol.
-* <b>`num_decoder_symbols`</b>: Integer; number of output symbols for decoder. If
- provided, the decoder output is over symbols 0 to num_decoder_symbols - 1.
- Otherwise, decoder output is over symbols 0 to num_symbols - 1. Note that
- this assumes that the vocabulary is set up such that the first
- num_decoder_symbols of num_symbols are part of decoding.
-* <b>`output_projection`</b>: None or a pair (W, B) of output projection weights and
-  biases; W has shape [output_size x num_symbols] and B has
-  shape [num_symbols]; if provided and feed_previous=True, each
-  fed previous output will first be multiplied by W, with B then added.
-* <b>`feed_previous`</b>: Boolean or scalar Boolean Tensor; if True, only the first
- of decoder_inputs will be used (the "GO" symbol), and all other decoder
- inputs will be taken from previous outputs (as in embedding_rnn_decoder).
- If False, decoder_inputs are used as given (the standard decoder case).
-* <b>`dtype`</b>: The dtype to use for the initial RNN states (default: tf.float32).
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "embedding_tied_rnn_seq2seq".
-
-##### Returns:
-
- A tuple of the form (outputs, state), where:
-
-* <b>`outputs`</b>: A list of the same length as decoder_inputs of 2D Tensors with
- shape [batch_size x output_symbols] containing the generated
- outputs where output_symbols = num_decoder_symbols if
- num_decoder_symbols is not None otherwise output_symbols = num_symbols.
-* <b>`state`</b>: The state of each decoder cell at the final time-step.
- It is a 2D Tensor of shape [batch_size x cell.state_size].
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When output_projection has the wrong shape.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.linalg.LinearOperator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.linalg.LinearOperator.md
deleted file mode 100644
index 1d9ba477a7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.linalg.LinearOperator.md
+++ /dev/null
@@ -1,553 +0,0 @@
-Base class defining a [batch of] linear operator[s].
-
-Subclasses of `LinearOperator` provide access to common methods on a
-(batch) matrix, without the need to materialize the matrix. This allows:
-
-* Matrix-free computations
-* Operators that take advantage of special structure, while providing a
-  consistent API to users.
-
-#### Subclassing
-
-To enable a public method, subclasses should implement the leading-underscore
-version of the method. The argument signature should be identical except for
-the omission of `name="..."`. For example, to enable
-`apply(x, adjoint=False, name="apply")` a subclass should implement
-`_apply(x, adjoint=False)`.
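-
-A minimal subclass sketch under this convention (`MyDiagOperator` is an
-assumption, not a class in this module; a complete subclass would also
-implement `_shape` and related methods):
-
-```python
-class MyDiagOperator(LinearOperator):
-  """Acts like a diagonal matrix, without materializing it."""
-
-  def __init__(self, diag):
-    self._diag = diag
-    super(MyDiagOperator, self).__init__(
-        dtype=diag.dtype, is_square=True, name='MyDiagOperator')
-
-  def _apply(self, x, adjoint=False):
-    # For a real diagonal, the adjoint is the operator itself.
-    return self._diag[..., None] * x
-```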
-
-#### Performance contract
-
-Subclasses should implement a method only if it can be done with a reasonable
-performance increase over generic dense operations, either in time, parallel
-scalability, or memory usage. For example, if the determinant can only be
-computed using `tf.matrix_determinant(self.to_dense())`, then determinants
-should not be implemented.
-
-Class docstrings should contain an explanation of computational complexity.
-Since this is a high-performance library, attention should be paid to detail,
-and explanations can include constants as well as Big-O notation.
-
-#### Shape compatibility
-
-`LinearOperator` subclasses should operate on a [batch] matrix with
-compatible shape. Class docstrings should define what is meant by compatible
-shape. Some subclasses may not support batching.
-
-An example is:
-
-`x` is a batch matrix with compatible shape for `apply` if
-
-```
-operator.shape = [B1,...,Bb] + [M, N], b >= 0,
-x.shape = [B1,...,Bb] + [N, R]
-```
-
-`rhs` is a batch matrix with compatible shape for `solve` if
-
-```
-operator.shape = [B1,...,Bb] + [M, N], b >= 0,
-rhs.shape = [B1,...,Bb] + [M, R]
-```
-
-#### Example docstring for subclasses.
-
-This operator acts like a (batch) matrix `A` with shape
-`[B1,...,Bb, M, N]` for some `b >= 0`. The first `b` indices index a
-batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
-an `m x n` matrix. Again, this matrix `A` may not be materialized, but for
-purposes of identifying and working with compatible arguments the shape is
-relevant.
-
-Examples:
-
-```python
-some_tensor = ...  # shape [2, 4, 4]
-operator = MyLinOp(some_tensor)
-
-operator.shape()
-==> [2, 4, 4]
-
-operator.log_determinant()
-==> Shape [2] Tensor
-
-x = ... Shape [2, 4, 5] Tensor
-
-operator.apply(x)
-==> Shape [2, 4, 5] Tensor
-```
-
-#### Shape compatibility
-
-This operator acts on batch matrices with compatible shape.
-FILL IN WHAT IS MEANT BY COMPATIBLE SHAPE
-
-#### Performance
-
-FILL THIS IN
-
-#### Matrix property hints
-
-This `LinearOperator` is initialized with boolean flags of the form `is_X`,
-for `X = non_singular, self_adjoint, positive_definite, square`.
-These have the following meaning:
-* If `is_X == True`, callers should expect the operator to have the
- property `X`. This is a promise that should be fulfilled, but is *not* a
- runtime assert. For example, finite floating point precision may result
- in these promises being violated.
-* If `is_X == False`, callers should expect the operator to not have `X`.
-* If `is_X == None` (the default), callers should have no expectation either
- way.
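-
-A sketch of passing these hints at construction time (using
-`LinearOperatorDiag` from this module; the promises are the caller's
-responsibility):
-
-```python
-operator = LinearOperatorDiag(
-    [1., 2.],
-    is_non_singular=True,       # promise: no zero diagonal entries
-    is_self_adjoint=True,       # a real diagonal is self-adjoint
-    is_positive_definite=True)  # promise: all diagonal entries positive
-```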
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.__init__(dtype, graph_parents=None, is_non_singular=None, is_self_adjoint=None, is_positive_definite=None, is_square=None, name=None)` {#LinearOperator.__init__}
-
-Initialize the `LinearOperator`.
-
-**This is a private method for subclass use.**
-**Subclasses should copy-paste this `__init__` documentation.**
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of this `LinearOperator`. Arguments to `apply` and
-  `solve` will have to be this type.
-* <b>`graph_parents`</b>: Python list of graph prerequisites of this `LinearOperator`.
-  Typically tensors that are passed during initialization.
-* <b>`is_non_singular`</b>: Expect that this operator is non-singular.
-* <b>`is_self_adjoint`</b>: Expect that this operator is equal to its hermitian
- transpose. If `dtype` is real, this is equivalent to being symmetric.
-* <b>`is_positive_definite`</b>: Expect that this operator is positive definite,
-  meaning the real part of all eigenvalues is positive. We do not require
-  the operator to be self-adjoint to be positive-definite. See
-  https://en.wikipedia.org/wiki/Positive-definite_matrix#Extension_for_non_symmetric_matrices
-* <b>`is_square`</b>: Expect that this operator acts like square [batch] matrices.
-* <b>`name`</b>: A name for this `LinearOperator`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If any member of graph_parents is `None` or not a `Tensor`.
-* <b>`ValueError`</b>: If hints are set incorrectly.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.add_to_tensor(x, name='add_to_tensor')` {#LinearOperator.add_to_tensor}
-
-Add matrix represented by this operator to `x`. Equivalent to `A + x`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with same `dtype` and shape broadcastable to `self.shape`.
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- A `Tensor` with broadcast shape and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.apply(x, adjoint=False, name='apply')` {#LinearOperator.apply}
-
-Transform `x` with left multiplication: `x --> Ax`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` with compatible shape and same `dtype` as `self`.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, left multiply by the adjoint.
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- A `Tensor` with shape `[..., M, R]` and same `dtype` as `self`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.assert_non_singular(name='assert_non_singular')` {#LinearOperator.assert_non_singular}
-
-Returns an `Op` that asserts this operator is non-singular.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.assert_positive_definite(name='assert_positive_definite')` {#LinearOperator.assert_positive_definite}
-
-Returns an `Op` that asserts this operator is positive definite.
-
-Here, positive definite means the real part of all eigenvalues is positive.
-We do not require the operator to be self-adjoint.
-
-##### Args:
-
-
-* <b>`name`</b>: A name to give this `Op`.
-
-##### Returns:
-
- An `Op` that asserts this operator is positive definite.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.assert_self_adjoint(name='assert_self_adjoint')` {#LinearOperator.assert_self_adjoint}
-
-Returns an `Op` that asserts this operator is self-adjoint.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.batch_shape` {#LinearOperator.batch_shape}
-
-`TensorShape` of batch dimensions of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb])`, equivalent to `A.get_shape()[:-2]`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.batch_shape_tensor(name='batch_shape_tensor')` {#LinearOperator.batch_shape_tensor}
-
-Shape of batch dimensions of this operator, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb]`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.determinant(name='det')` {#LinearOperator.determinant}
-
-Determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.diag_part(name='diag_part')` {#LinearOperator.diag_part}
-
-Efficiently get the [batch] diagonal part of this operator.
-
-If this operator has shape `[B1,...,Bb, M, N]`, this returns a
-`Tensor` `diagonal`, of shape `[B1,...,Bb, min(M, N)]`, where
-`diagonal[b1,...,bb, i] = self.to_dense()[b1,...,bb, i, i]`.
-
-```python
-my_operator = LinearOperatorDiag([1., 2.])
-
-# Efficiently get the diagonal
-my_operator.diag_part()
-==> [1., 2.]
-
-# Equivalent, but inefficient method
-tf.matrix_diag_part(my_operator.to_dense())
-==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
-
-* <b>`diag_part`</b>: A `Tensor` of same `dtype` as self.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.domain_dimension` {#LinearOperator.domain_dimension}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.domain_dimension_tensor(name='domain_dimension_tensor')` {#LinearOperator.domain_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the domain of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `N`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.dtype` {#LinearOperator.dtype}
-
-The `DType` of `Tensor`s handled by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.graph_parents` {#LinearOperator.graph_parents}
-
-List of graph dependencies of this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.is_non_singular` {#LinearOperator.is_non_singular}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.is_positive_definite` {#LinearOperator.is_positive_definite}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.is_self_adjoint` {#LinearOperator.is_self_adjoint}
-
-
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.is_square` {#LinearOperator.is_square}
-
-Return `True`/`False` depending on whether this operator is square.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.log_abs_determinant(name='log_abs_det')` {#LinearOperator.log_abs_determinant}
-
-Log absolute value of determinant for every batch member.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `Tensor` with shape `self.batch_shape` and same `dtype` as `self`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_square` is `False`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.name` {#LinearOperator.name}
-
-Name prepended to all ops created by this `LinearOperator`.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.range_dimension` {#LinearOperator.range_dimension}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Returns:
-
- `Dimension` object.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.range_dimension_tensor(name='range_dimension_tensor')` {#LinearOperator.range_dimension_tensor}
-
-Dimension (in the sense of vector spaces) of the range of this operator.
-
-Determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `M`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.shape` {#LinearOperator.shape}
-
-`TensorShape` of this `LinearOperator`.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns
-`TensorShape([B1,...,Bb, M, N])`, equivalent to `A.get_shape()`.
-
-##### Returns:
-
- `TensorShape`, statically determined, may be undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.shape_tensor(name='shape_tensor')` {#LinearOperator.shape_tensor}
-
-Shape of this `LinearOperator`, determined at runtime.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns a `Tensor` holding
-`[B1,...,Bb, M, N]`, equivalent to `tf.shape(A)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.solve(rhs, adjoint=False, name='solve')` {#LinearOperator.solve}
-
-Solve `R` (batch) systems of equations exactly: `A X = rhs`.
-
-Examples:
-
-```python
-# Create an operator acting like a 10 x 2 x 2 matrix.
-operator = LinearOperator(...)
-operator.shape # = 10 x 2 x 2
-
-# Solve one linear system (R = 1) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 1
-X = operator.solve(RHS) # shape 10 x 2 x 1
-
-# Solve five linear systems (R = 5) for every member of the length 10 batch.
-RHS = ... # shape 10 x 2 x 5
-X = operator.solve(RHS)
-X[3, :, 2] # Solution to the linear system A[3, :, :] X = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`rhs`</b>: `Tensor` with same `dtype` as this operator and compatible shape.
- See class docstring for definition of compatibility.
-* <b>`adjoint`</b>: Python `bool`. If `True`, solve the system involving the adjoint
- of this `LinearOperator`.
-* <b>`name`</b>: A name scope to use for ops added by this method.
-
-##### Returns:
-
-  `Tensor` with shape `[..., N, R]` and same `dtype` as `rhs`.
-
-##### Raises:
-
-
-* <b>`NotImplementedError`</b>: If `self.is_non_singular` or `is_square` is False.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.tensor_rank` {#LinearOperator.tensor_rank}
-
-Rank (in the sense of tensors) of the matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- Python integer, or None if the tensor rank is undefined.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.tensor_rank_tensor(name='tensor_rank_tensor')` {#LinearOperator.tensor_rank_tensor}
-
-Rank (in the sense of tensors) of the matrix corresponding to this operator.
-
-If this operator acts like the batch matrix `A` with
-`A.shape = [B1,...,Bb, M, N]`, then this returns `b + 2`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for this `Op`.
-
-##### Returns:
-
- `int32` `Tensor`, determined at runtime.
-
-
-- - -
-
-#### `tf.contrib.linalg.LinearOperator.to_dense(name='to_dense')` {#LinearOperator.to_dense}
-
-Return a dense (batch) matrix representing this operator.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.losses.get_total_loss.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.losses.get_total_loss.md
deleted file mode 100644
index 533121794f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.losses.get_total_loss.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.contrib.losses.get_total_loss(*args, **kwargs)` {#get_total_loss}
-
-Returns a tensor whose value represents the total loss. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.get_total_loss instead.
-
-Notice that the function adds the given losses to the regularization losses.
-
-##### Args:
-
-
-* <b>`add_regularization_losses`</b>: A boolean indicating whether or not to use the
- regularization losses in the sum.
-* <b>`name`</b>: The name of the returned tensor.
-
-##### Returns:
-
- A `Tensor` whose value represents the total loss.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `losses` is not iterable.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.losses.mean_pairwise_squared_error.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.losses.mean_pairwise_squared_error.md
deleted file mode 100644
index 6d42afbbca..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.losses.mean_pairwise_squared_error.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.contrib.losses.mean_pairwise_squared_error(*args, **kwargs)` {#mean_pairwise_squared_error}
-
-Adds a pairwise-errors-squared loss to the training procedure. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2016-12-30.
-Instructions for updating:
-Use tf.losses.mean_pairwise_squared_error instead. Note that the order of the predictions and labels arguments was changed.
-
-Unlike `mean_squared_error`, which is a measure of the differences between
-corresponding elements of `predictions` and `labels`,
-`mean_pairwise_squared_error` is a measure of the differences between pairs of
-corresponding elements of `predictions` and `labels`.
-
-For example, if `labels`=[a, b, c] and `predictions`=[x, y, z], then
-three pairs of differences are summed to compute the loss:
-  loss = [ ((a-b) - (x-y)).^2 + ((a-c) - (x-z)).^2 + ((b-c) - (y-z)).^2 ] / 3
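-
-A hand-computed sketch of this formula (plain NumPy; the values are chosen for
-illustration):
-
-```python
-import numpy as np
-
-labels = np.array([1., 2., 4.])       # [a, b, c]
-predictions = np.array([1., 3., 3.])  # [x, y, z]
-pairs = [(0, 1), (0, 2), (1, 2)]
-loss = np.mean([((labels[i] - labels[j])
-                 - (predictions[i] - predictions[j])) ** 2
-                for i, j in pairs])
-# (1 + 1 + 4) / 3 == 2.0
-```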
-
-Note that since the inputs are of size [batch_size, d0, ... dN], the
-corresponding pairs are computed within each batch sample but not across
-samples within a batch. For example, if `predictions` represents a batch of
-16 grayscale images of dimension [batch_size, 100, 200], then the set of pairs
-is drawn from each image, but not across images.
-
-`weights` acts as a coefficient for the loss. If a scalar is provided, then
-the loss is simply scaled by the given value. If `weights` is a tensor of size
-[batch_size], then the total loss for each sample of the batch is rescaled
-by the corresponding element in the `weights` vector.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted outputs, a tensor of size [batch_size, d0, .. dN]
- where N+1 is the total number of dimensions in `predictions`.
-* <b>`labels`</b>: The ground truth output tensor, whose shape must match the shape of
- the `predictions` tensor.
-* <b>`weights`</b>: Coefficients for the loss: a scalar, a tensor of shape
-  [batch_size], or a tensor whose shape matches `predictions`.
-* <b>`scope`</b>: The scope for the operations performed in computing the loss.
-
-##### Returns:
-
- A scalar `Tensor` representing the loss value.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `predictions` doesn't match that of `labels` or
- if the shape of `weights` is invalid.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.aggregate_metric_map.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.aggregate_metric_map.md
deleted file mode 100644
index fd4d3733c6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.aggregate_metric_map.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.contrib.metrics.aggregate_metric_map(names_to_tuples)` {#aggregate_metric_map}
-
-Aggregates a dictionary of metric names to (value, update op) tuples into two dictionaries.
-
-This function is useful for pairing metric names with their associated value
-and update ops when the list of metrics is long. For example:
-
-```python
-  metrics_to_values, metrics_to_updates = slim.metrics.aggregate_metric_map({
-      'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(
-          predictions, labels, weights),
-      'Mean Relative Error': slim.metrics.streaming_mean_relative_error(
-          predictions, labels, labels, weights),
-      'RMSE Linear': slim.metrics.streaming_root_mean_squared_error(
-          predictions, labels, weights),
-      'RMSE Log': slim.metrics.streaming_root_mean_squared_error(
-          predictions, labels, weights),
-  })
-```
-
-##### Args:
-
-
-* <b>`names_to_tuples`</b>: a map of metric names to tuples, each of which contains
-  the pair of (value_tensor, update_op) from a streaming metric.
-
-##### Returns:
-
- A dictionary from metric names to value ops and a dictionary from metric
- names to update ops.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.aggregate_metrics.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.aggregate_metrics.md
deleted file mode 100644
index fc3844131c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.aggregate_metrics.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.contrib.metrics.aggregate_metrics(*value_update_tuples)` {#aggregate_metrics}
-
-Aggregates the metric value tensors and update ops into two lists.
-
-##### Args:
-
-
-* <b>`*value_update_tuples`</b>: a variable number of tuples, each of which contain the
- pair of (value_tensor, update_op) from a streaming metric.
-
-##### Returns:
-
- A list of value `Tensor` objects and a list of update ops.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `value_update_tuples` is empty.
-
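-A minimal usage sketch (the tensors `values`, `predictions` and `labels`
-are illustrative):
-
-```python
-# Each streaming metric returns a (value_tensor, update_op) tuple.
-value_ops, update_ops = tf.contrib.metrics.aggregate_metrics(
-    tf.contrib.metrics.streaming_mean(values),
-    tf.contrib.metrics.streaming_accuracy(predictions, labels))
-```
-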
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_false_negatives.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_false_negatives.md
deleted file mode 100644
index 1464305257..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.metrics.streaming_false_negatives.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.contrib.metrics.streaming_false_negatives(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)` {#streaming_false_negatives}
-
-Computes the total number of false negatives.
-
-If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
-
-##### Args:
-
-
-* <b>`predictions`</b>: The predicted values, a `Tensor` of arbitrary dimensions. Will
- be cast to `bool`.
-* <b>`labels`</b>: The ground truth values, a `Tensor` whose dimensions must match
- `predictions`. Will be cast to `bool`.
-* <b>`weights`</b>: Optional `Tensor` whose rank is either 0, or the same rank as
- `labels`, and must be broadcastable to `labels` (i.e., all dimensions
- must be either `1`, or the same as the corresponding `labels`
- dimension).
-* <b>`metrics_collections`</b>: An optional list of collections that the metric
- value variable should be added to.
-* <b>`updates_collections`</b>: An optional list of collections that the metric update
- ops should be added to.
-* <b>`name`</b>: An optional variable_scope name.
-
-##### Returns:
-
-
-* <b>`value_tensor`</b>: A `Tensor` representing the current value of the metric.
-* <b>`update_op`</b>: An operation that accumulates the error from a batch of data.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `weights` is not `None` and its shape doesn't match `labels`,
- or if either `metrics_collections` or `updates_collections` are not a list
- or tuple.
-
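-A hedged usage sketch (`sess`, `predictions`, `labels` and `num_batches`
-are illustrative):
-
-```python
-fn_value, fn_update = tf.contrib.metrics.streaming_false_negatives(
-    predictions, labels)
-sess.run(tf.local_variables_initializer())  # streaming metrics use local vars
-for _ in range(num_batches):
-  sess.run(fn_update)
-print(sess.run(fn_value))  # total false negatives accumulated so far
-```
-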
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.rnn.CompiledWrapper.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.rnn.CompiledWrapper.md
deleted file mode 100644
index dd655070ca..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.rnn.CompiledWrapper.md
+++ /dev/null
@@ -1,58 +0,0 @@
-Wraps step execution in an XLA JIT scope.
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.__call__(inputs, state, scope=None)` {#CompiledWrapper.__call__}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.__init__(cell, compile_stateful=False)` {#CompiledWrapper.__init__}
-
-Create CompiledWrapper cell.
-
-##### Args:
-
-
-* <b>`cell`</b>: Instance of `RNNCell`.
-* <b>`compile_stateful`</b>: Whether to compile stateful ops like initializers
- and random number generators (default: False).
-
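-A minimal sketch of wrapping a cell (the `inputs` tensor is illustrative):
-
-```python
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=64)
-compiled_cell = tf.contrib.rnn.CompiledWrapper(cell)
-# Each cell step now executes inside an XLA JIT scope.
-outputs, state = tf.nn.dynamic_rnn(compiled_cell, inputs, dtype=tf.float32)
-```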
-
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.output_size` {#CompiledWrapper.output_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.state_size` {#CompiledWrapper.state_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.rnn.CompiledWrapper.zero_state(batch_size, dtype)` {#CompiledWrapper.zero_state}
-
-Return zero-filled state tensor(s).
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int, float, or unit Tensor representing the batch size.
-* <b>`dtype`</b>: the data type to use for the state.
-
-##### Returns:
-
-  If `state_size` is an int or TensorShape, then the return value is a
-  `2-D` tensor of shape `[batch_size x state_size]` filled with zeros.
-
-  If `state_size` is a nested list or tuple, then the return value is
-  a nested list or tuple (of the same structure) of `2-D` tensors with
-  the shapes `[batch_size x s]` for each s in `state_size`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.training.SequenceQueueingStateSaver.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.training.SequenceQueueingStateSaver.md
deleted file mode 100644
index a805e936a3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.contrib.training.SequenceQueueingStateSaver.md
+++ /dev/null
@@ -1,270 +0,0 @@
-SequenceQueueingStateSaver provides access to stateful values from input.
-
-This class is meant to be used instead of, e.g., a `Queue`, for splitting
-variable-length sequence inputs into segments of sequences with fixed length
-and batching them into mini-batches. It maintains contexts and state for a
-sequence across the segments. It can be used in conjunction with a
-`QueueRunner` (see the example below).
-
-The `SequenceQueueingStateSaver` (SQSS) accepts one example at a time via the
-inputs `input_length`, `input_key`, `input_sequences` (a dict),
-`input_context` (a dict), and `initial_states` (a dict).
-The sequences, values in `input_sequences`, may have variable first dimension
-(the `padded_length`), though this dimension must always be a multiple of
-`num_unroll`. All other dimensions must be fixed and accessible via
-`get_shape` calls. The length prior to padding can be recorded in
-`input_length`. The context values in `input_context` must all have fixed and
-well defined dimensions. The initial state values must all have fixed and
-well defined dimensions.
-
-The SQSS splits the sequences of an input example into segments of length
-`num_unroll`. Across examples, minibatches of size `batch_size` are formed.
-These minibatches contain a segment of the sequences, copy the context values,
-and maintain state, length, and key information of the original input
-examples. In the first segment of an example the state is still the initial
-state. It can then be updated, and the updated state values are accessible
-in subsequent segments of the same example. After each segment,
-`batch.save_state()` must be called; this is done by the `state_saving_rnn`.
-Without this call, the dequeue op associated with the SQSS will not run.
-Internally, SQSS has a queue for the input examples. Its `capacity` is
-configurable. If set smaller than `batch_size` then the dequeue op will block
-indefinitely. A small multiple of `batch_size` is a good rule of thumb to
-prevent that queue from becoming a bottleneck and slowing down training.
-If set too large (note that it defaults to unbounded), memory consumption
-goes up. Moreover, when iterating over the same input examples multiple
-times reusing the same `key`, the `capacity` must be smaller than the
-number of examples.
-
-The prefetcher, which reads one unrolled, variable-length input sequence at
-a time, is accessible via `prefetch_op`. The underlying `Barrier` object
-is accessible via `barrier`. Processed minibatches, as well as
-state read and write capabilities are accessible via `next_batch`.
-Specifically, `next_batch` provides access to all of the minibatched
-data, including the following, see `NextQueuedSequenceBatch` for details:
-
-* `total_length`, `length`, `insertion_index`, `key`, `next_key`,
-* `sequence` (each minibatch entry's time segment index),
-* `sequence_count` (the total time segment count for each minibatch entry),
-* `context` (a dict of the copied minibatched context values),
-* `sequences` (a dict of the split minibatched variable-length sequences),
-* `state` (to access the states of the current segments of these entries)
-* `save_state` (to save the states for the next segments of these entries)
-
-Example usage:
-
-```python
-batch_size = 32
-num_unroll = 20
-lstm_size = 8
-cell = tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size)
-initial_state_values = tf.zeros(cell.state_size, dtype=tf.float32)
-
-raw_data = get_single_input_from_input_reader()
-length, key, sequences, context = my_parser(raw_data)
-assert "input" in sequences.keys()
-assert "label" in context.keys()
-initial_states = {"lstm_state": initial_state_values}
-
-stateful_reader = tf.contrib.training.SequenceQueueingStateSaver(
-    batch_size, num_unroll,
-    input_length=length, input_key=key, input_sequences=sequences,
-    input_context=context, initial_states=initial_states,
-    capacity=batch_size*100)
-
-batch = stateful_reader.next_batch
-inputs = batch.sequences["input"]
-context_label = batch.context["label"]
-
-inputs_by_time = tf.split(value=inputs, num_or_size_splits=num_unroll, axis=1)
-assert len(inputs_by_time) == num_unroll
-
-lstm_output, _ = tf.contrib.rnn.static_state_saving_rnn(
- cell,
- inputs_by_time,
- state_saver=batch,
- state_name="lstm_state")
-
-# Start a prefetcher in the background
-sess = tf.Session()
-num_threads = 3
-queue_runner = tf.train.QueueRunner(
-    stateful_reader, [stateful_reader.prefetch_op] * num_threads)
-tf.train.add_queue_runner(queue_runner)
-tf.train.start_queue_runners(sess=sess)
-
-while True:
-  # Step through batches, perform training or inference...
-  sess.run([lstm_output])
-```
-
-**Note**: Usually the barrier is given to a QueueRunner as in the
-  example above. The QueueRunner will close the barrier if the prefetch_op
-  receives an `OutOfRangeError` from upstream input queues (i.e., reaches
-  the end of the input). If the barrier is closed, no further new examples
- are added to the SQSS. The underlying barrier might, however, still
- contain further unroll-steps of examples that have not undergone all
- iterations. To gracefully finish all examples, the flag
- `allow_small_batch` must be set to true, which causes the SQSS to issue
- progressively smaller mini-batches with the remaining examples.
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.__init__(batch_size, num_unroll, input_length, input_key, input_sequences, input_context, initial_states, capacity=None, allow_small_batch=False, name=None)` {#SequenceQueueingStateSaver.__init__}
-
-Creates the SequenceQueueingStateSaver.
-
-##### Args:
-
-
-* <b>`batch_size`</b>: int or int32 scalar `Tensor`, how large minibatches should
- be when accessing the `state()` method and `context`, `sequences`, etc,
- properties.
-* <b>`num_unroll`</b>: Python integer, how many time steps to unroll at a time.
- The input sequences of length `k` are then split into `k / num_unroll`
- many segments.
-* <b>`input_length`</b>: An int32 scalar `Tensor`, the length of the sequence prior
- to padding. This value may be at most `padded_length` for any given
- input (see below for the definition of `padded_length`).
- Batched and total lengths of the current iteration are made accessible
- via the `length` and `total_length` properties. The shape of
- input_length (scalar) must be fully specified.
-* <b>`input_key`</b>: A string scalar `Tensor`, the **unique** key for the given
- input. This is used to keep track of the split minibatch elements
- of this input. Batched keys of the current iteration are made
- accessible via the `key` property. The shape of `input_key` (scalar)
- must be fully specified.
-* <b>`input_sequences`</b>: A dict mapping string names to `Tensor` values. The
- values must all have matching first dimension, called `padded_length`.
- The `SequenceQueueingStateSaver` will split these tensors along
- this first dimension into minibatch elements of dimension
- `num_unroll`. Batched and segmented sequences of the current iteration
- are made accessible via the `sequences` property.
-
- **Note**: `padded_length` may be dynamic, and may vary from input
- to input, but must always be a multiple of `num_unroll`. The remainder
- of the shape (other than the first dimension) must be fully specified.
-
-* <b>`input_context`</b>: A dict mapping string names to `Tensor` values. The values
- are treated as "global" across all time splits of the given input,
- and will be copied across for all minibatch elements accordingly.
- Batched and copied context of the current iteration are made
- accessible via the `context` property.
-
- **Note**: All input_context values must have fully defined shapes.
-
-* <b>`initial_states`</b>: A dict mapping string state names to multi-dimensional
- values (e.g. constants or tensors). This input defines the set of
- states that will be kept track of during computing iterations, and
- which can be accessed via the `state` and `save_state` methods.
-
- **Note**: All initial_state values must have fully defined shapes.
-
-* <b>`capacity`</b>: The max capacity of the SQSS in number of examples. Needs to be
- at least `batch_size`. Defaults to unbounded.
-* <b>`allow_small_batch`</b>: If true, the SQSS will return smaller batches when
- there aren't enough input examples to fill a whole batch and the end of
- the input has been reached (i.e., the underlying barrier has been
- closed).
-* <b>`name`</b>: An op name string (optional).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if any of the inputs is not an expected type.
-* <b>`ValueError`</b>: if any of the input values is inconsistent, e.g. if
- not enough shape information is available from inputs to build
- the state saver.
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.barrier` {#SequenceQueueingStateSaver.barrier}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.batch_size` {#SequenceQueueingStateSaver.batch_size}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.close(cancel_pending_enqueues=False, name=None)` {#SequenceQueueingStateSaver.close}
-
-Closes the barrier and the FIFOQueue.
-
-This operation signals that no more segments of new sequences will be
-enqueued. New segments of already inserted sequences may still be enqueued
-and dequeued if there is a sufficient number filling a batch or
-allow_small_batch is true. Otherwise dequeue operations will fail
-immediately.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False`. If `True`, all pending enqueues to the underlying queues will
- be cancelled, and completing already started sequences is not possible.
-* <b>`name`</b>: Optional name for the op.
-
-##### Returns:
-
- The operation that closes the barrier and the FIFOQueue.
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.name` {#SequenceQueueingStateSaver.name}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.next_batch` {#SequenceQueueingStateSaver.next_batch}
-
-The `NextQueuedSequenceBatch` providing access to batched output data.
-
-Also provides access to the `state` and `save_state` methods.
-The first time this gets called, it additionally prepares barrier reads
-and creates `NextQueuedSequenceBatch` / next_batch objects. Subsequent
-calls simply return the previously created `next_batch`.
-
-In order to access data in `next_batch` without blocking, the `prefetch_op`
-must have been run at least `batch_size` times (ideally in a separate
-thread, or launched via a `QueueRunner`). After processing a segment in
-`next_batch()`, `batch.save_state()` must be called which is done by the
-state_saving_rnn. Without this call, the dequeue op associated with the SQSS
-will not run.
-
-##### Returns:
-
- A cached `NextQueuedSequenceBatch` instance.
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.num_unroll` {#SequenceQueueingStateSaver.num_unroll}
-
-
-
-
-- - -
-
-#### `tf.contrib.training.SequenceQueueingStateSaver.prefetch_op` {#SequenceQueueingStateSaver.prefetch_op}
-
-The op used to prefetch new data into the state saver.
-
-Running it once enqueues one new input example into the state saver.
-The first time this gets called, it additionally creates the prefetch_op.
-Subsequent calls simply return the previously created `prefetch_op`.
-
-It should be run in a separate thread via e.g. a `QueueRunner`.
-
-##### Returns:
-
- An `Operation` that performs prefetching.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md
deleted file mode 100644
index 877325fe0b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.InvalidArgumentError.md
+++ /dev/null
@@ -1,17 +0,0 @@
-Raised when an operation receives an invalid argument.
-
-This may occur, for example, if an operation receives an input
-tensor that has an invalid value or shape. For example, the
-[`tf.matmul()`](../../api_docs/python/math_ops.md#matmul) op will raise this
-error if it receives an input that is not a matrix, and the
-[`tf.reshape()`](../../api_docs/python/array_ops.md#reshape) op will raise
-this error if the new shape does not match the number of elements in the input
-tensor.
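-
-A minimal sketch that triggers this error at run time (the placeholder and
-feed values are illustrative):
-
-```python
-x = tf.placeholder(tf.float32)
-y = tf.reshape(x, [3, 3])  # requests 9 elements
-with tf.Session() as sess:
-  try:
-    sess.run(y, feed_dict={x: list(range(10))})  # feeds 10 elements
-  except tf.errors.InvalidArgumentError as e:
-    print(e.message)
-```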
-
-- - -
-
-#### `tf.errors.InvalidArgumentError.__init__(node_def, op, message)` {#InvalidArgumentError.__init__}
-
-Creates an `InvalidArgumentError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnknownError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnknownError.md
deleted file mode 100644
index 3e18ec866b..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.errors.UnknownError.md
+++ /dev/null
@@ -1,15 +0,0 @@
-Unknown error.
-
-An example of where this error may be returned is if a Status value
-received from another address space belongs to an error-space that
-is not known to this address space. Also errors raised by APIs that
-do not return enough error information may be converted to this
-error.
-
-- - -
-
-#### `tf.errors.UnknownError.__init__(node_def, op, message, error_code=2)` {#UnknownError.__init__}
-
-Creates an `UnknownError`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.expm1.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.expm1.md
deleted file mode 100644
index a1867f0a30..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.expm1.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.expm1(x, name=None)` {#expm1}
-
-Computes `exp(x) - 1` element-wise.
-
-I.e., \\(y = (\exp x) - 1\\).
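-
-For small `x`, `expm1(x)` is typically more accurate than computing
-`tf.exp(x) - 1` directly. A minimal sketch:
-
-```python
-x = tf.constant([1e-8, 0.0, 1.0])
-y = tf.expm1(x)  # ~[1e-8, 0.0, e - 1]
-```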
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.eye.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.eye.md
deleted file mode 100644
index b71edf9b96..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.eye.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.eye(num_rows, num_columns=None, batch_shape=None, dtype=tf.float32, name=None)` {#eye}
-
-Construct an identity matrix, or a batch of matrices.
-
-```python
-# Construct one identity matrix.
-tf.eye(2)
-==> [[1., 0.],
- [0., 1.]]
-
-# Construct a batch of 3 identity matrices, each 2 x 2.
-# batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.
-batch_identity = tf.eye(2, batch_shape=[3])
-
-# Construct one 2 x 3 "identity" matrix
-tf.eye(2, num_columns=3)
-==> [[ 1., 0., 0.],
- [ 0., 1., 0.]]
-```
-
-##### Args:
-
-
-* <b>`num_rows`</b>: Non-negative `int32` scalar `Tensor` giving the number of rows
- in each batch matrix.
-* <b>`num_columns`</b>: Optional non-negative `int32` scalar `Tensor` giving the number
- of columns in each batch matrix. Defaults to `num_rows`.
-* <b>`batch_shape`</b>: `int32` `Tensor`. If provided, returned `Tensor` will have
- leading batch dimensions of this shape.
-* <b>`dtype`</b>: The type of an element in the resulting `Tensor`
-* <b>`name`</b>: A name for this `Op`. Defaults to "eye".
-
-##### Returns:
-
- A `Tensor` of shape `batch_shape + [num_rows, num_columns]`
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.fill.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.fill.md
deleted file mode 100644
index 76de3e2d4d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.fill.md
+++ /dev/null
@@ -1,31 +0,0 @@
-### `tf.fill(dims, value, name=None)` {#fill}
-
-Creates a tensor filled with a scalar value.
-
-This operation creates a tensor of shape `dims` and fills it with `value`.
-
-For example:
-
-```prettyprint
-# Output tensor has shape [2, 3].
-fill([2, 3], 9) ==> [[9, 9, 9],
-                     [9, 9, 9]]
-```
-
-##### Args:
-
-
-* <b>`dims`</b>: A `Tensor` of type `int32`.
- 1-D. Represents the shape of the output tensor.
-* <b>`value`</b>: A `Tensor`. 0-D (scalar). Value to fill the returned tensor.
-
- @compatibility(numpy)
- Equivalent to np.full
- @end_compatibility
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `value`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.foldr.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.foldr.md
deleted file mode 100644
index ae3471659f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.foldr.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.foldr(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)` {#foldr}
-
-foldr on the list of tensors unpacked from `elems` on dimension 0.
-
-This foldr operator repeatedly applies the callable `fn` to a sequence
-of elements from last to first. The elements are made of the tensors
-unpacked from `elems`. The callable fn takes two tensors as arguments.
-The first argument is the accumulated value computed from the preceding
-invocation of fn. If `initializer` is None, `elems` must contain at least
-one element, and its first element is used as the initializer.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `fn(initializer, values[0]).shape`.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed.
-* <b>`elems`</b>: A tensor that is unpacked into a sequence of tensors to apply `fn`.
-* <b>`initializer`</b>: (optional) The initial value for the accumulator.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables support for back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor resulting from applying `fn` consecutively to the list of tensors
- unpacked from `elems`, from last to first.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable.
-
-##### Example:
-
- ```python
-  elems = tf.constant([1, 2, 3, 4, 5, 6])
-  total = tf.foldr(lambda a, x: a + x, elems)
-  # total == 21
- ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.gather.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.gather.md
deleted file mode 100644
index 3c6be5988c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.gather.md
+++ /dev/null
@@ -1,37 +0,0 @@
-### `tf.gather(params, indices, validate_indices=None, name=None)` {#gather}
-
-Gather slices from `params` according to `indices`.
-
-`indices` must be an integer tensor of any dimension (usually 0-D or 1-D).
-Produces an output tensor with shape `indices.shape + params.shape[1:]` where:
-
-```python
- # Scalar indices
- output[:, ..., :] = params[indices, :, ... :]
-
- # Vector indices
- output[i, :, ..., :] = params[indices[i], :, ... :]
-
- # Higher rank indices
- output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]
-```
-
-If `indices` is a permutation and `len(indices) == params.shape[0]` then
-this operation will permute `params` accordingly.
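-
-A minimal example (values are illustrative):
-
-```python
-params = tf.constant([[1, 2], [3, 4], [5, 6]])
-out = tf.gather(params, [2, 0])  # ==> [[5, 6], [1, 2]]
-```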
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/Gather.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`params`</b>: A `Tensor`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
-* <b>`validate_indices`</b>: An optional `bool`. Defaults to `True`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `params`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_default_graph.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_default_graph.md
deleted file mode 100644
index bd734d1b98..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_default_graph.md
+++ /dev/null
@@ -1,17 +0,0 @@
-### `tf.get_default_graph()` {#get_default_graph}
-
-Returns the default graph for the current thread.
-
-The returned graph will be the innermost graph on which a
-`Graph.as_default()` context has been entered, or a global default
-graph if none has been explicitly created.
-
-NOTE: The default graph is a property of the current thread. If you
-create a new thread, and wish to use the default graph in that
-thread, you must explicitly add a `with g.as_default():` in that
-thread's function.
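-
-A minimal sketch of the interaction with `Graph.as_default()`:
-
-```python
-g = tf.Graph()
-with g.as_default():
-  assert tf.get_default_graph() is g
-```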
-
-##### Returns:
-
- The default `Graph` being used in the current thread.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_local_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_local_variable.md
deleted file mode 100644
index ffd465b3c3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_local_variable.md
+++ /dev/null
@@ -1,87 +0,0 @@
-### `tf.get_local_variable(*args, **kwargs)` {#get_local_variable}
-
-Gets an existing *local* variable or creates a new one.
-
-Behavior is the same as in `get_variable`, except that variables are
-added to the `LOCAL_VARIABLES` collection and `trainable` is set to
-`False`.
-
-This function prefixes the name with the current variable scope
-and performs reuse checks. See the
-[Variable Scope How To](../../how_tos/variable_scope/index.md)
-for an extensive description of how reusing works. Here is a basic example:
-
-```python
-with tf.variable_scope("foo"):
- v = tf.get_variable("v", [1]) # v.name == "foo/v:0"
- w = tf.get_variable("w", [1]) # w.name == "foo/w:0"
-with tf.variable_scope("foo", reuse=True):
- v1 = tf.get_variable("v") # The same as v above.
-```
-
-If initializer is `None` (the default), the default initializer passed in
-the variable scope will be used. If that one is `None` too, a
-`glorot_uniform_initializer` will be used. The initializer can also be
-a Tensor, in which case the variable is initialized to this value and shape.
-
-Similarly, if the regularizer is `None` (the default), the default regularizer
-passed in the variable scope will be used (if that is `None` too,
-then by default no regularization is performed).
-
-If a partitioner is provided, a `PartitionedVariable` is returned.
-Accessing this object as a `Tensor` returns the shards concatenated along
-the partition axis.
-
-Some useful partitioners are available. See, e.g.,
-`variable_axis_size_partitioner` and `min_max_variable_partitioner`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the new or existing variable.
-* <b>`shape`</b>: Shape of the new or existing variable.
-* <b>`dtype`</b>: Type of the new or existing variable (defaults to `DT_FLOAT`).
-* <b>`initializer`</b>: Initializer for the variable if one is created.
-* <b>`regularizer`</b>: A (Tensor -> Tensor or None) function; the result of
- applying it on a newly created variable will be added to the collection
-  `tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization.
-* <b>`collections`</b>: List of graph collections keys to add the Variable to.
- Defaults to `[GraphKeys.LOCAL_VARIABLES]` (see `tf.Variable`).
-* <b>`caching_device`</b>: Optional device string or function describing where the
- Variable should be cached for reading. Defaults to the Variable's
- device. If not `None`, caches on another device. Typical use is to
- cache on the device where the Ops using the Variable reside, to
- deduplicate copying through `Switch` and other conditional statements.
-* <b>`partitioner`</b>: Optional callable that accepts a fully defined `TensorShape`
- and `dtype` of the Variable to be created, and returns a list of
- partitions for each axis (currently only one axis can be partitioned).
-* <b>`validate_shape`</b>: If False, allows the variable to be initialized with a
- value of unknown shape. If True, the default, the shape of initial_value
- must be known.
-* <b>`use_resource`</b>: If False, creates a regular Variable. If true, creates an
- experimental ResourceVariable instead with well-defined semantics.
- Defaults to False (will later change to True).
-* <b>`custom_getter`</b>: Callable that takes as a first argument the true getter, and
- allows overwriting the internal get_variable method.
- The signature of `custom_getter` should match that of this method,
- but the most future-proof version will allow for changes:
- `def custom_getter(getter, *args, **kwargs)`. Direct access to
- all `get_variable` parameters is also allowed:
- `def custom_getter(getter, name, *args, **kwargs)`. A simple identity
- custom getter that simply creates variables with modified names is:
- ```python
- def custom_getter(getter, name, *args, **kwargs):
- return getter(name + '_suffix', *args, **kwargs)
- ```
-
-##### Returns:
-
- The created or existing `Variable` (or `PartitionedVariable`, if a
- partitioner was used).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: when creating a new variable and shape is not declared,
- when violating reuse during variable creation, or when `initializer` dtype
- and `dtype` don't match. Reuse is set inside `variable_scope`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_variable.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_variable.md
deleted file mode 100644
index 9ec5405d6e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.get_variable.md
+++ /dev/null
@@ -1,86 +0,0 @@
-### `tf.get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None)` {#get_variable}
-
-Gets an existing variable with these parameters or create a new one.
-
-This function prefixes the name with the current variable scope
-and performs reuse checks. See the
-[Variable Scope How To](../../how_tos/variable_scope/index.md)
-for an extensive description of how reusing works. Here is a basic example:
-
-```python
-with tf.variable_scope("foo"):
- v = tf.get_variable("v", [1]) # v.name == "foo/v:0"
- w = tf.get_variable("w", [1]) # w.name == "foo/w:0"
-with tf.variable_scope("foo", reuse=True):
- v1 = tf.get_variable("v") # The same as v above.
-```
-
-If initializer is `None` (the default), the default initializer passed in
-the variable scope will be used. If that one is `None` too, a
-`glorot_uniform_initializer` will be used. The initializer can also be
-a Tensor, in which case the variable is initialized to this value and shape.
-
-Similarly, if the regularizer is `None` (the default), the default regularizer
-passed in the variable scope will be used (if that is `None` too,
-then by default no regularization is performed).
-
-If a partitioner is provided, a `PartitionedVariable` is returned.
-Accessing this object as a `Tensor` returns the shards concatenated along
-the partition axis.
-
-Some useful partitioners are available. See, e.g.,
-`variable_axis_size_partitioner` and `min_max_variable_partitioner`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the new or existing variable.
-* <b>`shape`</b>: Shape of the new or existing variable.
-* <b>`dtype`</b>: Type of the new or existing variable (defaults to `DT_FLOAT`).
-* <b>`initializer`</b>: Initializer for the variable if one is created.
-* <b>`regularizer`</b>: A (Tensor -> Tensor or None) function; the result of
- applying it on a newly created variable will be added to the collection
-  `tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization.
-* <b>`trainable`</b>: If `True` also add the variable to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`collections`</b>: List of graph collections keys to add the Variable to.
- Defaults to `[GraphKeys.GLOBAL_VARIABLES]` (see `tf.Variable`).
-* <b>`caching_device`</b>: Optional device string or function describing where the
- Variable should be cached for reading. Defaults to the Variable's
- device. If not `None`, caches on another device. Typical use is to
- cache on the device where the Ops using the Variable reside, to
- deduplicate copying through `Switch` and other conditional statements.
-* <b>`partitioner`</b>: Optional callable that accepts a fully defined `TensorShape`
- and `dtype` of the Variable to be created, and returns a list of
- partitions for each axis (currently only one axis can be partitioned).
-* <b>`validate_shape`</b>: If False, allows the variable to be initialized with a
- value of unknown shape. If True, the default, the shape of initial_value
- must be known.
-* <b>`use_resource`</b>: If False, creates a regular Variable. If true, creates an
- experimental ResourceVariable instead with well-defined semantics.
- Defaults to False (will later change to True).
-* <b>`custom_getter`</b>: Callable that takes as a first argument the true getter, and
- allows overwriting the internal get_variable method.
- The signature of `custom_getter` should match that of this method,
- but the most future-proof version will allow for changes:
- `def custom_getter(getter, *args, **kwargs)`. Direct access to
- all `get_variable` parameters is also allowed:
- `def custom_getter(getter, name, *args, **kwargs)`. A simple identity
- custom getter that simply creates variables with modified names is:
- ```python
- def custom_getter(getter, name, *args, **kwargs):
- return getter(name + '_suffix', *args, **kwargs)
- ```
-
-##### Returns:
-
- The created or existing `Variable` (or `PartitionedVariable`, if a
- partitioner was used).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: when creating a new variable and shape is not declared,
- when violating reuse during variable creation, or when `initializer` dtype
- and `dtype` don't match. Reuse is set inside `variable_scope`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.encode_png.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.encode_png.md
deleted file mode 100644
index fa073a771f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.encode_png.md
+++ /dev/null
@@ -1,28 +0,0 @@
-### `tf.image.encode_png(image, compression=None, name=None)` {#encode_png}
-
-PNG-encode an image.
-
-`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]`
-where `channels` is:
-
-* 1: for grayscale.
-* 2: for grayscale + alpha.
-* 3: for RGB.
-* 4: for RGBA.
-
-The ZLIB compression level, `compression`, can be -1 for the PNG-encoder
-default or a value from 0 to 9. 9 is the highest compression level, generating
-the smallest output, but is slower.
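-
-A minimal sketch (the image contents and output path are illustrative):
-
-```python
-image = tf.zeros([64, 64, 3], dtype=tf.uint8)  # an all-black RGB image
-png = tf.image.encode_png(image, compression=9)
-write_op = tf.write_file('/tmp/black.png', png)
-```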
-
-##### Args:
-
-
-* <b>`image`</b>: A `Tensor`. Must be one of the following types: `uint8`, `uint16`.
- 3-D with shape `[height, width, channels]`.
-* <b>`compression`</b>: An optional `int`. Defaults to `-1`. Compression level.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. 0-D. PNG-encoded image.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.random_flip_up_down.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.random_flip_up_down.md
deleted file mode 100644
index 7ed36f5df2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.image.random_flip_up_down.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.image.random_flip_up_down(image, seed=None)` {#random_flip_up_down}
-
-Randomly flips an image vertically (upside down).
-
-With a 1 in 2 chance, outputs the contents of `image` flipped along the first
-dimension, which is `height`. Otherwise, outputs the image as-is.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels].`
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-
-##### Returns:
-
- A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matrix_determinant.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matrix_determinant.md
deleted file mode 100644
index bcd0859e47..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matrix_determinant.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.matrix_determinant(input, name=None)` {#matrix_determinant}
-
-Computes the determinant of one or more square matrices.
-
-The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
-form square matrices. The output is a tensor containing the determinants
-for all input submatrices `[..., :, :]`.
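-
-For example (values are illustrative):
-
-```python
-a = tf.constant([[1., 2.], [3., 4.]])
-det = tf.matrix_determinant(a)  # ==> -2.0
-```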
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- Shape is `[..., M, M]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. Shape is `[...]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matrix_triangular_solve.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matrix_triangular_solve.md
deleted file mode 100644
index 66403eccfe..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.matrix_triangular_solve.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### `tf.matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)` {#matrix_triangular_solve}
-
-Solves systems of linear equations with upper or lower triangular matrices
-by backsubstitution.
-
-`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form
-square matrices. If `lower` is `True` then the strictly upper triangular part
-of each inner-most matrix is assumed to be zero and not accessed.
-If `lower` is False then the strictly lower triangular part of each inner-most
-matrix is assumed to be zero and not accessed.
-`rhs` is a tensor of shape `[..., M, K]`.
-
-The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False`
-then the innermost matrices in `output` satisfy the matrix equations
-`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.
-If `adjoint` is `True` then the innermost matrices in `output` satisfy
-the matrix equations
-`adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.
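-
-A minimal example, solving a lower triangular system by forward
-substitution (values are illustrative):
-
-```python
-matrix = tf.constant([[2., 0.], [1., 3.]])
-rhs = tf.constant([[4.], [5.]])
-x = tf.matrix_triangular_solve(matrix, rhs, lower=True)
-# x ==> [[2.], [1.]], since 2*2 = 4 and 1*2 + 3*1 = 5
-```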
-
-##### Args:
-
-
-* <b>`matrix`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
- Shape is `[..., M, M]`.
-* <b>`rhs`</b>: A `Tensor`. Must have the same type as `matrix`.
- Shape is `[..., M, K]`.
-* <b>`lower`</b>: An optional `bool`. Defaults to `True`.
- Boolean indicating whether the innermost matrices in `matrix` are
- lower or upper triangular.
-* <b>`adjoint`</b>: An optional `bool`. Defaults to `False`.
- Boolean indicating whether to solve with `matrix` or its (block-wise)
- adjoint.
-
-  @compatibility(scipy)
-  Equivalent to scipy.linalg.solve_triangular
-  @end_compatibility
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.meshgrid.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.meshgrid.md
deleted file mode 100644
index 673a5c0717..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.meshgrid.md
+++ /dev/null
@@ -1,45 +0,0 @@
-### `tf.meshgrid(*args, **kwargs)` {#meshgrid}
-
-Broadcasts parameters for evaluation on an N-D grid.
-
-Given N one-dimensional coordinate arrays `*args`, returns a list `outputs`
-of N-D coordinate arrays for evaluating expressions on an N-D grid.
-
-Notes:
-
-`meshgrid` supports Cartesian ('xy') and matrix ('ij') indexing conventions.
-When the `indexing` argument is set to 'xy' (the default), the broadcasting
-instructions for the first two dimensions are swapped.
-
-Examples:
-
-Calling `X, Y = meshgrid(x, y)` with the tensors
-
-```prettyprint
- x = [1, 2, 3]
- y = [4, 5, 6]
-```
-
-results in
-
-```prettyprint
- X = [[1, 1, 1],
- [2, 2, 2],
- [3, 3, 3]]
- Y = [[4, 5, 6],
- [4, 5, 6],
- [4, 5, 6]]
-```
-
-##### Args:
-
-
-* <b>`*args`</b>: `Tensor`s with rank 1
-* <b>`indexing`</b>: Either 'xy' or 'ij' (optional, default: 'xy')
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`outputs`</b>: A list of N `Tensor`s with rank N
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.bias_add.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.bias_add.md
deleted file mode 100644
index eee3edf9c3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.bias_add.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.nn.bias_add(value, bias, data_format=None, name=None)` {#bias_add}
-
-Adds `bias` to `value`.
-
-This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D.
-Broadcasting is supported, so `value` may have any number of dimensions.
-Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the
-case where both types are quantized.
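-
-A minimal example (values are illustrative):
-
-```python
-value = tf.constant([[1., 2., 3.], [4., 5., 6.]])
-bias = tf.constant([0.1, 0.2, 0.3])
-out = tf.nn.bias_add(value, bias)
-# out ==> [[1.1, 2.2, 3.3], [4.1, 5.2, 6.3]]
-```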
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`,
- `int16`, `int8`, `complex64`, or `complex128`.
-* <b>`bias`</b>: A 1-D `Tensor` with size matching the last dimension of `value`.
- Must be the same type as `value` unless `value` is a quantized type,
- in which case a different quantized type may be used.
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.crelu.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.crelu.md
deleted file mode 100644
index 8f6d282c10..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.crelu.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.nn.crelu(features, name=None)` {#crelu}
-
-Computes Concatenated ReLU.
-
-Concatenates a ReLU which selects only the positive part of the activation
-with a ReLU which selects only the *negative* part of the activation.
-Note that as a result this non-linearity doubles the depth of the activations.
-Source: https://arxiv.org/abs/1603.05201
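-
-A minimal example; `crelu` concatenates `relu(x)` and `relu(-x)` along the
-last axis (values are illustrative):
-
-```python
-x = tf.constant([[-1., 2.]])
-y = tf.nn.crelu(x)  # ==> [[0., 2., 1., 0.]]
-```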
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,
- `int16`, or `int8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `features`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.fused_batch_norm.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.fused_batch_norm.md
deleted file mode 100644
index 154f5692f3..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.fused_batch_norm.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### `tf.nn.fused_batch_norm(x, scale, offset, mean=None, variance=None, epsilon=0.001, data_format='NHWC', is_training=True, name=None)` {#fused_batch_norm}
-
-Batch normalization.
-
-As described in http://arxiv.org/abs/1502.03167.
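-
-A minimal training-time sketch (shapes are illustrative):
-
-```python
-x = tf.random_normal([8, 4, 4, 16])  # NHWC, 16 channels
-scale = tf.ones([16])
-offset = tf.zeros([16])
-y, batch_mean, batch_var = tf.nn.fused_batch_norm(
-    x, scale, offset, is_training=True)
-```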
-
-##### Args:
-
-
-* <b>`x`</b>: Input `Tensor` of 4 dimensions.
-* <b>`scale`</b>: A `Tensor` of 1 dimension for scaling.
-* <b>`offset`</b>: A `Tensor` of 1 dimension for bias.
-* <b>`mean`</b>: A `Tensor` of 1 dimension for population mean used for inference.
-* <b>`variance`</b>: A `Tensor` of 1 dimension for population variance
- used for inference.
-* <b>`epsilon`</b>: A small float number added to the variance of x.
-* <b>`data_format`</b>: The data format for x. Either "NHWC" (default) or "NCHW".
-* <b>`is_training`</b>: A bool value to specify if the operation is used for
- training or inference.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`y`</b>: A 4D Tensor for the normalized, scaled, and offset x.
-* <b>`batch_mean`</b>: A 1D Tensor for the mean of x.
-* <b>`batch_var`</b>: A 1D Tensor for the variance of x.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If mean or variance is not None when is_training is True.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.max_pool_with_argmax.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.max_pool_with_argmax.md
deleted file mode 100644
index 5424efd7a7..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.max_pool_with_argmax.md
+++ /dev/null
@@ -1,30 +0,0 @@
-### `tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None)` {#max_pool_with_argmax}
-
-Performs max pooling on the input and outputs both max values and indices.
-
-The indices in `argmax` are flattened, so that a maximum value at position
-`[b, y, x, c]` becomes flattened index
-`((b * height + y) * width + x) * channels + c`.
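-
-A minimal sketch of the flattening (assuming the op is available on the
-current device; values are illustrative):
-
-```python
-x = tf.reshape(tf.cast(tf.range(16), tf.float32), [1, 4, 4, 1])
-out, argmax = tf.nn.max_pool_with_argmax(
-    x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
-# out holds the window maxima 5, 7, 13, 15; because the input is
-# tf.range(16), each flattened index in argmax equals its maximum.
-```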
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `half`.
- 4-D with shape `[batch, height, width, channels]`. Input to pool over.
-* <b>`ksize`</b>: A list of `ints` that has length `>= 4`.
- The size of the window for each dimension of the input tensor.
-* <b>`strides`</b>: A list of `ints` that has length `>= 4`.
- The stride of the sliding window for each dimension of the
- input tensor.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`Targmax`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, argmax).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `input`. The max pooled output tensor.
-* <b>`argmax`</b>: A `Tensor` of type `Targmax`. 4-D. The flattened indices of the max values chosen for each output.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md
deleted file mode 100644
index 370ad0a5d2..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.raw_rnn.md
+++ /dev/null
@@ -1,170 +0,0 @@
-### `tf.nn.raw_rnn(cell, loop_fn, parallel_iterations=None, swap_memory=False, scope=None)` {#raw_rnn}
-
-Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.
-
-**NOTE: This method is still in testing, and the API may change.**
-
-This function is a more primitive version of `dynamic_rnn` that provides
-more direct access to the inputs each iteration. It also provides more
-control over when to start and finish reading the sequence, and
-what to emit for the output.
-
-For example, it can be used to implement the dynamic decoder of a seq2seq
-model.
-
-Instead of working with `Tensor` objects, most operations work with
-`TensorArray` objects directly.
-
-The operation of `raw_rnn`, in pseudo-code, is basically the following:
-
-```python
-time = tf.constant(0, dtype=tf.int32)
-(finished, next_input, initial_state, _, loop_state) = loop_fn(
- time=time, cell_output=None, cell_state=None, loop_state=None)
-emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype)
-state = initial_state
-while not all(finished):
- (output, cell_state) = cell(next_input, state)
- (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
- time=time + 1, cell_output=output, cell_state=cell_state,
- loop_state=loop_state)
- # Emit zeros and copy forward state for minibatch entries that are finished.
- state = tf.where(finished, state, next_state)
- emit = tf.where(finished, tf.zeros_like(emit), emit)
- emit_ta = emit_ta.write(time, emit)
- # If any new minibatch entries are marked as finished, mark these.
- finished = tf.logical_or(finished, next_finished)
- time += 1
-return (emit_ta, state, loop_state)
-```
-
-with the additional properties that output and state may be (possibly nested)
-tuples, as determined by `cell.output_size` and `cell.state_size`, and
-as a result the final `state` and `emit_ta` may themselves be tuples.
-
-A simple implementation of `dynamic_rnn` via `raw_rnn` looks like this:
-
-```python
-inputs = tf.placeholder(shape=(max_time, batch_size, input_depth),
- dtype=tf.float32)
-sequence_length = tf.placeholder(shape=(batch_size,), dtype=tf.int32)
-inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time)
-inputs_ta = inputs_ta.unstack(inputs)
-
-cell = tf.contrib.rnn.LSTMCell(num_units)
-
-def loop_fn(time, cell_output, cell_state, loop_state):
- emit_output = cell_output # == None for time == 0
- if cell_output is None: # time == 0
- next_cell_state = cell.zero_state(batch_size, tf.float32)
- else:
- next_cell_state = cell_state
- elements_finished = (time >= sequence_length)
- finished = tf.reduce_all(elements_finished)
- next_input = tf.cond(
- finished,
- lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32),
- lambda: inputs_ta.read(time))
- next_loop_state = None
- return (elements_finished, next_input, next_cell_state,
- emit_output, next_loop_state)
-
-outputs_ta, final_state, _ = raw_rnn(cell, loop_fn)
-outputs = outputs_ta.stack()
-```
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of RNNCell.
-* <b>`loop_fn`</b>: A callable that takes inputs
- `(time, cell_output, cell_state, loop_state)`
- and returns the tuple
- `(finished, next_input, next_cell_state, emit_output, next_loop_state)`.
- Here `time` is an int32 scalar `Tensor`, `cell_output` is a
- `Tensor` or (possibly nested) tuple of tensors as determined by
- `cell.output_size`, and `cell_state` is a `Tensor`
- or (possibly nested) tuple of tensors, as determined by the `loop_fn`
- on its first call (and should match `cell.state_size`).
- The outputs are: `finished`, a boolean `Tensor` of
- shape `[batch_size]`, `next_input`: the next input to feed to `cell`,
- `next_cell_state`: the next state to feed to `cell`,
- and `emit_output`: the output to store for this iteration.
-
- Note that `emit_output` should be a `Tensor` or (possibly nested)
- tuple of tensors with shapes and structure matching `cell.output_size`
- and `cell_output` above. The parameter `cell_state` and output
- `next_cell_state` may be either a single or (possibly nested) tuple
- of tensors. The parameter `loop_state` and
- output `next_loop_state` may be either a single or (possibly nested) tuple
- of `Tensor` and `TensorArray` objects. This last parameter
- may be ignored by `loop_fn` and the return value may be `None`. If it
- is not `None`, then the `loop_state` will be propagated through the RNN
- loop, for use purely by `loop_fn` to keep track of its own state.
- The `next_loop_state` parameter returned may be `None`.
-
- The first call to `loop_fn` will be `time = 0`, `cell_output = None`,
- `cell_state = None`, and `loop_state = None`. For this call:
- The `next_cell_state` value should be the value with which to initialize
- the cell's state. It may be a final state from a previous RNN or it
- may be the output of `cell.zero_state()`. It should be a
- (possibly nested) tuple structure of tensors.
- If `cell.state_size` is an integer, this must be
- a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
- If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of
- appropriate type and shape `[batch_size] + cell.state_size`.
- If `cell.state_size` is a (possibly nested) tuple of ints or
- `TensorShape`, this will be a tuple having the corresponding shapes.
- The `emit_output` value may be either `None` or a (possibly nested)
- tuple structure of tensors, e.g.,
- `(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`.
- If this first `emit_output` return value is `None`,
- then the `emit_ta` result of `raw_rnn` will have the same structure and
- dtypes as `cell.output_size`. Otherwise `emit_ta` will have the same
- structure, shapes (prepended with a `batch_size` dimension), and dtypes
- as `emit_output`. The actual values returned for `emit_output` at this
- initializing call are ignored. Note, this emit structure must be
- consistent across all time steps.
-
-
-* <b>`parallel_iterations`</b>: (Default: 32). The number of iterations to run in
- parallel. Those operations which do not have any temporal dependency
- and can be run in parallel, will be. This parameter trades off
- time for space. Values >> 1 use more memory but take less time,
- while smaller values use less memory but computations take longer.
-* <b>`swap_memory`</b>: Transparently swap the tensors produced in forward inference
- but needed for back prop from GPU to CPU. This allows training RNNs
- which would typically not fit on a single GPU, with very minimal (or no)
- performance penalty.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
-
-##### Returns:
-
- A tuple `(emit_ta, final_state, final_loop_state)` where:
-
- `emit_ta`: The RNN output `TensorArray`.
- If `loop_fn` returns a (possibly nested) set of Tensors for
- `emit_output` during initialization, (inputs `time = 0`,
- `cell_output = None`, and `loop_state = None`), then `emit_ta` will
- have the same structure, dtypes, and shapes as `emit_output` instead.
- If `loop_fn` returns `emit_output = None` during this call,
- the structure of `cell.output_size` is used:
- If `cell.output_size` is a (possibly nested) tuple of integers
- or `TensorShape` objects, then `emit_ta` will be a tuple having the
- same structure as `cell.output_size`, containing TensorArrays whose
- elements' shapes correspond to the shape data in `cell.output_size`.
-
- `final_state`: The final cell state. If `cell.state_size` is an int, this
- will be shaped `[batch_size, cell.state_size]`. If it is a
- `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
- If it is a (possibly nested) tuple of ints or `TensorShape`, this will
- be a tuple having the corresponding shapes.
-
- `final_loop_state`: The final loop state as returned by `loop_fn`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell, or `loop_fn` is not
- a `callable`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.uniform_candidate_sampler.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.uniform_candidate_sampler.md
deleted file mode 100644
index c34056dc84..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.uniform_candidate_sampler.md
+++ /dev/null
@@ -1,49 +0,0 @@
-### `tf.nn.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#uniform_candidate_sampler}
-
-Samples a set of classes using a uniform base distribution.
-
-This operation randomly samples a tensor of sampled classes
-(`sampled_candidates`) from the range of integers `[0, range_max)`.
-
-The elements of `sampled_candidates` are drawn without replacement
-(if `unique=True`) or with replacement (if `unique=False`) from
-the base distribution.
-
-The base distribution for this operation is the uniform distribution
-over the range of integers `[0, range_max)`.
-
-In addition, this operation returns tensors `true_expected_count`
-and `sampled_expected_count` representing the number of times each
-of the target classes (`true_classes`) and the sampled
-classes (`sampled_candidates`) is expected to occur in an average
-tensor of sampled classes. These values correspond to `Q(y|x)`
-defined in [this
-document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-If `unique=True`, then these are post-rejection probabilities and we
-compute them approximately.
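-
-A minimal sketch, sampling 4 candidate classes out of 10 without
-replacement (the true classes are illustrative):
-
-```python
-true_classes = tf.constant([[0], [3]], dtype=tf.int64)
-sampled, true_expected, sampled_expected = tf.nn.uniform_candidate_sampler(
-    true_classes, num_true=1, num_sampled=4, unique=True, range_max=10)
-```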
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`unique`</b>: A `bool`. Determines whether all sampled classes in a batch are
- unique.
-* <b>`range_max`</b>: An `int`. The number of possible classes.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled classes.
-* <b>`true_expected_count`</b>: A tensor of type `float`. Same shape as
- `true_classes`. The expected counts under the sampling distribution
- of each of `true_classes`.
-* <b>`sampled_expected_count`</b>: A tensor of type `float`. Same shape as
- `sampled_candidates`. The expected counts under the sampling distribution
- of each of `sampled_candidates`.
-
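-For example, a minimal sketch of drawing negative candidates for a batch of
-target classes (the numbers are illustrative):
-
-```python
-import tensorflow as tf
-
-# Three examples, each with one target class id in [0, 10).
-true_classes = tf.constant([[2], [7], [4]], dtype=tf.int64)
-
-sampled, true_expected, sampled_expected = tf.nn.uniform_candidate_sampler(
-    true_classes=true_classes,
-    num_true=1,        # one target class per example
-    num_sampled=5,     # draw 5 candidate classes per batch
-    unique=True,       # sample without replacement
-    range_max=10)      # class ids are drawn from [0, 10)
-
-with tf.Session() as sess:
-  print(sess.run(sampled))  # e.g. [8 1 5 0 3] -- five distinct ids
-```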
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.zero_fraction.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.zero_fraction.md
deleted file mode 100644
index dc519bbf76..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.zero_fraction.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.nn.zero_fraction(value, name=None)` {#zero_fraction}
-
-Returns the fraction of zeros in `value`.
-
-If `value` is empty, the result is `nan`.
-
-This is useful in summaries to measure and report sparsity. For example,
-
-```python
-    z = tf.nn.relu(...)
-    summ = tf.summary.scalar('sparsity', tf.nn.zero_fraction(z))
-```
-
-##### Args:
-
-
-* <b>`value`</b>: A tensor of numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The fraction of zeros in `value`, with type `float32`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ones.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ones.md
deleted file mode 100644
index c218aa6c97..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.ones.md
+++ /dev/null
@@ -1,24 +0,0 @@
-### `tf.ones(shape, dtype=tf.float32, name=None)` {#ones}
-
-Creates a tensor with all elements set to 1.
-
-This operation returns a tensor of type `dtype` with shape `shape` and all
-elements set to 1.
-
-For example:
-
-```python
-tf.ones([2, 3], tf.int32) ==> [[1, 1, 1], [1, 1, 1]]
-```
-
-##### Args:
-
-
-* <b>`shape`</b>: Either a list of integers, or a 1-D `Tensor` of type `int32`.
-* <b>`dtype`</b>: The type of an element in the resulting `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with all elements set to 1.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.pad.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.pad.md
deleted file mode 100644
index 55a35db9b1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.pad.md
+++ /dev/null
@@ -1,57 +0,0 @@
-### `tf.pad(tensor, paddings, mode='CONSTANT', name=None)` {#pad}
-
-Pads a tensor.
-
-This operation pads a `tensor` according to the `paddings` you specify.
-`paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of
-`tensor`. For each dimension D of `tensor`, `paddings[D, 0]` indicates how
-many values to add before the contents of `tensor` in that dimension, and
-`paddings[D, 1]` indicates how many values to add after the contents of
-`tensor` in that dimension. If `mode` is "REFLECT" then both `paddings[D, 0]`
-and `paddings[D, 1]` must be no greater than `tensor.dim_size(D) - 1`. If
-`mode` is "SYMMETRIC" then both `paddings[D, 0]` and `paddings[D, 1]` must be
-no greater than `tensor.dim_size(D)`.
-
-The padded size of each dimension D of the output is:
-
-`paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]`
-
-For example:
-
-```python
-# 't' is [[1, 2, 3], [4, 5, 6]].
-# 'paddings' is [[1, 1], [2, 2]].
-# rank of 't' is 2.
-pad(t, paddings, "CONSTANT") ==> [[0, 0, 0, 0, 0, 0, 0],
- [0, 0, 1, 2, 3, 0, 0],
- [0, 0, 4, 5, 6, 0, 0],
- [0, 0, 0, 0, 0, 0, 0]]
-
-pad(t, paddings, "REFLECT") ==> [[6, 5, 4, 5, 6, 5, 4],
- [3, 2, 1, 2, 3, 2, 1],
- [6, 5, 4, 5, 6, 5, 4],
- [3, 2, 1, 2, 3, 2, 1]]
-
-pad(t, paddings, "SYMMETRIC") ==> [[2, 1, 1, 2, 3, 3, 2],
- [2, 1, 1, 2, 3, 3, 2],
- [5, 4, 4, 5, 6, 6, 5],
- [5, 4, 4, 5, 6, 6, 5]]
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: A `Tensor`.
-* <b>`paddings`</b>: A `Tensor` of type `int32`.
-* <b>`mode`</b>: One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive)
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When mode is not one of "CONSTANT", "REFLECT", or "SYMMETRIC".
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_poisson.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_poisson.md
deleted file mode 100644
index d56c967168..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.random_poisson.md
+++ /dev/null
@@ -1,38 +0,0 @@
-### `tf.random_poisson(lam, shape, dtype=tf.float32, seed=None, name=None)` {#random_poisson}
-
-Draws `shape` samples from each of the given Poisson distribution(s).
-
-`lam` is the rate parameter describing the distribution(s).
-
-Example:
-
-```python
-samples = tf.random_poisson([0.5, 1.5], [10])
-# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
-# the samples drawn from each distribution.
-
-samples = tf.random_poisson([12.2, 3.3], [7, 5])
-# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
-# represents the 7x5 samples drawn from each of the two distributions.
-```
-
-##### Args:
-
-
-* <b>`lam`</b>: A Tensor or Python value or N-D array of type `dtype`.
-  `lam` provides the rate parameter(s) describing the Poisson
- distribution(s) to sample.
-* <b>`shape`</b>: A 1-D integer Tensor or Python array. The shape of the output samples
- to be drawn per "rate"-parameterized distribution.
-* <b>`dtype`</b>: The type of `lam` and the output: `float16`, `float32`, or
- `float64`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed for the distributions.
- See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
-
-* <b>`samples`</b>: a `Tensor` of shape `tf.concat([shape, tf.shape(lam)], 0)`
-  with values of type `dtype`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.realdiv.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.realdiv.md
deleted file mode 100644
index facd6630dd..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.realdiv.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.realdiv(x, y, name=None)` {#realdiv}
-
-Returns x / y element-wise for real types.
-
-If `x` and `y` are reals, this will return the floating-point division.
-
-*NOTE*: `RealDiv` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.saturate_cast.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.saturate_cast.md
deleted file mode 100644
index 6a77c2791e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.saturate_cast.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.saturate_cast(value, dtype, name=None)` {#saturate_cast}
-
-Performs a safe saturating cast of `value` to `dtype`.
-
-This function casts the input to `dtype` without applying any scaling. If
-there is a danger that values would overflow or underflow in the cast, this op
-applies the appropriate clamping before the cast.
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`.
-* <b>`dtype`</b>: The desired output `DType`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `value` safely cast to `dtype`.
-
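-For example, a minimal sketch showing the clamping (the int8 range implies
-the values below):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([-500, -1, 0, 200, 500], dtype=tf.int32)
-y = tf.saturate_cast(x, tf.int8)  # clamp to [-128, 127], then cast
-
-with tf.Session() as sess:
-  print(sess.run(y))  # ==> [-128   -1    0  127  127]
-```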
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scalar_mul.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scalar_mul.md
deleted file mode 100644
index 5af291597d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scalar_mul.md
+++ /dev/null
@@ -1,23 +0,0 @@
-### `tf.scalar_mul(scalar, x)` {#scalar_mul}
-
-Multiplies a scalar by a `Tensor` or `IndexedSlices` object.
-
-Intended for use in gradient code which might deal with `IndexedSlices`
-objects, which are easy to multiply by a scalar but more expensive to
-multiply with arbitrary tensors.
-
-##### Args:
-
-
-* <b>`scalar`</b>: A 0-D scalar `Tensor`. Must have known shape.
-* <b>`x`</b>: A `Tensor` or `IndexedSlices` to be scaled.
-
-##### Returns:
-
- `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `scalar` is not a 0-D scalar `Tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scan.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scan.md
deleted file mode 100644
index 047971e260..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.scan.md
+++ /dev/null
@@ -1,92 +0,0 @@
-### `tf.scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None)` {#scan}
-
-Performs a scan over the list of tensors unpacked from `elems` on dimension 0.
-
-The simplest version of `scan` repeatedly applies the callable `fn` to a
-sequence of elements from first to last. The elements are made of the tensors
-unpacked from `elems` on dimension 0. The callable `fn` takes two tensors as
-arguments. The first argument is the accumulated value computed from the
-preceding invocation of `fn`. If `initializer` is `None`, `elems` must contain
-at least one element, and its first element is used as the initializer.
-
-Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
-of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`.
-
-This method also allows multi-arity `elems` and accumulator. If `elems`
-is a (possibly nested) list or tuple of tensors, then each of these tensors
-must have a matching first (unpack) dimension. The second argument of
-`fn` must match the structure of `elems`.
-
-If no `initializer` is provided, the output structure and dtypes of `fn`
-are assumed to be the same as its input; and in this case, the first
-argument of `fn` must match the structure of `elems`.
-
-If an `initializer` is provided, then the output of `fn` must have the same
-structure as `initializer`; and the first argument of `fn` must match
-this structure.
-
-For example, if `elems` is `(t1, [t2, t3])` and `initializer` is
-`[i1, i2]` then an appropriate signature for `fn` in `python2` is:
-`fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):` and `fn` must return a list,
-`[acc_n1, acc_n2]`. An alternative correct signature for `fn`, and the
-one that works in `python3`, is:
-`fn = lambda a, t:`, where `a` and `t` correspond to the input tuples.
-
-##### Args:
-
-
-* <b>`fn`</b>: The callable to be performed. It accepts two arguments. The first
- will have the same structure as `initializer` if one is provided,
- otherwise it will have the same structure as `elems`. The second
- will have the same (possibly nested) structure as `elems`. Its output
- must have the same structure as `initializer` if one is provided,
- otherwise it must have the same structure as `elems`.
-* <b>`elems`</b>: A tensor or (possibly nested) sequence of tensors, each of which
- will be unpacked along their first dimension. The nested sequence
- of the resulting slices will be the first argument to `fn`.
-* <b>`initializer`</b>: (optional) A tensor or (possibly nested) sequence of tensors,
- initial value for the accumulator, and the expected output type of `fn`.
-* <b>`parallel_iterations`</b>: (optional) The number of iterations allowed to run
- in parallel.
-* <b>`back_prop`</b>: (optional) True enables support for back propagation.
-* <b>`swap_memory`</b>: (optional) True enables GPU-CPU memory swapping.
-* <b>`infer_shape`</b>: (optional) False disables tests for consistent output shapes.
-* <b>`name`</b>: (optional) Name prefix for the returned tensors.
-
-##### Returns:
-
- A tensor or (possibly nested) sequence of tensors. Each tensor packs the
- results of applying `fn` to tensors unpacked from `elems` along the first
- dimension, and the previous accumulator value(s), from first to last.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `fn` is not callable or the structure of the output of
- `fn` and `initializer` do not match.
-* <b>`ValueError`</b>: if the lengths of the output of `fn` and `initializer`
- do not match.
-
-##### Examples:
-
- ```python
- elems = np.array([1, 2, 3, 4, 5, 6])
-  sum = tf.scan(lambda a, x: a + x, elems)
- # sum == [1, 3, 6, 10, 15, 21]
- ```
-
- ```python
- elems = np.array([1, 2, 3, 4, 5, 6])
- initializer = np.array(0)
-  sum_one = tf.scan(
-      lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)
- # sum_one == [1, 2, 3, 4, 5, 6]
- ```
-
- ```python
- elems = np.array([1, 0, 0, 0, 0, 0])
- initializer = (np.array(0), np.array(1))
-  fibonaccis = tf.scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)
- # fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])
- ```
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.size.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.size.md
deleted file mode 100644
index 3df6d4cb2f..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.size.md
+++ /dev/null
@@ -1,26 +0,0 @@
-### `tf.size(input, name=None, out_type=tf.int32)` {#size}
-
-Returns the size of a tensor.
-
-This operation returns an integer representing the number of elements in
-`input`.
-
-For example:
-
-```python
-# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
-size(t) ==> 12
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`out_type`</b>: (Optional) The specified output type of the operation
- (`int32` or `int64`). Defaults to tf.int32.
-
-##### Returns:
-
- A `Tensor` of type `out_type`. Defaults to tf.int32.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.space_to_batch.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.space_to_batch.md
deleted file mode 100644
index f61b0dfe08..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.space_to_batch.md
+++ /dev/null
@@ -1,110 +0,0 @@
-### `tf.space_to_batch(input, paddings, block_size, name=None)` {#space_to_batch}
-
-SpaceToBatch for 4-D tensors of type T.
-
-This is a legacy version of the more general SpaceToBatchND.
-
-Zero-pads and then rearranges (permutes) blocks of spatial data into batch.
-More specifically, this op outputs a copy of the input tensor where values from
-the `height` and `width` dimensions are moved to the `batch` dimension. After
-the zero-padding, both `height` and `width` of the input must be divisible by the
-block size.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. 4-D with shape `[batch, height, width, depth]`.
-* <b>`paddings`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies
- the padding of the input with zeros across the spatial dimensions as follows:
-
- paddings = [[pad_top, pad_bottom], [pad_left, pad_right]]
-
- The effective spatial dimensions of the zero-padded input tensor will be:
-
- height_pad = pad_top + height + pad_bottom
- width_pad = pad_left + width + pad_right
-
- The attr `block_size` must be greater than one. It indicates the block size.
-
-  * Non-overlapping blocks of size `block_size x block_size` in the height and
- width dimensions are rearranged into the batch dimension at each location.
- * The batch of the output tensor is `batch * block_size * block_size`.
- * Both height_pad and width_pad must be divisible by block_size.
-
- The shape of the output will be:
-
- [batch*block_size*block_size, height_pad/block_size, width_pad/block_size,
- depth]
-
- Some examples:
-
- (1) For the following input of shape `[1, 2, 2, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [2]], [[3], [4]]]]
- ```
-
- The output tensor has shape `[4, 1, 1, 1]` and value:
-
- ```prettyprint
- [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
- ```
-
- (2) For the following input of shape `[1, 2, 2, 3]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1, 2, 3], [4, 5, 6]],
- [[7, 8, 9], [10, 11, 12]]]]
- ```
-
- The output tensor has shape `[4, 1, 1, 3]` and value:
-
- ```prettyprint
- [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
- ```
-
- (3) For the following input of shape `[1, 4, 4, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]],
- [[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
- The output tensor has shape `[4, 2, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [3]], [[9], [11]]],
- [[[2], [4]], [[10], [12]]],
- [[[5], [7]], [[13], [15]]],
- [[[6], [8]], [[14], [16]]]]
- ```
-
- (4) For the following input of shape `[2, 2, 4, 1]` and block_size of 2:
-
- ```prettyprint
- x = [[[[1], [2], [3], [4]],
- [[5], [6], [7], [8]]],
- [[[9], [10], [11], [12]],
- [[13], [14], [15], [16]]]]
- ```
-
- The output tensor has shape `[8, 1, 2, 1]` and value:
-
- ```prettyprint
- x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
- [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
- ```
-
- Among others, this operation is useful for reducing atrous convolution into
- regular convolution.
-
-* <b>`block_size`</b>: An `int` that is `>= 2`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_mask.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_mask.md
deleted file mode 100644
index 84f65ec55c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_mask.md
+++ /dev/null
@@ -1,40 +0,0 @@
-### `tf.sparse_mask(a, mask_indices, name=None)` {#sparse_mask}
-
-Masks elements of `IndexedSlices`.
-
-Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that
-contains a subset of the slices of `a`. Only the slices at indices not
-specified in `mask_indices` are returned.
-
-This is useful when you need to extract a subset of slices in an
-`IndexedSlices` object.
-
-For example:
-
-```python
-# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
-# with shape [1000, 10]
-a.indices => [12, 26, 37, 45]
-tf.shape(a.values) => [4, 10]
-
-# `b` will be the subset of `a` slices at its second and third indices, so
-# we want to mask its first and last indices (which are at absolute
-# indices 12, 45)
-b = tf.sparse_mask(a, [12, 45])
-
-b.indices => [26, 37]
-tf.shape(b.values) => [2, 10]
-
-```
-
-##### Args:
-
-
-* <b>`a`</b>: An `IndexedSlices` instance.
-* <b>`mask_indices`</b>: Indices of elements to mask.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The masked `IndexedSlices` instance.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_merge.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_merge.md
deleted file mode 100644
index 99b87e7455..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_merge.md
+++ /dev/null
@@ -1,98 +0,0 @@
-### `tf.sparse_merge(sp_ids, sp_values, vocab_size, name=None, already_sorted=False)` {#sparse_merge}
-
-Combines a batch of feature ids and values into a single `SparseTensor`.
-
-The most common use case for this function occurs when feature ids and
-their corresponding values are stored in `Example` protos on disk.
-`parse_example` will return a batch of ids and a batch of values, and this
-function joins them into a single logical `SparseTensor` for use in
-functions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc.
-
-The `SparseTensor` returned by this function has the following properties:
-
- - `indices` is equivalent to `sp_ids.indices` with the last
- dimension discarded and replaced with `sp_ids.values`.
- - `values` is simply `sp_values.values`.
- - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then
- `output.shape = [D0, D1, ..., Dn, vocab_size]`.
-
-For example, consider the following feature vectors:
-
-```python
- vector1 = [-3, 0, 0, 0, 0, 0]
- vector2 = [ 0, 1, 0, 4, 1, 0]
- vector3 = [ 5, 0, 0, 9, 0, 0]
-```
-
-These might be stored sparsely in the following Example protos by storing
-only the feature ids (column number if the vectors are treated as a matrix)
-of the non-zero elements and the corresponding values:
-
-```python
- examples = [Example(features={
- "ids": Feature(int64_list=Int64List(value=[0])),
- "values": Feature(float_list=FloatList(value=[-3]))}),
- Example(features={
- "ids": Feature(int64_list=Int64List(value=[1, 4, 3])),
- "values": Feature(float_list=FloatList(value=[1, 1, 4]))}),
- Example(features={
- "ids": Feature(int64_list=Int64List(value=[0, 3])),
- "values": Feature(float_list=FloatList(value=[5, 9]))})]
-```
-
-The result of calling `parse_example` on these examples will produce a
-dictionary with entries for "ids" and "values". Passing those two objects
-to this function along with `vocab_size=6` will produce a `SparseTensor` that
-sparsely represents all three instances. Namely, the `indices` property will
-contain the coordinates of the non-zero entries in the feature matrix (the
-first dimension is the row number in the matrix, i.e., the index within the
-batch, and the second dimension is the column number, i.e., the feature id);
-`values` will contain the actual values. `shape` will be the shape of the
-original matrix, i.e., (3, 6). For our example above, the output will be
-equal to:
-
-```python
- SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]],
- values=[-3, 1, 4, 1, 5, 9],
- dense_shape=[3, 6])
-```
-
-This method generalizes to higher dimensions by providing a list for both
-`sp_ids` and `vocab_size`.
-In this case the resulting `SparseTensor` has the following properties:
- - `indices` is equivalent to `sp_ids[0].indices` with the last
- dimension discarded and concatenated with
- `sp_ids[0].values, sp_ids[1].values, ...`.
- - `values` is simply `sp_values.values`.
- - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then
- `output.shape = [D0, D1, ..., Dn] + vocab_size`.
-
-##### Args:
-
-
-* <b>`sp_ids`</b>: A single `SparseTensor` with `values` property of type `int32`
-  or `int64`, or a Python list of such `SparseTensor`s.
-* <b>`sp_values`</b>: A `SparseTensor` of any type.
-* <b>`vocab_size`</b>: A scalar `int64` Tensor (or Python int) containing the new size
- of the last dimension, `all(0 <= sp_ids.values < vocab_size)`.
- Or a list thereof with `all(0 <= sp_ids[i].values < vocab_size[i])` for
- all `i`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-* <b>`already_sorted`</b>: A boolean to specify whether the per-batch values in
-  `sp_values` are already sorted. If so, sorting is skipped. Defaults to
-  `False` (optional).
-
-##### Returns:
-
- A `SparseTensor` compactly representing a batch of feature ids and values,
- useful for passing to functions that expect such a `SparseTensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_values` is not a `SparseTensor`. Or if `sp_ids` is neither
-  a `SparseTensor` nor a list thereof. Or if `vocab_size` is not a
-  `Tensor` or a Python int and `sp_ids` is a `SparseTensor`. Or if
-  `vocab_size` is not a list of `Tensor`s or Python ints and `sp_ids` is a list.
-* <b>`ValueError`</b>: If `sp_ids` and `vocab_size` are lists of different lengths.
-
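-A minimal runnable sketch of the single-`SparseTensor` case above,
-constructing the ids and values by hand instead of via `parse_example`:
-
-```python
-import tensorflow as tf
-
-indices = [[0, 0], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1]]
-shape = [3, 3]  # batch of 3 examples, at most 3 features per example
-sp_ids = tf.SparseTensor(indices, tf.constant([0, 1, 4, 3, 0, 3], tf.int64), shape)
-sp_values = tf.SparseTensor(indices, tf.constant([-3., 1., 1., 4., 5., 9.]), shape)
-
-merged = tf.sparse_merge(sp_ids, sp_values, vocab_size=6)
-with tf.Session() as sess:
-  print(sess.run(merged.indices))  # ==> [[0 0] [1 1] [1 3] [1 4] [2 0] [2 3]]
-  print(sess.run(merged.values))   # ==> [-3.  1.  4.  1.  5.  9.]
-```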
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_reshape.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_reshape.md
deleted file mode 100644
index 263f676fc1..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.sparse_reshape.md
+++ /dev/null
@@ -1,51 +0,0 @@
-### `tf.sparse_reshape(sp_input, shape, name=None)` {#sparse_reshape}
-
-Reshapes a `SparseTensor` to represent values in a new dense shape.
-
-This operation has the same semantics as `reshape` on the represented dense
-tensor. The indices of non-empty values in `sp_input` are recomputed based
-on the new dense shape, and a new `SparseTensor` is returned containing the
-new indices and new shape. The order of non-empty values in `sp_input` is
-unchanged.
-
-If one component of `shape` is the special value -1, the size of that
-dimension is computed so that the total dense size remains constant. At
-most one component of `shape` can be -1. The number of dense elements
-implied by `shape` must be the same as the number of dense elements
-originally represented by `sp_input`.
-
-For example, if `sp_input` has shape `[2, 3, 6]` and `indices` / `values`:
-
- [0, 0, 0]: a
- [0, 0, 1]: b
- [0, 1, 0]: c
- [1, 0, 0]: d
- [1, 2, 3]: e
-
-and `shape` is `[9, -1]`, then the output will be a `SparseTensor` of
-shape `[9, 4]` and `indices` / `values`:
-
- [0, 0]: a
- [0, 1]: b
- [1, 2]: c
- [4, 2]: d
- [8, 1]: e
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`shape`</b>: A 1-D (vector) int64 `Tensor` specifying the new dense shape of the
- represented `SparseTensor`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A `SparseTensor` with the same non-empty values but with indices calculated
- by the new dense shape.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
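-The worked example above, as a minimal runnable sketch (numeric stand-ins
-for the values a..e):
-
-```python
-import tensorflow as tf
-
-sp_input = tf.SparseTensor(
-    indices=[[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 2, 3]],
-    values=[1.0, 2.0, 3.0, 4.0, 5.0],
-    dense_shape=[2, 3, 6])
-reshaped = tf.sparse_reshape(sp_input, [9, -1])  # -1 is inferred as 4
-
-with tf.Session() as sess:
-  print(sess.run(reshaped.indices))
-  # ==> [[0 0] [0 1] [1 2] [4 2] [8 1]]
-```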
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.squared_difference.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.squared_difference.md
deleted file mode 100644
index 19f25f473d..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.squared_difference.md
+++ /dev/null
@@ -1,18 +0,0 @@
-### `tf.squared_difference(x, y, name=None)` {#squared_difference}
-
-Returns (x - y)(x - y) element-wise.
-
-*NOTE*: `SquaredDifference` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.string_to_number.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.string_to_number.md
deleted file mode 100644
index c6837bfa4a..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.string_to_number.md
+++ /dev/null
@@ -1,20 +0,0 @@
-### `tf.string_to_number(string_tensor, out_type=None, name=None)` {#string_to_number}
-
-Converts each string in the input Tensor to the specified numeric type.
-
-(Note that int32 overflow results in an error while float overflow
-results in a rounded value.)
-
-##### Args:
-
-
-* <b>`string_tensor`</b>: A `Tensor` of type `string`.
-* <b>`out_type`</b>: An optional `tf.DType` from: `tf.float32, tf.int32`. Defaults to `tf.float32`.
- The numeric type to interpret each string in `string_tensor` as.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
- A Tensor of the same shape as the input `string_tensor`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.summary.TaggedRunMetadata.FromString.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.summary.TaggedRunMetadata.FromString.md
deleted file mode 100644
index 613f4ebd73..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.summary.TaggedRunMetadata.FromString.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString}
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.tanh.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.tanh.md
deleted file mode 100644
index 154a13059c..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.tanh.md
+++ /dev/null
@@ -1,16 +0,0 @@
-### `tf.tanh(x, name=None)` {#tanh}
-
-Computes hyperbolic tangent of `x` element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A Tensor or SparseTensor with type `float`, `double`, `int32`,
- `complex64`, `int64`, or `qint32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` or `SparseTensor`, respectively, with the same type as `x` if
-  `x.dtype != qint32`; otherwise the return type is `quint8`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.test.is_gpu_available.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.test.is_gpu_available.md
deleted file mode 100644
index db6132b259..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.test.is_gpu_available.md
+++ /dev/null
@@ -1,13 +0,0 @@
-### `tf.test.is_gpu_available(cuda_only=False)` {#is_gpu_available}
-
-Returns whether TensorFlow can access a GPU.
-
-##### Args:
-
-
-* <b>`cuda_only`</b>: Limit the search to CUDA GPUs.
-
-##### Returns:
-
-  True iff a GPU device of the requested kind is available.
-
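-For example, a sketch that falls back to the CPU when no CUDA GPU is present:
-
-```python
-import tensorflow as tf
-
-device = '/gpu:0' if tf.test.is_gpu_available(cuda_only=True) else '/cpu:0'
-with tf.device(device):
-  product = tf.matmul(tf.ones([2, 2]), tf.ones([2, 2]))
-```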
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.to_bfloat16.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.to_bfloat16.md
deleted file mode 100644
index 3d55da1110..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.to_bfloat16.md
+++ /dev/null
@@ -1,19 +0,0 @@
-### `tf.to_bfloat16(x, name='ToBFloat16')` {#to_bfloat16}
-
-Casts a tensor to type `bfloat16`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` with same shape as `x` with type `bfloat16`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` cannot be cast to `bfloat16`.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.FeedFnHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.FeedFnHook.md
deleted file mode 100644
index 1797a0d3b5..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.FeedFnHook.md
+++ /dev/null
@@ -1,88 +0,0 @@
-Runs `feed_fn` and sets the `feed_dict` accordingly.
-- - -
-
-#### `tf.train.FeedFnHook.__init__(feed_fn)` {#FeedFnHook.__init__}
-
-Constructs the FeedFnHook with given `feed_fn`.
-
-##### Args:
-
-
-* <b>`feed_fn`</b>: A function that takes no arguments and returns the `dict` to feed.
-
-
-- - -
-
-#### `tf.train.FeedFnHook.after_create_session(session, coord)` {#FeedFnHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.FeedFnHook.after_run(run_context, run_values)` {#FeedFnHook.after_run}
-
-Called after each call to run().
-
-The `run_values` argument contains results of requested ops/tensors by
-`before_run()`.
-
-The `run_context` argument is the same one sent to the `before_run` call.
-`run_context.request_stop()` can be called to stop the iteration.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-* <b>`run_values`</b>: A SessionRunValues object.
-
-
-- - -
-
-#### `tf.train.FeedFnHook.before_run(run_context)` {#FeedFnHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.FeedFnHook.begin()` {#FeedFnHook.begin}
-
-Called once before using the session.
-
-When called, the default graph is the one that will be launched in the
-session. The hook can modify the graph by adding new operations to it.
-After the `begin()` call the graph will be finalized and the other callbacks
-cannot modify the graph anymore. A second call of `begin()` on the same
-graph should not change the graph.
-
-
-- - -
-
-#### `tf.train.FeedFnHook.end(session)` {#FeedFnHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
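-As a hedged usage sketch, the hook can feed a placeholder on every `run` call
-(the `next_batch` helper here is hypothetical):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, shape=[None])
-total = tf.reduce_sum(x)
-
-def feed_fn():
-  # Hypothetical helper returning the next batch as a numpy array.
-  return {x: next_batch()}
-
-hook = tf.train.FeedFnHook(feed_fn)
-with tf.train.MonitoredSession(hooks=[hook]) as sess:
-  print(sess.run(total))  # `x` is fed by the hook, not by this call
-```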
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.FinalOpsHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.FinalOpsHook.md
deleted file mode 100644
index bf8e7184b6..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.FinalOpsHook.md
+++ /dev/null
@@ -1,111 +0,0 @@
-A run hook which evaluates `Tensors` at the end of a session.
-- - -
-
-#### `tf.train.FinalOpsHook.__init__(final_ops, final_ops_feed_dict=None)` {#FinalOpsHook.__init__}
-
-Constructs the FinalOpsHook with ops to run at the end of the session.
-
-##### Args:
-
-
-* <b>`final_ops`</b>: A single `Tensor`, a list of `Tensors` or a dictionary of
- names to `Tensors`.
-* <b>`final_ops_feed_dict`</b>: A feed dictionary to use when running
-  `final_ops`.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.after_create_session(session, coord)` {#FinalOpsHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.after_run(run_context, run_values)` {#FinalOpsHook.after_run}
-
-Called after each call to run().
-
-The `run_values` argument contains results of requested ops/tensors by
-`before_run()`.
-
-The `run_context` argument is the same one sent to the `before_run` call.
-`run_context.request_stop()` can be called to stop the iteration.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-* <b>`run_values`</b>: A SessionRunValues object.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.before_run(run_context)` {#FinalOpsHook.before_run}
-
-Called before each call to run().
-
-You can return from this call a `SessionRunArgs` object indicating ops or
-tensors to add to the upcoming `run()` call. These ops/tensors will be run
-together with the ops/tensors originally passed to the `run()` call.
-The run args you return can also contain feeds to be added to the run()
-call.
-
-The `run_context` argument is a `SessionRunContext` that provides
-information about the upcoming `run()` call: the originally requested
-op/tensors, the TensorFlow Session.
-
-At this point the graph is finalized and you cannot add ops.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-
-##### Returns:
-
- None or a `SessionRunArgs` object.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.begin()` {#FinalOpsHook.begin}
-
-Called once before using the session.
-
-When called, the default graph is the one that will be launched in the
-session. The hook can modify the graph by adding new operations to it.
-After the `begin()` call the graph will be finalized and the other callbacks
-cannot modify the graph anymore. A second call of `begin()` on the same
-graph should not change the graph.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.end(session)` {#FinalOpsHook.end}
-
-
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.final_ops_values` {#FinalOpsHook.final_ops_values}
-
-
-
-
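-As a hedged usage sketch, the hook collects a tensor's final value once the
-session ends:
-
-```python
-import tensorflow as tf
-
-counter = tf.Variable(0)
-increment = tf.assign_add(counter, 1)
-
-hook = tf.train.FinalOpsHook(final_ops={'final_count': counter})
-with tf.train.MonitoredSession(hooks=[hook]) as sess:
-  for _ in range(5):
-    sess.run(increment)
-
-print(hook.final_ops_values)  # ==> {'final_count': 5}
-```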
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.NanLossDuringTrainingError.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.NanLossDuringTrainingError.md
deleted file mode 100644
index d568c56114..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.NanLossDuringTrainingError.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-- - -
-
-#### `tf.train.NanLossDuringTrainingError.__str__()` {#NanLossDuringTrainingError.__str__}
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ProximalAdagradOptimizer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ProximalAdagradOptimizer.md
deleted file mode 100644
index 002dabfbf9..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.ProximalAdagradOptimizer.md
+++ /dev/null
@@ -1,30 +0,0 @@
-Optimizer that implements the Proximal Adagrad algorithm.
-
-See this [paper](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf).
-
-- - -
-
-#### `tf.train.ProximalAdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='ProximalAdagrad')` {#ProximalAdagradOptimizer.__init__}
-
-Construct a new ProximalAdagrad optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`initial_accumulator_value`</b>: A floating point value.
- Starting value for the accumulators, must be positive.
-* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
-  gradients. Defaults to "ProximalAdagrad".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `initial_accumulator_value` is invalid.
-
-
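-Usage follows the standard `Optimizer` pattern; a minimal sketch with an
-illustrative quadratic loss:
-
-```python
-import tensorflow as tf
-
-w = tf.Variable([0.5, -0.5])
-loss = tf.reduce_sum(tf.square(w))  # illustrative objective
-
-opt = tf.train.ProximalAdagradOptimizer(
-    learning_rate=0.1,
-    l1_regularization_strength=0.01,   # encourages exact zeros in `w`
-    l2_regularization_strength=0.001)
-train_op = opt.minimize(loss)
-```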
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.SessionRunValues.__new__.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.SessionRunValues.__new__.md
deleted file mode 100644
index 3540616254..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.SessionRunValues.__new__.md
+++ /dev/null
@@ -1,4 +0,0 @@
-#### `tf.train.SessionRunValues.__new__(_cls, results, options, run_metadata)` {#SessionRunValues.__new__}
-
-Create new instance of SessionRunValues(results, options, run_metadata)
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.StepCounterHook.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.StepCounterHook.md
deleted file mode 100644
index 50ebc652ab..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.StepCounterHook.md
+++ /dev/null
@@ -1,65 +0,0 @@
-Steps per second monitor.
-- - -
-
-#### `tf.train.StepCounterHook.__init__(every_n_steps=100, every_n_secs=None, output_dir=None, summary_writer=None)` {#StepCounterHook.__init__}
-
-
-
-
-- - -
-
-#### `tf.train.StepCounterHook.after_create_session(session, coord)` {#StepCounterHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.StepCounterHook.after_run(run_context, run_values)` {#StepCounterHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.StepCounterHook.before_run(run_context)` {#StepCounterHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.StepCounterHook.begin()` {#StepCounterHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.StepCounterHook.end(session)` {#StepCounterHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.input_producer.md
deleted file mode 100644
index c98eed194e..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.input_producer.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.train.input_producer(input_tensor, element_shape=None, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, summary_name=None, name=None, cancel_op=None)` {#input_producer}
-
-Output the rows of `input_tensor` to a queue for an input pipeline.
-
-Note: if `num_epochs` is not `None`, this function creates a local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: A tensor with the rows to produce. Must be at least
- one-dimensional. Must either have a fully-defined shape, or
- `element_shape` must be defined.
-* <b>`element_shape`</b>: (Optional.) A `TensorShape` representing the shape of a
- row of `input_tensor`, if it cannot be inferred.
-* <b>`num_epochs`</b>: (Optional.) An integer. If specified `input_producer` produces
- each row of `input_tensor` `num_epochs` times before generating an
- `OutOfRange` error. If not specified, `input_producer` can cycle through
- the rows of `input_tensor` an unlimited number of times.
-* <b>`shuffle`</b>: (Optional.) A boolean. If true, the rows are randomly shuffled
- within each epoch.
-* <b>`seed`</b>: (Optional.) An integer. The seed to use if `shuffle` is true.
-* <b>`capacity`</b>: (Optional.) The capacity of the queue to be used for buffering
- the input.
-* <b>`shared_name`</b>: (Optional.) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`summary_name`</b>: (Optional.) If set, a scalar summary for the current queue
- size will be generated, using this name as part of the tag.
-* <b>`name`</b>: (Optional.) A name for the queue.
-* <b>`cancel_op`</b>: (Optional.) Cancel op for the queue.
-
-##### Returns:
-
-  A queue with the output rows. A `QueueRunner` for the queue is
-  added to the current graph's `QUEUE_RUNNER` collection.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of the input cannot be inferred from the arguments.
-
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.string_input_producer.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.string_input_producer.md
deleted file mode 100644
index 1aba482ef0..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.string_input_producer.md
+++ /dev/null
@@ -1,36 +0,0 @@
-### `tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None, cancel_op=None)` {#string_input_producer}
-
-Output strings (e.g. filenames) to a queue for an input pipeline.
-
-Note: if `num_epochs` is not `None`, this function creates a local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
-##### Args:
-
-
-* <b>`string_tensor`</b>: A 1-D string tensor with the strings to produce.
-* <b>`num_epochs`</b>: An integer (optional). If specified, `string_input_producer`
- produces each string from `string_tensor` `num_epochs` times before
- generating an `OutOfRange` error. If not specified,
- `string_input_producer` can cycle through the strings in `string_tensor`
- an unlimited number of times.
-* <b>`shuffle`</b>: Boolean. If true, the strings are randomly shuffled within each
- epoch.
-* <b>`seed`</b>: An integer (optional). Seed used if shuffle == True.
-* <b>`capacity`</b>: An integer. Sets the queue capacity.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: A name for the operations (optional).
-* <b>`cancel_op`</b>: Cancel op for the queue (optional).
-
-##### Returns:
-
- A queue with the output strings. A `QueueRunner` for the Queue
- is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `string_tensor` is an empty Python list. At runtime,
-  will fail with an assertion if `string_tensor` becomes a null tensor.
-
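-A typical use is feeding filenames to a reader; a minimal sketch (the file
-names are illustrative):
-
-```python
-import tensorflow as tf
-
-queue = tf.train.string_input_producer(
-    ['file0.csv', 'file1.csv'], num_epochs=1, shuffle=False)
-reader = tf.TextLineReader()
-key, value = reader.read(queue)
-
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())  # initializes the `epochs` counter
-  coord = tf.train.Coordinator()
-  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
-  try:
-    print(sess.run(value))  # first line of the first file
-  finally:
-    coord.request_stop()
-    coord.join(threads)
-```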
diff --git a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.summary_iterator.md b/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.summary_iterator.md
deleted file mode 100644
index f998e62046..0000000000
--- a/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.train.summary_iterator.md
+++ /dev/null
@@ -1,42 +0,0 @@
-### `tf.train.summary_iterator(path)` {#summary_iterator}
-
-An iterator for reading `Event` protocol buffers from an event file.
-
-You can use this function to read events written to an event file. It returns
-a Python iterator that yields `Event` protocol buffers.
-
-Example: Print the contents of an events file.
-
-```python
-for e in tf.train.summary_iterator(path_to_events_file):
- print(e)
-```
-
-Example: Print selected summary values.
-
-```python
-# This example supposes that the events file contains summaries with a
-# summary value tag 'loss'. These could have been added by calling
-# `add_summary()`, passing the output of a scalar summary op created with
-# `tf.summary.scalar('loss', loss_tensor)`.
-for e in tf.train.summary_iterator(path_to_events_file):
- for v in e.summary.value:
- if v.tag == 'loss':
- print(v.simple_value)
-```
-
-See the protocol buffer definitions of
-[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto)
-and
-[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
-for more information about their attributes.
-
-##### Args:
-
-
-* <b>`path`</b>: The path to an event file created by a `SummaryWriter`.
-
-##### Yields:
-
- `Event` protocol buffers.
-
diff --git a/tensorflow/g3doc/api_docs/python/histogram_ops.md b/tensorflow/g3doc/api_docs/python/histogram_ops.md
deleted file mode 100644
index e9fa732e60..0000000000
--- a/tensorflow/g3doc/api_docs/python/histogram_ops.md
+++ /dev/null
@@ -1,48 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Histograms
-[TOC]
-
-Histograms. Please see @{$python/histogram_ops} guide.
-
-- - -
-
-### `tf.histogram_fixed_width(values, value_range, nbins=100, dtype=tf.int32, name=None)` {#histogram_fixed_width}
-
-Return histogram of values.
-
-Given the tensor `values`, this operation returns a rank 1 histogram counting
-the number of entries in `values` that fell into every bin. The bins are
-equal width and determined by the arguments `value_range` and `nbins`.
-
-##### Args:
-
-
-* <b>`values`</b>: Numeric `Tensor`.
-* <b>`value_range`</b>: Shape [2] `Tensor`. values <= value_range[0] will be
-  mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
-  Must be same dtype as `values`.
-* <b>`nbins`</b>: Scalar `int32 Tensor`. Number of histogram bins.
-* <b>`dtype`</b>: dtype for returned histogram.
-* <b>`name`</b>: A name for this operation (defaults to 'histogram_fixed_width').
-
-##### Returns:
-
- A 1-D `Tensor` holding histogram of values.
-
-
-##### Examples:
-
-```python
-# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
-nbins = 5
-value_range = [0.0, 5.0]
-new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]
-
-with tf.Session() as sess:
-  hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
-  print(sess.run(hist))  # => [2, 1, 1, 0, 2]
-```
-
-
diff --git a/tensorflow/g3doc/api_docs/python/image.md b/tensorflow/g3doc/api_docs/python/image.md
deleted file mode 100644
index 8d233dcadb..0000000000
--- a/tensorflow/g3doc/api_docs/python/image.md
+++ /dev/null
@@ -1,1415 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Images
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Image processing and decoding ops. See the @{$python/image} guide.
-
-- - -
-
-### `tf.image.decode_gif(contents, name=None)` {#decode_gif}
-
-Decode the first frame of a GIF-encoded image to a uint8 tensor.
-
-GIFs with frame or transparency compression are not supported. Convert
-animated GIFs from compressed to uncompressed with:
-
-    convert $src.gif -coalesce $dst.gif
-
-##### Args:
-
-
-* <b>`contents`</b>: A `Tensor` of type `string`. 0-D. The GIF-encoded image.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `uint8`.
- 4-D with shape `[num_frames, height, width, 3]`. RGB order
-
-
-- - -
-
-### `tf.image.decode_jpeg(contents, channels=None, ratio=None, fancy_upscaling=None, try_recover_truncated=None, acceptable_fraction=None, dct_method=None, name=None)` {#decode_jpeg}
-
-Decode a JPEG-encoded image to a uint8 tensor.
-
-The attr `channels` indicates the desired number of color channels for the
-decoded image.
-
-Accepted values are:
-
-* 0: Use the number of channels in the JPEG-encoded image.
-* 1: output a grayscale image.
-* 3: output an RGB image.
-
-If needed, the JPEG-encoded image is transformed to match the requested number
-of color channels.
-
-The attr `ratio` allows downscaling the image by an integer factor during
-decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than
-downscaling the image later.
-
-##### Args:
-
-
-* <b>`contents`</b>: A `Tensor` of type `string`. 0-D. The JPEG-encoded image.
-* <b>`channels`</b>: An optional `int`. Defaults to `0`.
- Number of color channels for the decoded image.
-* <b>`ratio`</b>: An optional `int`. Defaults to `1`. Downscaling ratio.
-* <b>`fancy_upscaling`</b>: An optional `bool`. Defaults to `True`.
- If true use a slower but nicer upscaling of the
- chroma planes (yuv420/422 only).
-* <b>`try_recover_truncated`</b>: An optional `bool`. Defaults to `False`.
- If true try to recover an image from truncated input.
-* <b>`acceptable_fraction`</b>: An optional `float`. Defaults to `1`.
- The minimum required fraction of lines before a truncated
- input is accepted.
-* <b>`dct_method`</b>: An optional `string`. Defaults to `""`.
-  string specifying a hint about the algorithm used for
-  decompression. Defaults to "", which maps to a system-specific
-  default. Currently valid values are ["INTEGER_FAST",
-  "INTEGER_ACCURATE"]. The hint may be ignored (e.g., if the internal
-  jpeg library is changed to a version that does not have that specific
-  option).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-  A `Tensor` of type `uint8`. 3-D with shape `[height, width, channels]`.
-
-
-- - -
-
-### `tf.image.encode_jpeg(image, format=None, quality=None, progressive=None, optimize_size=None, chroma_downsampling=None, density_unit=None, x_density=None, y_density=None, xmp_metadata=None, name=None)` {#encode_jpeg}
-
-JPEG-encode an image.
-
-`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.
-
-The attr `format` can be used to override the color format of the encoded
-output. Values can be:
-
-* `''`: Use a default format based on the number of channels in the image.
-* `grayscale`: Output a grayscale JPEG image. The `channels` dimension
- of `image` must be 1.
-* `rgb`: Output an RGB JPEG image. The `channels` dimension
- of `image` must be 3.
-
-If `format` is not specified or is the empty string, a default format is picked
-based on the number of channels in `image`:
-
-* 1: Output a grayscale image.
-* 3: Output an RGB image.
-
-##### Args:
-
-
-* <b>`image`</b>: A `Tensor` of type `uint8`.
- 3-D with shape `[height, width, channels]`.
-* <b>`format`</b>: An optional `string` from: `"", "grayscale", "rgb"`. Defaults to `""`.
- Per pixel image format.
-* <b>`quality`</b>: An optional `int`. Defaults to `95`.
- Quality of the compression from 0 to 100 (higher is better and slower).
-* <b>`progressive`</b>: An optional `bool`. Defaults to `False`.
- If True, create a JPEG that loads progressively (coarse to fine).
-* <b>`optimize_size`</b>: An optional `bool`. Defaults to `False`.
- If True, spend CPU/RAM to reduce size with no quality change.
-* <b>`chroma_downsampling`</b>: An optional `bool`. Defaults to `True`.
- See http://en.wikipedia.org/wiki/Chroma_subsampling.
-* <b>`density_unit`</b>: An optional `string` from: `"in", "cm"`. Defaults to `"in"`.
- Unit used to specify `x_density` and `y_density`:
- pixels per inch (`'in'`) or centimeter (`'cm'`).
-* <b>`x_density`</b>: An optional `int`. Defaults to `300`.
- Horizontal pixels per density unit.
-* <b>`y_density`</b>: An optional `int`. Defaults to `300`.
- Vertical pixels per density unit.
-* <b>`xmp_metadata`</b>: An optional `string`. Defaults to `""`.
- If not empty, embed this XMP metadata in the image header.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. 0-D. JPEG-encoded image.
-
-
-- - -
-
-### `tf.image.decode_png(contents, channels=None, dtype=None, name=None)` {#decode_png}
-
-Decode a PNG-encoded image to a uint8 or uint16 tensor.
-
-The attr `channels` indicates the desired number of color channels for the
-decoded image.
-
-Accepted values are:
-
-* 0: Use the number of channels in the PNG-encoded image.
-* 1: output a grayscale image.
-* 3: output an RGB image.
-* 4: output an RGBA image.
-
-If needed, the PNG-encoded image is transformed to match the requested number
-of color channels.
-
-##### Args:
-
-
-* <b>`contents`</b>: A `Tensor` of type `string`. 0-D. The PNG-encoded image.
-* <b>`channels`</b>: An optional `int`. Defaults to `0`.
- Number of color channels for the decoded image.
-* <b>`dtype`</b>: An optional `tf.DType` from: `tf.uint8, tf.uint16`. Defaults to `tf.uint8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `dtype`. 3-D with shape `[height, width, channels]`.
-
-
-- - -
-
-### `tf.image.encode_png(image, compression=None, name=None)` {#encode_png}
-
-PNG-encode an image.
-
-`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]`
-where `channels` is:
-
-* 1: for grayscale.
-* 2: for grayscale + alpha.
-* 3: for RGB.
-* 4: for RGBA.
-
-The ZLIB compression level, `compression`, can be -1 for the PNG-encoder
-default or a value from 0 to 9. 9 is the highest compression level, generating
-the smallest output, but is slower.
-
-##### Args:
-
-
-* <b>`image`</b>: A `Tensor`. Must be one of the following types: `uint8`, `uint16`.
- 3-D with shape `[height, width, channels]`.
-* <b>`compression`</b>: An optional `int`. Defaults to `-1`. Compression level.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. 0-D. PNG-encoded image.
-
-
-- - -
-
-### `tf.image.decode_image(contents, channels=None, name=None)` {#decode_image}
-
-Convenience function for `decode_gif`, `decode_jpeg`, and `decode_png`.
-Detects whether an image is a GIF, JPEG, or PNG, and performs the appropriate
-operation to convert the input bytes `string` into a `Tensor` of type `uint8`.
-
-Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as
-opposed to `decode_jpeg` and `decode_png`, which return 3-D arrays
-`[height, width, num_channels]`. Make sure to take this into account when
-constructing your graph if you are intermixing GIF files with JPEG and/or PNG
-files.
-
-##### Args:
-
-
-* <b>`contents`</b>: 0-D `string`. The encoded image bytes.
-* <b>`channels`</b>: An optional `int`. Defaults to `0`. Number of color channels for
- the decoded image.
-* <b>`name`</b>: A name for the operation (optional)
-
-##### Returns:
-
- `Tensor` with type `uint8` with shape `[height, width, num_channels]` for
- JPEG and PNG images and shape `[num_frames, height, width, 3]` for GIF
- images.
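-
-Because the returned rank depends on the input format, a minimal sketch (with
-a hypothetical path) is:
-
-```python
-import tensorflow as tf
-
-contents = tf.read_file('input_image')  # hypothetical path; format is detected
-image = tf.image.decode_image(contents)
-# The static shape is not fully known here: the result is 3-D for JPEG/PNG
-# and 4-D for GIF, so rank-sensitive downstream ops need an explicit check.
-```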
-
-
-- - -
-
-### `tf.image.resize_images(images, size, method=0, align_corners=False)` {#resize_images}
-
-Resize `images` to `size` using the specified `method`.
-
-Resized images will be distorted if their original aspect ratio is not
-the same as `size`. To avoid distortions see
-[`resize_image_with_crop_or_pad`](#resize_image_with_crop_or_pad).
-
-`method` can be one of:
-
-* <b>`ResizeMethod.BILINEAR`</b>: [Bilinear interpolation.](https://en.wikipedia.org/wiki/Bilinear_interpolation)
-* <b>`ResizeMethod.NEAREST_NEIGHBOR`</b>: [Nearest neighbor interpolation.](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation)
-* <b>`ResizeMethod.BICUBIC`</b>: [Bicubic interpolation.](https://en.wikipedia.org/wiki/Bicubic_interpolation)
-* <b>`ResizeMethod.AREA`</b>: Area interpolation.
-
-##### Args:
-
-
-* <b>`images`</b>: 4-D Tensor of shape `[batch, height, width, channels]` or
- 3-D Tensor of shape `[height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`method`</b>: ResizeMethod. Defaults to `ResizeMethod.BILINEAR`.
-* <b>`align_corners`</b>: bool. If true, exactly align all 4 corners of the input and
- output. Defaults to `false`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `images` is incompatible with the
- shape arguments to this function
-* <b>`ValueError`</b>: if `size` has invalid shape or type.
-* <b>`ValueError`</b>: if an unsupported resize method is specified.
-
-##### Returns:
-
- If `images` was 4-D, a 4-D float Tensor of shape
- `[batch, new_height, new_width, channels]`.
- If `images` was 3-D, a 3-D float Tensor of shape
- `[new_height, new_width, channels]`.
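-
-For example, a minimal sketch using a placeholder input:
-
-```python
-import tensorflow as tf
-
-image = tf.zeros([480, 640, 3], dtype=tf.float32)  # hypothetical input
-resized = tf.image.resize_images(image, [299, 299],
-                                 method=tf.image.ResizeMethod.BICUBIC)
-# A 3-D input yields a 3-D float result of shape [299, 299, 3].
-```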
-
-
-- - -
-
-### `tf.image.resize_area(images, size, align_corners=None, name=None)` {#resize_area}
-
-Resize `images` to `size` using area interpolation.
-
-Input images can be of different types but output images are always float.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
- If true, rescale input by (new_height - 1) / (height - 1), which
- exactly aligns the 4 corners of images and resized images. If false, rescale
- by new_height / height. The width dimension is treated similarly.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`. 4-D with shape
- `[batch, new_height, new_width, channels]`.
-
-
-- - -
-
-### `tf.image.resize_bicubic(images, size, align_corners=None, name=None)` {#resize_bicubic}
-
-Resize `images` to `size` using bicubic interpolation.
-
-Input images can be of different types but output images are always float.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
- If true, rescale input by (new_height - 1) / (height - 1), which
- exactly aligns the 4 corners of images and resized images. If false, rescale
- by new_height / height. The width dimension is treated similarly.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`. 4-D with shape
- `[batch, new_height, new_width, channels]`.
-
-
-- - -
-
-### `tf.image.resize_bilinear(images, size, align_corners=None, name=None)` {#resize_bilinear}
-
-Resize `images` to `size` using bilinear interpolation.
-
-Input images can be of different types but output images are always float.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
- If true, rescale input by (new_height - 1) / (height - 1), which
- exactly aligns the 4 corners of images and resized images. If false, rescale
- by new_height / height. The width dimension is treated similarly.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`. 4-D with shape
- `[batch, new_height, new_width, channels]`.
-
-
-- - -
-
-### `tf.image.resize_nearest_neighbor(images, size, align_corners=None, name=None)` {#resize_nearest_neighbor}
-
-Resize `images` to `size` using nearest neighbor interpolation.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`size`</b>: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The
- new size for the images.
-* <b>`align_corners`</b>: An optional `bool`. Defaults to `False`.
- If true, rescale input by (new_height - 1) / (height - 1), which
- exactly aligns the 4 corners of images and resized images. If false, rescale
- by new_height / height. The width dimension is treated similarly.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`. 4-D with shape
- `[batch, new_height, new_width, channels]`.
-
-
-- - -
-
-### `tf.image.resize_image_with_crop_or_pad(image, target_height, target_width)` {#resize_image_with_crop_or_pad}
-
-Crops and/or pads an image to a target width and height.
-
-Resizes an image to a target width and height by either centrally
-cropping the image or padding it evenly with zeros.
-
-If `width` or `height` is greater than the specified `target_width` or
-`target_height` respectively, this op centrally crops along that dimension.
-If `width` or `height` is smaller than the specified `target_width` or
-`target_height` respectively, this op centrally pads with 0 along that
-dimension.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor of shape `[height, width, channels]`
-* <b>`target_height`</b>: Target height.
-* <b>`target_width`</b>: Target width.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `target_height` or `target_width` are zero or negative.
-
-##### Returns:
-
- Cropped and/or padded image of shape
- `[target_height, target_width, channels]`
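-
-For example, a minimal sketch that makes a non-square placeholder image
-square, padding one dimension and cropping the other:
-
-```python
-import tensorflow as tf
-
-image = tf.zeros([100, 150, 3], dtype=tf.float32)  # hypothetical input
-# Height is padded from 100 up to 120; width is cropped from 150 down to 120.
-square = tf.image.resize_image_with_crop_or_pad(image, 120, 120)
-```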
-
-
-- - -
-
-### `tf.image.central_crop(image, central_fraction)` {#central_crop}
-
-Crop the central region of the image.
-
-Remove the outer parts of an image but retain the central region of the image
-along each dimension. If we specify `central_fraction = 0.5`, this function
-returns the region marked with "X" in the below diagram.
-
- --------
- | |
- | XXXX |
- | XXXX |
- | | where "X" is the central 50% of the image.
- --------
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D float Tensor of shape [height, width, depth]
-* <b>`central_fraction`</b>: float (0, 1], fraction of size to crop
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `central_fraction` is not within (0, 1].
-
-##### Returns:
-
- 3-D float Tensor
-
-
-- - -
-
-### `tf.image.pad_to_bounding_box(image, offset_height, offset_width, target_height, target_width)` {#pad_to_bounding_box}
-
-Pad `image` with zeros to the specified `height` and `width`.
-
-Adds `offset_height` rows of zeros on top, `offset_width` columns of
-zeros on the left, and then pads the image on the bottom and right
-with zeros until it has dimensions `target_height`, `target_width`.
-
-This op does nothing if `offset_*` is zero and the image already has size
-`target_height` by `target_width`.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor with shape `[height, width, channels]`
-* <b>`offset_height`</b>: Number of rows of zeros to add on top.
-* <b>`offset_width`</b>: Number of columns of zeros to add on the left.
-* <b>`target_height`</b>: Height of output image.
-* <b>`target_width`</b>: Width of output image.
-
-##### Returns:
-
- 3-D tensor of shape `[target_height, target_width, channels]`
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `image` is incompatible with the `offset_*` or
- `target_*` arguments, or either `offset_height` or `offset_width` is
- negative.
-
-
-- - -
-
-### `tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width)` {#crop_to_bounding_box}
-
-Crops an image to a specified bounding box.
-
-This op cuts a rectangular part out of `image`. The top-left corner of the
-returned image is at `offset_height, offset_width` in `image`, and its
-lower-right corner is at
-`offset_height + target_height, offset_width + target_width`.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor with shape `[height, width, channels]`
-* <b>`offset_height`</b>: Vertical coordinate of the top-left corner of the result in
- the input.
-* <b>`offset_width`</b>: Horizontal coordinate of the top-left corner of the result in
- the input.
-* <b>`target_height`</b>: Height of the result.
-* <b>`target_width`</b>: Width of the result.
-
-##### Returns:
-
- 3-D tensor of image with shape `[target_height, target_width, channels]`
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of `image` is incompatible with the `offset_*` or
- `target_*` arguments, or either `offset_height` or `offset_width` is
- negative, or either `target_height` or `target_width` is not positive.
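-
-For example, a minimal sketch with a placeholder image:
-
-```python
-import tensorflow as tf
-
-image = tf.zeros([100, 100, 3], dtype=tf.float32)  # hypothetical input
-# Take the 40x60 region whose top-left corner is at row 10, column 20.
-patch = tf.image.crop_to_bounding_box(image, offset_height=10, offset_width=20,
-                                      target_height=40, target_width=60)
-```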
-
-
-- - -
-
-### `tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None)` {#extract_glimpse}
-
-Extracts a glimpse from the input tensor.
-
-Returns a set of windows called glimpses extracted at location
-`offsets` from the input tensor. If a window only partially
-overlaps the input, the non-overlapping areas are filled with
-random noise.
-
-The result is a 4-D tensor of shape `[batch_size, glimpse_height,
-glimpse_width, channels]`. The channels and batch dimensions are the
-same as those of the input tensor. The height and width of the output
-windows are specified in the `size` parameter.
-
-The arguments `normalized` and `centered` control how the windows are built:
-
-* If the coordinates are normalized but not centered, 0.0 and 1.0
- correspond to the minimum and maximum of each height and width
- dimension.
-* If the coordinates are both normalized and centered, they range from
- -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper
- left corner, the lower right corner is located at (1.0, 1.0) and the
- center is at (0, 0).
-* If the coordinates are not normalized they are interpreted as
- numbers of pixels.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `float32`.
- A 4-D float tensor of shape `[batch_size, height, width, channels]`.
-* <b>`size`</b>: A `Tensor` of type `int32`.
- A 1-D tensor of 2 elements containing the size of the glimpses
- to extract. The glimpse height must be specified first, followed
- by the glimpse width.
-* <b>`offsets`</b>: A `Tensor` of type `float32`.
- A 2-D float tensor of shape `[batch_size, 2]` containing
- the x, y locations of the center of each window.
-* <b>`centered`</b>: An optional `bool`. Defaults to `True`.
- indicates if the offset coordinates are centered relative to
- the image, in which case the (0, 0) offset is relative to the center
- of the input images. If false, the (0,0) offset corresponds to the
- upper left corner of the input images.
-* <b>`normalized`</b>: An optional `bool`. Defaults to `True`.
- indicates if the offset coordinates are normalized.
-* <b>`uniform_noise`</b>: An optional `bool`. Defaults to `True`.
- indicates if the noise should be generated using a
- uniform distribution or a Gaussian distribution.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
- A tensor representing the glimpses `[batch_size,
- glimpse_height, glimpse_width, channels]`.
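-
-For example, with the default centered, normalized coordinates, a minimal
-sketch extracting a glimpse at the center of a placeholder batch:
-
-```python
-import tensorflow as tf
-
-images = tf.zeros([1, 100, 100, 3], dtype=tf.float32)  # hypothetical batch
-size = tf.constant([28, 28])         # glimpse height first, then width
-offsets = tf.constant([[0.0, 0.0]])  # with the defaults, (0, 0) is the center
-glimpses = tf.image.extract_glimpse(images, size, offsets)
-# glimpses has shape [1, 28, 28, 3].
-```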
-
-
-- - -
-
-### `tf.image.crop_and_resize(image, boxes, box_ind, crop_size, method=None, extrapolation_value=None, name=None)` {#crop_and_resize}
-
-Extracts crops from the input image tensor and bilinearly resizes them
-(possibly with aspect ratio change) to a common output size specified by
-`crop_size`. This is more general than the `crop_to_bounding_box` op, which
-extracts a fixed-size slice from the input image and does not allow resizing
-or aspect ratio change.
-
-Returns a tensor with `crops` from the input `image` at positions defined by the
-bounding box locations in `boxes`. The cropped boxes are all resized (with
-bilinear interpolation) to a fixed `size = [crop_height, crop_width]`. The
-result is a 4-D tensor `[num_boxes, crop_height, crop_width, depth]`.
-
-##### Args:
-
-
-* <b>`image`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`, `half`, `float32`, `float64`.
- A 4-D tensor of shape `[batch, image_height, image_width, depth]`.
- Both `image_height` and `image_width` need to be positive.
-* <b>`boxes`</b>: A `Tensor` of type `float32`.
- A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor
- specifies the coordinates of a box in the `box_ind[i]` image and is specified
- in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of
- `y` is mapped to the image coordinate at `y * (image_height - 1)`, so the
- `[0, 1]` interval of normalized image height is mapped to
- `[0, image_height - 1]` in image height coordinates. We do allow y1 > y2, in
- which case the sampled crop is an up-down flipped version of the original
- image. The width dimension is treated similarly. Normalized coordinates
- outside the `[0, 1]` range are allowed, in which case we use
- `extrapolation_value` to extrapolate the input image values.
-* <b>`box_ind`</b>: A `Tensor` of type `int32`.
- A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.
- The value of `box_ind[i]` specifies the image that the `i`-th box refers to.
-* <b>`crop_size`</b>: A `Tensor` of type `int32`.
- A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All
- cropped image patches are resized to this size. The aspect ratio of the image
- content is not preserved. Both `crop_height` and `crop_width` need to be
- positive.
-* <b>`method`</b>: An optional `string` from: `"bilinear"`. Defaults to `"bilinear"`.
- A string specifying the interpolation method. Only 'bilinear' is
- supported for now.
-* <b>`extrapolation_value`</b>: An optional `float`. Defaults to `0`.
- Value used for extrapolation, when applicable.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32`.
- A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
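-
-For example, a minimal sketch cropping one box from each image of a
-placeholder batch:
-
-```python
-import tensorflow as tf
-
-image = tf.zeros([2, 200, 200, 3], dtype=tf.float32)  # hypothetical batch
-boxes = tf.constant([[0.0, 0.0, 0.5, 0.5],            # top-left quarter
-                     [0.25, 0.25, 0.75, 0.75]])       # central region
-box_ind = tf.constant([0, 1])                         # one box per image
-crops = tf.image.crop_and_resize(image, boxes, box_ind, crop_size=[64, 64])
-# crops has shape [2, 64, 64, 3] and dtype float32.
-```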
-
-
-- - -
-
-### `tf.image.flip_up_down(image)` {#flip_up_down}
-
-Flip an image vertically (upside down).
-
-Outputs the contents of `image` flipped along the first dimension, which is
-`height`.
-
-See also `reverse()`.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-
-##### Returns:
-
- A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
-
-- - -
-
-### `tf.image.random_flip_up_down(image, seed=None)` {#random_flip_up_down}
-
-Randomly flips an image vertically (upside down).
-
-With a 1 in 2 chance, outputs the contents of `image` flipped along the first
-dimension, which is `height`. Otherwise, outputs the image as-is.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-
-##### Returns:
-
- A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
-
-- - -
-
-### `tf.image.flip_left_right(image)` {#flip_left_right}
-
-Flip an image horizontally (left to right).
-
-Outputs the contents of `image` flipped along the second dimension, which is
-`width`.
-
-See also `reverse()`.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-
-##### Returns:
-
- A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
-
-- - -
-
-### `tf.image.random_flip_left_right(image, seed=None)` {#random_flip_left_right}
-
-Randomly flip an image horizontally (left to right).
-
-With a 1 in 2 chance, outputs the contents of `image` flipped along the
-second dimension, which is `width`. Otherwise, outputs the image as-is.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-
-##### Returns:
-
- A 3-D tensor of the same type and shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
-
-- - -
-
-### `tf.image.transpose_image(image)` {#transpose_image}
-
-Transpose an image by swapping the first and second dimensions.
-
-See also `transpose()`.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor of shape `[height, width, channels]`
-
-##### Returns:
-
- A 3-D tensor of shape `[width, height, channels]`
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is not supported.
-
-
-- - -
-
-### `tf.image.rot90(image, k=1, name=None)` {#rot90}
-
-Rotate an image counter-clockwise by 90 degrees.
-
-##### Args:
-
-
-* <b>`image`</b>: A 3-D tensor of shape `[height, width, channels]`.
-* <b>`k`</b>: A scalar integer. The number of times the image is rotated by 90 degrees.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A rotated 3-D tensor of the same type and shape as `image`.
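-
-For example, a minimal sketch combining the flip and rotation ops on a
-placeholder image:
-
-```python
-import tensorflow as tf
-
-image = tf.zeros([100, 150, 3], dtype=tf.float32)  # hypothetical input
-rotated = tf.image.rot90(image, k=2)               # 180-degree rotation
-mirrored = tf.image.flip_left_right(image)
-flipped = tf.image.flip_up_down(image)
-```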
-
-
-
-- - -
-
-### `tf.image.rgb_to_grayscale(images, name=None)` {#rgb_to_grayscale}
-
-Converts one or more images from RGB to Grayscale.
-
-Outputs a tensor of the same `DType` and rank as `images`. The size of the
-last dimension of the output is 1, containing the Grayscale value of the
-pixels.
-
-##### Args:
-
-
-* <b>`images`</b>: The RGB tensor to convert. Last dimension must have size 3 and
- should contain RGB values.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The converted grayscale image(s).
-
-
-- - -
-
-### `tf.image.grayscale_to_rgb(images, name=None)` {#grayscale_to_rgb}
-
-Converts one or more images from Grayscale to RGB.
-
-Outputs a tensor of the same `DType` and rank as `images`. The size of the
-last dimension of the output is 3, containing the RGB value of the pixels.
-
-##### Args:
-
-
-* <b>`images`</b>: The Grayscale tensor to convert. Last dimension must be size 1.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The converted RGB image(s).
-
-
-- - -
-
-### `tf.image.hsv_to_rgb(images, name=None)` {#hsv_to_rgb}
-
-Convert one or more images from HSV to RGB.
-
-Outputs a tensor of the same shape as the `images` tensor, containing the RGB
-value of the pixels. The output is only well defined if the values in
-`images` are in `[0,1]`.
-
-See `rgb_to_hsv` for a description of the HSV encoding.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 1-D or higher rank. HSV data to convert. Last dimension must be size 3.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`. `images` converted to RGB.
-
-
-- - -
-
-### `tf.image.rgb_to_hsv(images, name=None)` {#rgb_to_hsv}
-
-Converts one or more images from RGB to HSV.
-
-Outputs a tensor of the same shape as the `images` tensor, containing the HSV
-value of the pixels. The output is only well defined if the values in
-`images` are in `[0,1]`.
-
-`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and
-`output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0
-corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 1-D or higher rank. RGB data to convert. Last dimension must be size 3.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`. `images` converted to HSV.
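-
-For example, a minimal sketch that extracts the hue channel of a random float
-image and converts back:
-
-```python
-import tensorflow as tf
-
-rgb = tf.random_uniform([64, 64, 3])   # float values in [0, 1)
-hsv = tf.image.rgb_to_hsv(rgb)
-hue = hsv[..., 0]                      # hue channel, in [0, 1]
-rgb_again = tf.image.hsv_to_rgb(hsv)
-```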
-
-
-- - -
-
-### `tf.image.convert_image_dtype(image, dtype, saturate=False, name=None)` {#convert_image_dtype}
-
-Convert `image` to `dtype`, scaling its values if needed.
-
-Images that are represented using floating point values are expected to have
-values in the range `[0,1)`. Image data stored in integer data types is
-expected to have values in the range `[0,MAX]`, where `MAX` is the largest
-positive representable number for the data type.
-
-This op converts between data types, scaling the values appropriately before
-casting.
-
-Note that converting from floating point inputs to integer types may lead to
-over/underflow problems. Set `saturate` to `True` to avoid such problems in
-problematic conversions. If enabled, saturation will clip the output into the
-allowed range before performing a potentially dangerous cast (and only before
-performing such a cast, i.e., when casting from a floating point to an integer
-type, and when casting from a signed to an unsigned type; `saturate` has no
-effect on casts between floats, or on casts that increase the type's range).
-
-##### Args:
-
-
-* <b>`image`</b>: An image.
-* <b>`dtype`</b>: A `DType` to convert `image` to.
-* <b>`saturate`</b>: If `True`, clip the input before casting (if necessary).
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- `image`, converted to `dtype`.
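-
-For example, a minimal sketch of the common uint8/float round trip with a
-placeholder image:
-
-```python
-import tensorflow as tf
-
-image_u8 = tf.zeros([64, 64, 3], dtype=tf.uint8)              # values in [0, 255]
-image_f = tf.image.convert_image_dtype(image_u8, tf.float32)  # scaled into [0, 1)
-# Casting back down is the potentially lossy direction; saturate to be safe.
-image_back = tf.image.convert_image_dtype(image_f, tf.uint8, saturate=True)
-```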
-
-
-- - -
-
-### `tf.image.adjust_brightness(image, delta)` {#adjust_brightness}
-
-Adjust the brightness of RGB or Grayscale images.
-
-This is a convenience method that converts an RGB image to float
-representation, adjusts its brightness, and then converts it back to the
-original data type. If several adjustments are chained it is advisable to
-minimize the number of redundant conversions.
-
-The value `delta` is added to all components of the tensor `image`. Both
-`image` and `delta` are converted to `float` before adding (and `image` is
-scaled appropriately if it is in fixed-point representation). For regular
-images, `delta` should be in the range `[0,1)`, as it is added to the image in
-floating point representation, where pixel values are in the `[0,1)` range.
-
-##### Args:
-
-
-* <b>`image`</b>: A tensor.
-* <b>`delta`</b>: A scalar. Amount to add to the pixel values.
-
-##### Returns:
-
- A brightness-adjusted tensor of the same shape and type as `image`.
-
-
-- - -
-
-### `tf.image.random_brightness(image, max_delta, seed=None)` {#random_brightness}
-
-Adjust the brightness of images by a random factor.
-
-Equivalent to `adjust_brightness()` using a `delta` randomly picked in the
-interval `[-max_delta, max_delta)`.
-
-##### Args:
-
-
-* <b>`image`</b>: An image.
-* <b>`max_delta`</b>: float, must be non-negative.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-
-##### Returns:
-
- The brightness-adjusted image.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `max_delta` is negative.
-
-
-- - -
-
-### `tf.image.adjust_contrast(images, contrast_factor)` {#adjust_contrast}
-
-Adjust contrast of RGB or grayscale images.
-
-This is a convenience method that converts an RGB image to float
-representation, adjusts its contrast, and then converts it back to the
-original data type. If several adjustments are chained it is advisable to
-minimize the number of redundant conversions.
-
-`images` is a tensor of at least 3 dimensions. The last 3 dimensions are
-interpreted as `[height, width, channels]`. The other dimensions only
-represent a collection of images, such as `[batch, height, width, channels]`.
-
-Contrast is adjusted independently for each channel of each image.
-
-For each channel, this Op computes the mean of the image pixels in the
-channel and then adjusts each component `x` of each pixel to
-`(x - mean) * contrast_factor + mean`.
-
-##### Args:
-
-
-* <b>`images`</b>: Images to adjust. At least 3-D.
-* <b>`contrast_factor`</b>: A float multiplier for adjusting contrast.
-
-##### Returns:
-
- The contrast-adjusted image or images.
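-
-For example, working through the formula on a tiny single-channel image whose
-mean is 0.5:
-
-```python
-import tensorflow as tf
-
-# A 2x2 single-channel image; the channel mean is 0.5.
-image = tf.constant([[[0.2], [0.4]],
-                     [[0.6], [0.8]]])
-# Each component x becomes (x - 0.5) * 2.0 + 0.5,
-# i.e. [[-0.1, 0.3], [0.7, 1.1]]; note the result is not clipped.
-stretched = tf.image.adjust_contrast(image, contrast_factor=2.0)
-```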
-
-
-- - -
-
-### `tf.image.random_contrast(image, lower, upper, seed=None)` {#random_contrast}
-
-Adjust the contrast of an image by a random factor.
-
-Equivalent to `adjust_contrast()` but uses a `contrast_factor` randomly
-picked in the interval `[lower, upper]`.
-
-##### Args:
-
-
-* <b>`image`</b>: An image tensor with 3 or more dimensions.
-* <b>`lower`</b>: float. Lower bound for the random contrast factor.
-* <b>`upper`</b>: float. Upper bound for the random contrast factor.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-
-##### Returns:
-
- The contrast-adjusted tensor.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `upper <= lower` or if `lower < 0`.
-
-
-- - -
-
-### `tf.image.adjust_hue(image, delta, name=None)` {#adjust_hue}
-
-Adjust hue of an RGB image.
-
-This is a convenience method that converts an RGB image to float
-representation, converts it to HSV, adds an offset to the hue channel, converts
-back to RGB and then back to the original data type. If several adjustments
-are chained it is advisable to minimize the number of redundant conversions.
-
-`image` is an RGB image. The image hue is adjusted by converting the
-image to HSV and rotating the hue channel (H) by
-`delta`. The image is then converted back to RGB.
-
-`delta` must be in the interval `[-1, 1]`.
-
-##### Args:
-
-
-* <b>`image`</b>: RGB image or images. Size of the last dimension must be 3.
-* <b>`delta`</b>: float. How much to add to the hue channel.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- Adjusted image(s), same shape and DType as `image`.
-
-
-- - -
-
-### `tf.image.random_hue(image, max_delta, seed=None)` {#random_hue}
-
-Adjust the hue of an RGB image by a random factor.
-
-Equivalent to `adjust_hue()` but uses a `delta` randomly
-picked in the interval `[-max_delta, max_delta]`.
-
-`max_delta` must be in the interval `[0, 0.5]`.
-
-##### Args:
-
-
-* <b>`image`</b>: RGB image or images. Size of the last dimension must be 3.
-* <b>`max_delta`</b>: float. Maximum value for the random delta.
-* <b>`seed`</b>: An operation-specific seed. It will be used in conjunction
- with the graph-level seed to determine the real seeds that will be
- used in this operation. Please see the documentation of
- set_random_seed for its interaction with the graph-level random seed.
-
-##### Returns:
-
- 3-D float tensor of shape `[height, width, channels]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `max_delta` is invalid.
-
-
-- - -
-
-### `tf.image.adjust_gamma(image, gamma=1, gain=1)` {#adjust_gamma}
-
-Performs Gamma Correction on the input image.
-
-Also known as Power Law Transform, this function transforms the input image
-pixelwise according to the equation `Out = gain * In**gamma` after scaling
-each pixel to the range 0 to 1.
-
-##### Args:
-
-
-* <b>`image`</b>: A `Tensor`.
-* <b>`gamma`</b>: A scalar. A non-negative real number.
-* <b>`gain`</b>: A scalar. The constant multiplier.
-
-##### Returns:
-
-  A `Tensor`. The gamma-corrected output image.
-
-##### Notes:
-
-  For gamma greater than 1, the histogram will shift towards the left and the
-  output image will be darker than the input image. For gamma less than 1,
-  the histogram will shift towards the right and the output image will be
-  brighter than the input image.
-
-##### References:
-
-  [1] http://en.wikipedia.org/wiki/Gamma_correction
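-
-For example, a minimal sketch on a placeholder float image:
-
-```python
-import tensorflow as tf
-
-image = tf.zeros([64, 64, 3], dtype=tf.float32)    # hypothetical input
-darker = tf.image.adjust_gamma(image, gamma=2.0)   # gamma > 1 darkens
-brighter = tf.image.adjust_gamma(image, gamma=0.5) # gamma < 1 brightens
-```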
-
-
-- - -
-
-### `tf.image.adjust_saturation(image, saturation_factor, name=None)` {#adjust_saturation}
-
-Adjust saturation of an RGB image.
-
-This is a convenience method that converts an RGB image to float
-representation, converts it to HSV, scales the saturation channel,
-converts back to RGB and then back to the original data type. If several
-adjustments are chained it is advisable to minimize the number of redundant
-conversions.
-
-`image` is an RGB image. The image saturation is adjusted by converting the
-image to HSV and multiplying the saturation (S) channel by
-`saturation_factor` and clipping. The image is then converted back to RGB.
-
-##### Args:
-
-
-* <b>`image`</b>: RGB image or images. Size of the last dimension must be 3.
-* <b>`saturation_factor`</b>: float. Factor to multiply the saturation by.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- Adjusted image(s), same shape and DType as `image`.
-
-
-- - -
-
-### `tf.image.random_saturation(image, lower, upper, seed=None)` {#random_saturation}
-
-Adjust the saturation of an RGB image by a random factor.
-
-Equivalent to `adjust_saturation()` but uses a `saturation_factor` randomly
-picked in the interval `[lower, upper]`.
-
-##### Args:
-
-
-* <b>`image`</b>: RGB image or images. Size of the last dimension must be 3.
-* <b>`lower`</b>: float. Lower bound for the random saturation factor.
-* <b>`upper`</b>: float. Upper bound for the random saturation factor.
-* <b>`seed`</b>: An operation-specific seed. It will be used in conjunction
- with the graph-level seed to determine the real seeds that will be
- used in this operation. Please see the documentation of
- set_random_seed for its interaction with the graph-level random seed.
-
-##### Returns:
-
- Adjusted image(s), same shape and DType as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `upper <= lower` or if `lower < 0`.
-
-
-- - -
-
-### `tf.image.per_image_standardization(image)` {#per_image_standardization}
-
-Linearly scales `image` to have zero mean and unit variance.
-
-This op computes `(x - mean) / adjusted_stddev`, where `mean` is the average
-of all values in `image`, and
-`adjusted_stddev = max(stddev, 1.0/sqrt(image.NumElements()))`.
-
-`stddev` is the standard deviation of all values in `image`. It is capped
-away from zero to protect against division by 0 when handling uniform images.
-
-##### Args:
-
-
-* <b>`image`</b>: 3-D tensor of shape `[height, width, channels]`.
-
-##### Returns:
-
- The standardized image with same shape as `image`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `image` is incompatible with this function.
-
-
-- - -
-
-### `tf.image.draw_bounding_boxes(images, boxes, name=None)` {#draw_bounding_boxes}
-
-Draw bounding boxes on a batch of images.
-
-Outputs a copy of `images` but draws on top of the pixels zero or more bounding
-boxes specified by the locations in `boxes`. The coordinates of each
-bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The
-bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and
-height of the underlying image.
-
-For example, if an image is 100 x 200 pixels and the bounding box is
-`[0.1, 0.2, 0.5, 0.9]`, the upper-left and lower-right coordinates of the
-bounding box will be `(10, 40)` to `(50, 180)`.
-
-Parts of the bounding box may fall outside the image.
-
-##### Args:
-
-
-* <b>`images`</b>: A `Tensor`. Must be one of the following types: `float32`, `half`.
- 4-D with shape `[batch, height, width, depth]`. A batch of images.
-* <b>`boxes`</b>: A `Tensor` of type `float32`.
- 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding
- boxes.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `images`.
- 4-D with the same shape as `images`. The batch of input images with
- bounding boxes drawn on the images.
-
-
-- - -
-
-### `tf.image.non_max_suppression(boxes, scores, max_output_size, iou_threshold=None, name=None)` {#non_max_suppression}
-
-Greedily selects a subset of bounding boxes in descending order of score,
-pruning away boxes that have high intersection-over-union (IOU) overlap
-with previously selected boxes. Bounding boxes are supplied as
-[y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any
-diagonal pair of box corners, and the coordinates can be provided as
-normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this
-algorithm is agnostic to where the origin is in the coordinate system and is
-invariant to orthogonal transformations and translations of it; thus
-translations or reflections of the coordinate system result in the same
-boxes being selected by the algorithm.
-
-The output of this operation is a set of integers indexing into the input
-collection of bounding boxes representing the selected boxes. The bounding
-box coordinates corresponding to the selected indices can then be obtained
-using the `tf.gather` operation. For example:
-
- selected_indices = tf.image.non_max_suppression(
- boxes, scores, max_output_size, iou_threshold)
- selected_boxes = tf.gather(boxes, selected_indices)
-
-##### Args:
-
-
-* <b>`boxes`</b>: A `Tensor` of type `float32`.
- A 2-D float tensor of shape `[num_boxes, 4]`.
-* <b>`scores`</b>: A `Tensor` of type `float32`.
- A 1-D float tensor of shape `[num_boxes]` representing a single
- score corresponding to each box (each row of boxes).
-* <b>`max_output_size`</b>: A `Tensor` of type `int32`.
- A scalar integer tensor representing the maximum number of
- boxes to be selected by non max suppression.
-* <b>`iou_threshold`</b>: An optional `float`. Defaults to `0.5`.
- A float representing the threshold for deciding whether boxes
- overlap too much with respect to IOU.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int32`.
- A 1-D integer tensor of shape `[M]` representing the selected
- indices from the boxes tensor, where `M <= max_output_size`.
-
-
-- - -
-
-### `tf.image.sample_distorted_bounding_box(image_size, bounding_boxes, seed=None, seed2=None, min_object_covered=None, aspect_ratio_range=None, area_range=None, max_attempts=None, use_image_if_no_bounding_boxes=None, name=None)` {#sample_distorted_bounding_box}
-
-Generate a single randomly distorted bounding box for an image.
-
-Bounding box annotations are often supplied in addition to ground-truth labels
-in image recognition or object localization tasks. A common technique for
-training such a system is to randomly distort an image while preserving
-its content, i.e. *data augmentation*. This Op outputs a randomly distorted
-localization of an object, i.e. bounding box, given an `image_size`,
-`bounding_boxes` and a series of constraints.
-
-The output of this Op is a single bounding box that may be used to crop the
-original image. The output is returned as 3 tensors: `begin`, `size` and
-`bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the
-image. The latter may be supplied to `tf.image.draw_bounding_boxes` to visualize
-what the bounding box looks like.
-
-Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The
-bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and
-height of the underlying image.
-
-For example,
-
-```python
- # Generate a single distorted bounding box.
- begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
- tf.shape(image),
- bounding_boxes=bounding_boxes)
-
- # Draw the bounding box in an image summary.
- image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
- bbox_for_draw)
- tf.image_summary('images_with_box', image_with_box)
-
- # Employ the bounding box to distort the image.
- distorted_image = tf.slice(image, begin, size)
-```
-
-Note that if no bounding box information is available, setting
-`use_image_if_no_bounding_boxes = true` will assume there is a single implicit
-bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is
-false and no bounding boxes are supplied, an error is raised.
-
-##### Args:
-
-
-* <b>`image_size`</b>: A `Tensor`. Must be one of the following types: `uint8`, `int8`, `int16`, `int32`, `int64`.
- 1-D, containing `[height, width, channels]`.
-* <b>`bounding_boxes`</b>: A `Tensor` of type `float32`.
- 3-D with shape `[batch, N, 4]` describing the N bounding boxes
- associated with the image.
-* <b>`seed`</b>: An optional `int`. Defaults to `0`.
- If either `seed` or `seed2` are set to non-zero, the random number
- generator is seeded by the given `seed`. Otherwise, it is seeded by a random
- seed.
-* <b>`seed2`</b>: An optional `int`. Defaults to `0`.
- A second seed to avoid seed collision.
-* <b>`min_object_covered`</b>: An optional `float`. Defaults to `0.1`.
- The cropped area of the image must contain at least this
- fraction of any bounding box supplied. The value of this parameter should be
- non-negative. In the case of 0, the cropped area does not need to overlap
- any of the bounding boxes supplied.
-* <b>`aspect_ratio_range`</b>: An optional list of `floats`. Defaults to `[0.75, 1.33]`.
- The cropped area of the image must have an aspect ratio =
- width / height within this range.
-* <b>`area_range`</b>: An optional list of `floats`. Defaults to `[0.05, 1]`.
- The cropped area of the image must contain a fraction of the
- supplied image within this range.
-* <b>`max_attempts`</b>: An optional `int`. Defaults to `100`.
- Number of attempts at generating a cropped region of the image
- meeting the specified constraints. After `max_attempts` failures, return the entire
- image.
-* <b>`use_image_if_no_bounding_boxes`</b>: An optional `bool`. Defaults to `False`.
- Controls behavior if no bounding boxes supplied.
- If true, assume an implicit bounding box covering the whole input. If false,
- raise an error.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (begin, size, bboxes).
-
-* <b>`begin`</b>: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to
- `tf.slice`.
-* <b>`size`</b>: A `Tensor`. Has the same type as `image_size`. 1-D, containing `[target_height, target_width, -1]`. Provide as input to
- `tf.slice`.
-* <b>`bboxes`</b>: A `Tensor` of type `float32`. 3-D with shape `[1, 1, 4]` containing the distorted bounding box.
- Provide as input to `tf.image.draw_bounding_boxes`.
-
-
-- - -
-
-### `tf.image.total_variation(images, name=None)` {#total_variation}
-
-Calculate and return the total variation for one or more images.
-
-The total variation is the sum of the absolute differences for neighboring
-pixel-values in the input images. This measures how much noise is in the
-images.
-
-This can be used as a loss-function during optimization so as to suppress
-noise in images. If you have a batch of images, then you should calculate
-the scalar loss-value as the sum:
-`loss = tf.reduce_sum(tf.image.total_variation(images))`
-
-This implements the anisotropic 2-D version of the formula described here:
-
-https://en.wikipedia.org/wiki/Total_variation_denoising
-
-##### Args:
-
-
-* <b>`images`</b>: 4-D Tensor of shape `[batch, height, width, channels]` or
- 3-D Tensor of shape `[height, width, channels]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape of `images` is not 3-D or 4-D.
-
-##### Returns:
-
- The total variation of `images`.
-
- If `images` was 4-D, return a 1-D float Tensor of shape `[batch]` with the
- total variation for each image in the batch.
- If `images` was 3-D, return a scalar float with the total variation for
- that image.
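-
-As a sketch of what the op computes, the anisotropic total variation of a
-single 3-D image can be written with basic ops (an illustration, not the
-library's implementation):
-
-```python
-import tensorflow as tf
-
-def total_variation_sketch(image):
-    # Differences between vertically and horizontally adjacent pixels.
-    dh = image[1:, :, :] - image[:-1, :, :]
-    dw = image[:, 1:, :] - image[:, :-1, :]
-    return tf.reduce_sum(tf.abs(dh)) + tf.reduce_sum(tf.abs(dw))
-```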
-
-
diff --git a/tensorflow/g3doc/api_docs/python/index.md b/tensorflow/g3doc/api_docs/python/index.md
deleted file mode 100644
index 0de79c8474..0000000000
--- a/tensorflow/g3doc/api_docs/python/index.md
+++ /dev/null
@@ -1,1204 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# TensorFlow Python reference documentation
-
-* **[Building Graphs](../../api_docs/python/framework.md)**:
- * [`add_to_collection`](../../api_docs/python/framework.md#add_to_collection)
- * [`as_dtype`](../../api_docs/python/framework.md#as_dtype)
- * [`container`](../../api_docs/python/framework.md#container)
- * [`control_dependencies`](../../api_docs/python/framework.md#control_dependencies)
- * [`convert_to_tensor`](../../api_docs/python/framework.md#convert_to_tensor)
- * [`convert_to_tensor_or_indexed_slices`](../../api_docs/python/framework.md#convert_to_tensor_or_indexed_slices)
- * [`convert_to_tensor_or_sparse_tensor`](../../api_docs/python/framework.md#convert_to_tensor_or_sparse_tensor)
- * [`device`](../../api_docs/python/framework.md#device)
- * [`DeviceSpec`](../../api_docs/python/framework.md#DeviceSpec)
- * [`Dimension`](../../api_docs/python/framework.md#Dimension)
- * [`DType`](../../api_docs/python/framework.md#DType)
- * [`get_collection`](../../api_docs/python/framework.md#get_collection)
- * [`get_collection_ref`](../../api_docs/python/framework.md#get_collection_ref)
- * [`get_default_graph`](../../api_docs/python/framework.md#get_default_graph)
- * [`get_seed`](../../api_docs/python/framework.md#get_seed)
- * [`Graph`](../../api_docs/python/framework.md#Graph)
- * [`GraphKeys`](../../api_docs/python/framework.md#GraphKeys)
- * [`import_graph_def`](../../api_docs/python/framework.md#import_graph_def)
- * [`load_file_system_library`](../../api_docs/python/framework.md#load_file_system_library)
- * [`load_op_library`](../../api_docs/python/framework.md#load_op_library)
- * [`name_scope`](../../api_docs/python/framework.md#name_scope)
- * [`NoGradient`](../../api_docs/python/framework.md#NoGradient)
- * [`NotDifferentiable`](../../api_docs/python/framework.md#NotDifferentiable)
- * [`op_scope`](../../api_docs/python/framework.md#op_scope)
- * [`Operation`](../../api_docs/python/framework.md#Operation)
- * [`register_tensor_conversion_function`](../../api_docs/python/framework.md#register_tensor_conversion_function)
- * [`RegisterGradient`](../../api_docs/python/framework.md#RegisterGradient)
- * [`reset_default_graph`](../../api_docs/python/framework.md#reset_default_graph)
- * [`Tensor`](../../api_docs/python/framework.md#Tensor)
- * [`TensorShape`](../../api_docs/python/framework.md#TensorShape)
-
-* **[Asserts and boolean checks.](../../api_docs/python/check_ops.md)**:
- * [`assert_equal`](../../api_docs/python/check_ops.md#assert_equal)
- * [`assert_greater`](../../api_docs/python/check_ops.md#assert_greater)
- * [`assert_greater_equal`](../../api_docs/python/check_ops.md#assert_greater_equal)
- * [`assert_integer`](../../api_docs/python/check_ops.md#assert_integer)
- * [`assert_less`](../../api_docs/python/check_ops.md#assert_less)
- * [`assert_less_equal`](../../api_docs/python/check_ops.md#assert_less_equal)
- * [`assert_negative`](../../api_docs/python/check_ops.md#assert_negative)
- * [`assert_non_negative`](../../api_docs/python/check_ops.md#assert_non_negative)
- * [`assert_non_positive`](../../api_docs/python/check_ops.md#assert_non_positive)
- * [`assert_positive`](../../api_docs/python/check_ops.md#assert_positive)
- * [`assert_proper_iterable`](../../api_docs/python/check_ops.md#assert_proper_iterable)
- * [`assert_rank`](../../api_docs/python/check_ops.md#assert_rank)
- * [`assert_rank_at_least`](../../api_docs/python/check_ops.md#assert_rank_at_least)
- * [`assert_type`](../../api_docs/python/check_ops.md#assert_type)
- * [`is_non_decreasing`](../../api_docs/python/check_ops.md#is_non_decreasing)
- * [`is_numeric_tensor`](../../api_docs/python/check_ops.md#is_numeric_tensor)
- * [`is_strictly_increasing`](../../api_docs/python/check_ops.md#is_strictly_increasing)
-
-* **[Constants, Sequences, and Random Values](../../api_docs/python/constant_op.md)**:
- * [`constant`](../../api_docs/python/constant_op.md#constant)
- * [`fill`](../../api_docs/python/constant_op.md#fill)
- * [`linspace`](../../api_docs/python/constant_op.md#linspace)
- * [`multinomial`](../../api_docs/python/constant_op.md#multinomial)
- * [`ones`](../../api_docs/python/constant_op.md#ones)
- * [`ones_like`](../../api_docs/python/constant_op.md#ones_like)
- * [`random_crop`](../../api_docs/python/constant_op.md#random_crop)
- * [`random_gamma`](../../api_docs/python/constant_op.md#random_gamma)
- * [`random_normal`](../../api_docs/python/constant_op.md#random_normal)
- * [`random_poisson`](../../api_docs/python/constant_op.md#random_poisson)
- * [`random_shuffle`](../../api_docs/python/constant_op.md#random_shuffle)
- * [`random_uniform`](../../api_docs/python/constant_op.md#random_uniform)
- * [`range`](../../api_docs/python/constant_op.md#range)
- * [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- * [`truncated_normal`](../../api_docs/python/constant_op.md#truncated_normal)
- * [`zeros`](../../api_docs/python/constant_op.md#zeros)
- * [`zeros_like`](../../api_docs/python/constant_op.md#zeros_like)
-
-* **[Variables](../../api_docs/python/state_ops.md)**:
- * [`all_variables`](../../api_docs/python/state_ops.md#all_variables)
- * [`assert_variables_initialized`](../../api_docs/python/state_ops.md#assert_variables_initialized)
- * [`assign`](../../api_docs/python/state_ops.md#assign)
- * [`assign_add`](../../api_docs/python/state_ops.md#assign_add)
- * [`assign_sub`](../../api_docs/python/state_ops.md#assign_sub)
- * [`constant_initializer`](../../api_docs/python/state_ops.md#constant_initializer)
- * [`count_up_to`](../../api_docs/python/state_ops.md#count_up_to)
- * [`device`](../../api_docs/python/state_ops.md#device)
- * [`export_meta_graph`](../../api_docs/python/state_ops.md#export_meta_graph)
- * [`fixed_size_partitioner`](../../api_docs/python/state_ops.md#fixed_size_partitioner)
- * [`get_checkpoint_state`](../../api_docs/python/state_ops.md#get_checkpoint_state)
- * [`get_local_variable`](../../api_docs/python/state_ops.md#get_local_variable)
- * [`get_variable`](../../api_docs/python/state_ops.md#get_variable)
- * [`get_variable_scope`](../../api_docs/python/state_ops.md#get_variable_scope)
- * [`global_variables`](../../api_docs/python/state_ops.md#global_variables)
- * [`global_variables_initializer`](../../api_docs/python/state_ops.md#global_variables_initializer)
- * [`import_meta_graph`](../../api_docs/python/state_ops.md#import_meta_graph)
- * [`IndexedSlices`](../../api_docs/python/state_ops.md#IndexedSlices)
- * [`initialize_all_tables`](../../api_docs/python/state_ops.md#initialize_all_tables)
- * [`initialize_all_variables`](../../api_docs/python/state_ops.md#initialize_all_variables)
- * [`initialize_local_variables`](../../api_docs/python/state_ops.md#initialize_local_variables)
- * [`initialize_variables`](../../api_docs/python/state_ops.md#initialize_variables)
- * [`is_variable_initialized`](../../api_docs/python/state_ops.md#is_variable_initialized)
- * [`latest_checkpoint`](../../api_docs/python/state_ops.md#latest_checkpoint)
- * [`local_variables`](../../api_docs/python/state_ops.md#local_variables)
- * [`local_variables_initializer`](../../api_docs/python/state_ops.md#local_variables_initializer)
- * [`make_template`](../../api_docs/python/state_ops.md#make_template)
- * [`min_max_variable_partitioner`](../../api_docs/python/state_ops.md#min_max_variable_partitioner)
- * [`model_variables`](../../api_docs/python/state_ops.md#model_variables)
- * [`moving_average_variables`](../../api_docs/python/state_ops.md#moving_average_variables)
- * [`no_regularizer`](../../api_docs/python/state_ops.md#no_regularizer)
- * [`ones_initializer`](../../api_docs/python/state_ops.md#ones_initializer)
- * [`orthogonal_initializer`](../../api_docs/python/state_ops.md#orthogonal_initializer)
- * [`random_normal_initializer`](../../api_docs/python/state_ops.md#random_normal_initializer)
- * [`random_uniform_initializer`](../../api_docs/python/state_ops.md#random_uniform_initializer)
- * [`report_uninitialized_variables`](../../api_docs/python/state_ops.md#report_uninitialized_variables)
- * [`Saver`](../../api_docs/python/state_ops.md#Saver)
- * [`scatter_add`](../../api_docs/python/state_ops.md#scatter_add)
- * [`scatter_div`](../../api_docs/python/state_ops.md#scatter_div)
- * [`scatter_mul`](../../api_docs/python/state_ops.md#scatter_mul)
- * [`scatter_nd_add`](../../api_docs/python/state_ops.md#scatter_nd_add)
- * [`scatter_nd_sub`](../../api_docs/python/state_ops.md#scatter_nd_sub)
- * [`scatter_nd_update`](../../api_docs/python/state_ops.md#scatter_nd_update)
- * [`scatter_sub`](../../api_docs/python/state_ops.md#scatter_sub)
- * [`scatter_update`](../../api_docs/python/state_ops.md#scatter_update)
- * [`sparse_mask`](../../api_docs/python/state_ops.md#sparse_mask)
- * [`tables_initializer`](../../api_docs/python/state_ops.md#tables_initializer)
- * [`trainable_variables`](../../api_docs/python/state_ops.md#trainable_variables)
- * [`truncated_normal_initializer`](../../api_docs/python/state_ops.md#truncated_normal_initializer)
- * [`uniform_unit_scaling_initializer`](../../api_docs/python/state_ops.md#uniform_unit_scaling_initializer)
- * [`update_checkpoint_state`](../../api_docs/python/state_ops.md#update_checkpoint_state)
- * [`Variable`](../../api_docs/python/state_ops.md#Variable)
- * [`variable_axis_size_partitioner`](../../api_docs/python/state_ops.md#variable_axis_size_partitioner)
- * [`variable_op_scope`](../../api_docs/python/state_ops.md#variable_op_scope)
- * [`variable_scope`](../../api_docs/python/state_ops.md#variable_scope)
- * [`variables_initializer`](../../api_docs/python/state_ops.md#variables_initializer)
- * [`VariableScope`](../../api_docs/python/state_ops.md#VariableScope)
- * [`zeros_initializer`](../../api_docs/python/state_ops.md#zeros_initializer)
-
-* **[Tensor Transformations](../../api_docs/python/array_ops.md)**:
- * [`batch_to_space`](../../api_docs/python/array_ops.md#batch_to_space)
- * [`batch_to_space_nd`](../../api_docs/python/array_ops.md#batch_to_space_nd)
- * [`bitcast`](../../api_docs/python/array_ops.md#bitcast)
- * [`boolean_mask`](../../api_docs/python/array_ops.md#boolean_mask)
- * [`broadcast_dynamic_shape`](../../api_docs/python/array_ops.md#broadcast_dynamic_shape)
- * [`broadcast_static_shape`](../../api_docs/python/array_ops.md#broadcast_static_shape)
- * [`cast`](../../api_docs/python/array_ops.md#cast)
- * [`concat`](../../api_docs/python/array_ops.md#concat)
- * [`copy`](../../api_docs/python/array_ops.md#copy)
- * [`depth_to_space`](../../api_docs/python/array_ops.md#depth_to_space)
- * [`dequantize`](../../api_docs/python/array_ops.md#dequantize)
- * [`dynamic_partition`](../../api_docs/python/array_ops.md#dynamic_partition)
- * [`dynamic_stitch`](../../api_docs/python/array_ops.md#dynamic_stitch)
- * [`expand_dims`](../../api_docs/python/array_ops.md#expand_dims)
- * [`extract_image_patches`](../../api_docs/python/array_ops.md#extract_image_patches)
- * [`fake_quant_with_min_max_args`](../../api_docs/python/array_ops.md#fake_quant_with_min_max_args)
- * [`fake_quant_with_min_max_args_gradient`](../../api_docs/python/array_ops.md#fake_quant_with_min_max_args_gradient)
- * [`fake_quant_with_min_max_vars`](../../api_docs/python/array_ops.md#fake_quant_with_min_max_vars)
- * [`fake_quant_with_min_max_vars_gradient`](../../api_docs/python/array_ops.md#fake_quant_with_min_max_vars_gradient)
- * [`fake_quant_with_min_max_vars_per_channel`](../../api_docs/python/array_ops.md#fake_quant_with_min_max_vars_per_channel)
- * [`fake_quant_with_min_max_vars_per_channel_gradient`](../../api_docs/python/array_ops.md#fake_quant_with_min_max_vars_per_channel_gradient)
- * [`gather`](../../api_docs/python/array_ops.md#gather)
- * [`gather_nd`](../../api_docs/python/array_ops.md#gather_nd)
- * [`meshgrid`](../../api_docs/python/array_ops.md#meshgrid)
- * [`one_hot`](../../api_docs/python/array_ops.md#one_hot)
- * [`pad`](../../api_docs/python/array_ops.md#pad)
- * [`parallel_stack`](../../api_docs/python/array_ops.md#parallel_stack)
- * [`quantize_v2`](../../api_docs/python/array_ops.md#quantize_v2)
- * [`quantized_concat`](../../api_docs/python/array_ops.md#quantized_concat)
- * [`rank`](../../api_docs/python/array_ops.md#rank)
- * [`required_space_to_batch_paddings`](../../api_docs/python/array_ops.md#required_space_to_batch_paddings)
- * [`reshape`](../../api_docs/python/array_ops.md#reshape)
- * [`reverse`](../../api_docs/python/array_ops.md#reverse)
- * [`reverse_sequence`](../../api_docs/python/array_ops.md#reverse_sequence)
- * [`reverse_v2`](../../api_docs/python/array_ops.md#reverse_v2)
- * [`saturate_cast`](../../api_docs/python/array_ops.md#saturate_cast)
- * [`scatter_nd`](../../api_docs/python/array_ops.md#scatter_nd)
- * [`sequence_mask`](../../api_docs/python/array_ops.md#sequence_mask)
- * [`setdiff1d`](../../api_docs/python/array_ops.md#setdiff1d)
- * [`shape`](../../api_docs/python/array_ops.md#shape)
- * [`shape_n`](../../api_docs/python/array_ops.md#shape_n)
- * [`size`](../../api_docs/python/array_ops.md#size)
- * [`slice`](../../api_docs/python/array_ops.md#slice)
- * [`space_to_batch`](../../api_docs/python/array_ops.md#space_to_batch)
- * [`space_to_batch_nd`](../../api_docs/python/array_ops.md#space_to_batch_nd)
- * [`space_to_depth`](../../api_docs/python/array_ops.md#space_to_depth)
- * [`split`](../../api_docs/python/array_ops.md#split)
- * [`squeeze`](../../api_docs/python/array_ops.md#squeeze)
- * [`stack`](../../api_docs/python/array_ops.md#stack)
- * [`strided_slice`](../../api_docs/python/array_ops.md#strided_slice)
- * [`string_to_number`](../../api_docs/python/array_ops.md#string_to_number)
- * [`tile`](../../api_docs/python/array_ops.md#tile)
- * [`to_bfloat16`](../../api_docs/python/array_ops.md#to_bfloat16)
- * [`to_double`](../../api_docs/python/array_ops.md#to_double)
- * [`to_float`](../../api_docs/python/array_ops.md#to_float)
- * [`to_int32`](../../api_docs/python/array_ops.md#to_int32)
- * [`to_int64`](../../api_docs/python/array_ops.md#to_int64)
- * [`transpose`](../../api_docs/python/array_ops.md#transpose)
- * [`unique_with_counts`](../../api_docs/python/array_ops.md#unique_with_counts)
- * [`unstack`](../../api_docs/python/array_ops.md#unstack)
-
-* **[Math](../../api_docs/python/math_ops.md)**:
- * [`abs`](../../api_docs/python/math_ops.md#abs)
- * [`accumulate_n`](../../api_docs/python/math_ops.md#accumulate_n)
- * [`acos`](../../api_docs/python/math_ops.md#acos)
- * [`add`](../../api_docs/python/math_ops.md#add)
- * [`add_n`](../../api_docs/python/math_ops.md#add_n)
- * [`argmax`](../../api_docs/python/math_ops.md#argmax)
- * [`argmin`](../../api_docs/python/math_ops.md#argmin)
- * [`asin`](../../api_docs/python/math_ops.md#asin)
- * [`atan`](../../api_docs/python/math_ops.md#atan)
- * [`betainc`](../../api_docs/python/math_ops.md#betainc)
- * [`ceil`](../../api_docs/python/math_ops.md#ceil)
- * [`cholesky`](../../api_docs/python/math_ops.md#cholesky)
- * [`cholesky_solve`](../../api_docs/python/math_ops.md#cholesky_solve)
- * [`complex`](../../api_docs/python/math_ops.md#complex)
- * [`conj`](../../api_docs/python/math_ops.md#conj)
- * [`cos`](../../api_docs/python/math_ops.md#cos)
- * [`count_nonzero`](../../api_docs/python/math_ops.md#count_nonzero)
- * [`cross`](../../api_docs/python/math_ops.md#cross)
- * [`cumprod`](../../api_docs/python/math_ops.md#cumprod)
- * [`cumsum`](../../api_docs/python/math_ops.md#cumsum)
- * [`diag`](../../api_docs/python/math_ops.md#diag)
- * [`diag_part`](../../api_docs/python/math_ops.md#diag_part)
- * [`digamma`](../../api_docs/python/math_ops.md#digamma)
- * [`div`](../../api_docs/python/math_ops.md#div)
- * [`divide`](../../api_docs/python/math_ops.md#divide)
- * [`edit_distance`](../../api_docs/python/math_ops.md#edit_distance)
- * [`einsum`](../../api_docs/python/math_ops.md#einsum)
- * [`erf`](../../api_docs/python/math_ops.md#erf)
- * [`erfc`](../../api_docs/python/math_ops.md#erfc)
- * [`exp`](../../api_docs/python/math_ops.md#exp)
- * [`expm1`](../../api_docs/python/math_ops.md#expm1)
- * [`eye`](../../api_docs/python/math_ops.md#eye)
- * [`fft`](../../api_docs/python/math_ops.md#fft)
- * [`fft2d`](../../api_docs/python/math_ops.md#fft2d)
- * [`fft3d`](../../api_docs/python/math_ops.md#fft3d)
- * [`floor`](../../api_docs/python/math_ops.md#floor)
- * [`floor_div`](../../api_docs/python/math_ops.md#floor_div)
- * [`floordiv`](../../api_docs/python/math_ops.md#floordiv)
- * [`floormod`](../../api_docs/python/math_ops.md#floormod)
- * [`ifft`](../../api_docs/python/math_ops.md#ifft)
- * [`ifft2d`](../../api_docs/python/math_ops.md#ifft2d)
- * [`ifft3d`](../../api_docs/python/math_ops.md#ifft3d)
- * [`igamma`](../../api_docs/python/math_ops.md#igamma)
- * [`igammac`](../../api_docs/python/math_ops.md#igammac)
- * [`imag`](../../api_docs/python/math_ops.md#imag)
- * [`invert_permutation`](../../api_docs/python/math_ops.md#invert_permutation)
- * [`lbeta`](../../api_docs/python/math_ops.md#lbeta)
- * [`lgamma`](../../api_docs/python/math_ops.md#lgamma)
- * [`log`](../../api_docs/python/math_ops.md#log)
- * [`log1p`](../../api_docs/python/math_ops.md#log1p)
- * [`matmul`](../../api_docs/python/math_ops.md#matmul)
- * [`matrix_band_part`](../../api_docs/python/math_ops.md#matrix_band_part)
- * [`matrix_determinant`](../../api_docs/python/math_ops.md#matrix_determinant)
- * [`matrix_diag`](../../api_docs/python/math_ops.md#matrix_diag)
- * [`matrix_diag_part`](../../api_docs/python/math_ops.md#matrix_diag_part)
- * [`matrix_inverse`](../../api_docs/python/math_ops.md#matrix_inverse)
- * [`matrix_set_diag`](../../api_docs/python/math_ops.md#matrix_set_diag)
- * [`matrix_solve`](../../api_docs/python/math_ops.md#matrix_solve)
- * [`matrix_solve_ls`](../../api_docs/python/math_ops.md#matrix_solve_ls)
- * [`matrix_transpose`](../../api_docs/python/math_ops.md#matrix_transpose)
- * [`matrix_triangular_solve`](../../api_docs/python/math_ops.md#matrix_triangular_solve)
- * [`maximum`](../../api_docs/python/math_ops.md#maximum)
- * [`minimum`](../../api_docs/python/math_ops.md#minimum)
- * [`mod`](../../api_docs/python/math_ops.md#mod)
- * [`multiply`](../../api_docs/python/math_ops.md#multiply)
- * [`negative`](../../api_docs/python/math_ops.md#negative)
- * [`norm`](../../api_docs/python/math_ops.md#norm)
- * [`polygamma`](../../api_docs/python/math_ops.md#polygamma)
- * [`pow`](../../api_docs/python/math_ops.md#pow)
- * [`qr`](../../api_docs/python/math_ops.md#qr)
- * [`real`](../../api_docs/python/math_ops.md#real)
- * [`realdiv`](../../api_docs/python/math_ops.md#realdiv)
- * [`reciprocal`](../../api_docs/python/math_ops.md#reciprocal)
- * [`reduce_all`](../../api_docs/python/math_ops.md#reduce_all)
- * [`reduce_any`](../../api_docs/python/math_ops.md#reduce_any)
- * [`reduce_logsumexp`](../../api_docs/python/math_ops.md#reduce_logsumexp)
- * [`reduce_max`](../../api_docs/python/math_ops.md#reduce_max)
- * [`reduce_mean`](../../api_docs/python/math_ops.md#reduce_mean)
- * [`reduce_min`](../../api_docs/python/math_ops.md#reduce_min)
- * [`reduce_prod`](../../api_docs/python/math_ops.md#reduce_prod)
- * [`reduce_sum`](../../api_docs/python/math_ops.md#reduce_sum)
- * [`rint`](../../api_docs/python/math_ops.md#rint)
- * [`round`](../../api_docs/python/math_ops.md#round)
- * [`rsqrt`](../../api_docs/python/math_ops.md#rsqrt)
- * [`scalar_mul`](../../api_docs/python/math_ops.md#scalar_mul)
- * [`segment_max`](../../api_docs/python/math_ops.md#segment_max)
- * [`segment_mean`](../../api_docs/python/math_ops.md#segment_mean)
- * [`segment_min`](../../api_docs/python/math_ops.md#segment_min)
- * [`segment_prod`](../../api_docs/python/math_ops.md#segment_prod)
- * [`segment_sum`](../../api_docs/python/math_ops.md#segment_sum)
- * [`self_adjoint_eig`](../../api_docs/python/math_ops.md#self_adjoint_eig)
- * [`self_adjoint_eigvals`](../../api_docs/python/math_ops.md#self_adjoint_eigvals)
- * [`setdiff1d`](../../api_docs/python/math_ops.md#setdiff1d)
- * [`sign`](../../api_docs/python/math_ops.md#sign)
- * [`sin`](../../api_docs/python/math_ops.md#sin)
- * [`sparse_segment_mean`](../../api_docs/python/math_ops.md#sparse_segment_mean)
- * [`sparse_segment_sqrt_n`](../../api_docs/python/math_ops.md#sparse_segment_sqrt_n)
- * [`sparse_segment_sum`](../../api_docs/python/math_ops.md#sparse_segment_sum)
- * [`sqrt`](../../api_docs/python/math_ops.md#sqrt)
- * [`square`](../../api_docs/python/math_ops.md#square)
- * [`squared_difference`](../../api_docs/python/math_ops.md#squared_difference)
- * [`subtract`](../../api_docs/python/math_ops.md#subtract)
- * [`svd`](../../api_docs/python/math_ops.md#svd)
- * [`tan`](../../api_docs/python/math_ops.md#tan)
- * [`tensordot`](../../api_docs/python/math_ops.md#tensordot)
- * [`trace`](../../api_docs/python/math_ops.md#trace)
- * [`transpose`](../../api_docs/python/math_ops.md#transpose)
- * [`truediv`](../../api_docs/python/math_ops.md#truediv)
- * [`truncatediv`](../../api_docs/python/math_ops.md#truncatediv)
- * [`truncatemod`](../../api_docs/python/math_ops.md#truncatemod)
- * [`unique`](../../api_docs/python/math_ops.md#unique)
- * [`unsorted_segment_max`](../../api_docs/python/math_ops.md#unsorted_segment_max)
- * [`unsorted_segment_sum`](../../api_docs/python/math_ops.md#unsorted_segment_sum)
- * [`where`](../../api_docs/python/math_ops.md#where)
- * [`zeta`](../../api_docs/python/math_ops.md#zeta)
-
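Likewise, a small illustrative sketch of a few of the math ops listed above (TF 1.x graph mode assumed; the inputs are made up):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])

prod = tf.matmul(a, b)                 # matrix product, shape [2, 2]
total = tf.reduce_sum(a)               # scalar: 10.0
row_means = tf.reduce_mean(a, axis=1)  # shape [2]: [1.5, 3.5]

with tf.Session() as sess:
    print(sess.run([total, row_means]))
```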
-* **[Strings](../../api_docs/python/string_ops.md)**:
- * [`as_string`](../../api_docs/python/string_ops.md#as_string)
- * [`decode_base64`](../../api_docs/python/string_ops.md#decode_base64)
- * [`encode_base64`](../../api_docs/python/string_ops.md#encode_base64)
- * [`reduce_join`](../../api_docs/python/string_ops.md#reduce_join)
- * [`string_join`](../../api_docs/python/string_ops.md#string_join)
- * [`string_split`](../../api_docs/python/string_ops.md#string_split)
- * [`string_to_hash_bucket`](../../api_docs/python/string_ops.md#string_to_hash_bucket)
- * [`string_to_hash_bucket_fast`](../../api_docs/python/string_ops.md#string_to_hash_bucket_fast)
- * [`string_to_hash_bucket_strong`](../../api_docs/python/string_ops.md#string_to_hash_bucket_strong)
- * [`substr`](../../api_docs/python/string_ops.md#substr)
-
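A quick sketch of the string ops above; the input strings are illustrative:

```python
import tensorflow as tf

words = tf.constant(["tensor", "flow"])
joined = tf.string_join([words[0], words[1]], separator="_")    # "tensor_flow"
buckets = tf.string_to_hash_bucket_fast(words, num_buckets=10)  # int64 bucket ids
tokens = tf.string_split(tf.constant(["a b c"]), delimiter=" ") # SparseTensor of tokens

with tf.Session() as sess:
    print(sess.run(joined))
```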
-* **[Histograms](../../api_docs/python/histogram_ops.md)**:
- * [`histogram_fixed_width`](../../api_docs/python/histogram_ops.md#histogram_fixed_width)
-
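The single histogram op above, sketched with made-up values and bin edges:

```python
import tensorflow as tf

values = tf.constant([0.1, 0.4, 0.4, 0.9])
hist = tf.histogram_fixed_width(values, value_range=[0.0, 1.0], nbins=4)

with tf.Session() as sess:
    print(sess.run(hist))  # counts per bin: [1 2 0 1]
```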
-* **[Control Flow](../../api_docs/python/control_flow_ops.md)**:
- * [`add_check_numerics_ops`](../../api_docs/python/control_flow_ops.md#add_check_numerics_ops)
- * [`Assert`](../../api_docs/python/control_flow_ops.md#Assert)
- * [`case`](../../api_docs/python/control_flow_ops.md#case)
- * [`check_numerics`](../../api_docs/python/control_flow_ops.md#check_numerics)
- * [`cond`](../../api_docs/python/control_flow_ops.md#cond)
- * [`count_up_to`](../../api_docs/python/control_flow_ops.md#count_up_to)
- * [`equal`](../../api_docs/python/control_flow_ops.md#equal)
- * [`greater`](../../api_docs/python/control_flow_ops.md#greater)
- * [`greater_equal`](../../api_docs/python/control_flow_ops.md#greater_equal)
- * [`group`](../../api_docs/python/control_flow_ops.md#group)
- * [`identity`](../../api_docs/python/control_flow_ops.md#identity)
- * [`is_finite`](../../api_docs/python/control_flow_ops.md#is_finite)
- * [`is_inf`](../../api_docs/python/control_flow_ops.md#is_inf)
- * [`is_nan`](../../api_docs/python/control_flow_ops.md#is_nan)
- * [`less`](../../api_docs/python/control_flow_ops.md#less)
- * [`less_equal`](../../api_docs/python/control_flow_ops.md#less_equal)
- * [`logical_and`](../../api_docs/python/control_flow_ops.md#logical_and)
- * [`logical_not`](../../api_docs/python/control_flow_ops.md#logical_not)
- * [`logical_or`](../../api_docs/python/control_flow_ops.md#logical_or)
- * [`logical_xor`](../../api_docs/python/control_flow_ops.md#logical_xor)
- * [`no_op`](../../api_docs/python/control_flow_ops.md#no_op)
- * [`not_equal`](../../api_docs/python/control_flow_ops.md#not_equal)
- * [`Print`](../../api_docs/python/control_flow_ops.md#Print)
- * [`tuple`](../../api_docs/python/control_flow_ops.md#tuple)
- * [`verify_tensor_all_finite`](../../api_docs/python/control_flow_ops.md#verify_tensor_all_finite)
- * [`where`](../../api_docs/python/control_flow_ops.md#where)
- * [`while_loop`](../../api_docs/python/control_flow_ops.md#while_loop)
-
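A short sketch of the two workhorse ops above, `tf.while_loop` and `tf.cond`; the loop bound and predicate are illustrative:

```python
import tensorflow as tf

# Sum the integers 0..9 with a graph-level loop.
i0 = tf.constant(0)
acc0 = tf.constant(0)
cond = lambda i, acc: tf.less(i, 10)
body = lambda i, acc: (i + 1, acc + i)
_, total = tf.while_loop(cond, body, [i0, acc0])

# Branch on a predicate; only the taken branch runs.
x = tf.constant(3.0)
y = tf.cond(tf.greater(x, 0.0), lambda: tf.square(x), lambda: -x)

with tf.Session() as sess:
    print(sess.run([total, y]))  # [45, 9.0]
```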
-* **[Higher Order Functions](../../api_docs/python/functional_ops.md)**:
- * [`foldl`](../../api_docs/python/functional_ops.md#foldl)
- * [`foldr`](../../api_docs/python/functional_ops.md#foldr)
- * [`map_fn`](../../api_docs/python/functional_ops.md#map_fn)
- * [`scan`](../../api_docs/python/functional_ops.md#scan)
-
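All four ops above share the same shape of API, a function mapped over the leading dimension of a tensor; a minimal sketch:

```python
import tensorflow as tf

elems = tf.constant([1.0, 2.0, 3.0, 4.0])
squares = tf.map_fn(lambda t: t * t, elems)       # [1, 4, 9, 16]
running = tf.scan(lambda acc, t: acc + t, elems)  # running sums: [1, 3, 6, 10]
total = tf.foldl(lambda acc, t: acc + t, elems)   # 10.0

with tf.Session() as sess:
    print(sess.run([squares, running, total]))
```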
-* **[TensorArray Operations](../../api_docs/python/tensor_array_ops.md)**:
- * [`TensorArray`](../../api_docs/python/tensor_array_ops.md#TensorArray)
-
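A minimal `TensorArray` sketch; note that each `write` returns a new `TensorArray` value that must be threaded through:

```python
import tensorflow as tf

ta = tf.TensorArray(dtype=tf.float32, size=3)
ta = ta.write(0, 10.0)
ta = ta.write(1, 20.0)
ta = ta.write(2, 30.0)
stacked = ta.stack()  # shape [3] tensor: [10, 20, 30]

with tf.Session() as sess:
    print(sess.run(stacked))
```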
-* **[Tensor Handle Operations](../../api_docs/python/session_ops.md)**:
- * [`delete_session_tensor`](../../api_docs/python/session_ops.md#delete_session_tensor)
- * [`get_session_handle`](../../api_docs/python/session_ops.md#get_session_handle)
- * [`get_session_tensor`](../../api_docs/python/session_ops.md#get_session_tensor)
-
-* **[Images](../../api_docs/python/image.md)**:
- * [`adjust_brightness`](../../api_docs/python/image.md#adjust_brightness)
- * [`adjust_contrast`](../../api_docs/python/image.md#adjust_contrast)
- * [`adjust_gamma`](../../api_docs/python/image.md#adjust_gamma)
- * [`adjust_hue`](../../api_docs/python/image.md#adjust_hue)
- * [`adjust_saturation`](../../api_docs/python/image.md#adjust_saturation)
- * [`central_crop`](../../api_docs/python/image.md#central_crop)
- * [`convert_image_dtype`](../../api_docs/python/image.md#convert_image_dtype)
- * [`crop_and_resize`](../../api_docs/python/image.md#crop_and_resize)
- * [`crop_to_bounding_box`](../../api_docs/python/image.md#crop_to_bounding_box)
- * [`decode_gif`](../../api_docs/python/image.md#decode_gif)
- * [`decode_image`](../../api_docs/python/image.md#decode_image)
- * [`decode_jpeg`](../../api_docs/python/image.md#decode_jpeg)
- * [`decode_png`](../../api_docs/python/image.md#decode_png)
- * [`draw_bounding_boxes`](../../api_docs/python/image.md#draw_bounding_boxes)
- * [`encode_jpeg`](../../api_docs/python/image.md#encode_jpeg)
- * [`encode_png`](../../api_docs/python/image.md#encode_png)
- * [`extract_glimpse`](../../api_docs/python/image.md#extract_glimpse)
- * [`flip_left_right`](../../api_docs/python/image.md#flip_left_right)
- * [`flip_up_down`](../../api_docs/python/image.md#flip_up_down)
- * [`grayscale_to_rgb`](../../api_docs/python/image.md#grayscale_to_rgb)
- * [`hsv_to_rgb`](../../api_docs/python/image.md#hsv_to_rgb)
- * [`non_max_suppression`](../../api_docs/python/image.md#non_max_suppression)
- * [`pad_to_bounding_box`](../../api_docs/python/image.md#pad_to_bounding_box)
- * [`per_image_standardization`](../../api_docs/python/image.md#per_image_standardization)
- * [`random_brightness`](../../api_docs/python/image.md#random_brightness)
- * [`random_contrast`](../../api_docs/python/image.md#random_contrast)
- * [`random_flip_left_right`](../../api_docs/python/image.md#random_flip_left_right)
- * [`random_flip_up_down`](../../api_docs/python/image.md#random_flip_up_down)
- * [`random_hue`](../../api_docs/python/image.md#random_hue)
- * [`random_saturation`](../../api_docs/python/image.md#random_saturation)
- * [`resize_area`](../../api_docs/python/image.md#resize_area)
- * [`resize_bicubic`](../../api_docs/python/image.md#resize_bicubic)
- * [`resize_bilinear`](../../api_docs/python/image.md#resize_bilinear)
- * [`resize_image_with_crop_or_pad`](../../api_docs/python/image.md#resize_image_with_crop_or_pad)
- * [`resize_images`](../../api_docs/python/image.md#resize_images)
- * [`resize_nearest_neighbor`](../../api_docs/python/image.md#resize_nearest_neighbor)
- * [`rgb_to_grayscale`](../../api_docs/python/image.md#rgb_to_grayscale)
- * [`rgb_to_hsv`](../../api_docs/python/image.md#rgb_to_hsv)
- * [`rot90`](../../api_docs/python/image.md#rot90)
- * [`sample_distorted_bounding_box`](../../api_docs/python/image.md#sample_distorted_bounding_box)
- * [`total_variation`](../../api_docs/python/image.md#total_variation)
- * [`transpose_image`](../../api_docs/python/image.md#transpose_image)
-
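A typical decode-and-preprocess pipeline using a few of the image ops above; `input.jpg` is a hypothetical filename, and the target size is arbitrary:

```python
import tensorflow as tf

contents = tf.read_file("input.jpg")                # hypothetical path
image = tf.image.decode_jpeg(contents, channels=3)  # uint8, shape [H, W, 3]
image = tf.image.convert_image_dtype(image, tf.float32)  # scaled to [0, 1]
image = tf.image.resize_images(image, [224, 224])
image = tf.image.random_flip_left_right(image)      # simple data augmentation
```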
-* **[Sparse Tensors](../../api_docs/python/sparse_ops.md)**:
- * [`sparse_add`](../../api_docs/python/sparse_ops.md#sparse_add)
- * [`sparse_concat`](../../api_docs/python/sparse_ops.md#sparse_concat)
- * [`sparse_fill_empty_rows`](../../api_docs/python/sparse_ops.md#sparse_fill_empty_rows)
- * [`sparse_maximum`](../../api_docs/python/sparse_ops.md#sparse_maximum)
- * [`sparse_merge`](../../api_docs/python/sparse_ops.md#sparse_merge)
- * [`sparse_minimum`](../../api_docs/python/sparse_ops.md#sparse_minimum)
- * [`sparse_reduce_sum`](../../api_docs/python/sparse_ops.md#sparse_reduce_sum)
- * [`sparse_reduce_sum_sparse`](../../api_docs/python/sparse_ops.md#sparse_reduce_sum_sparse)
- * [`sparse_reorder`](../../api_docs/python/sparse_ops.md#sparse_reorder)
- * [`sparse_reset_shape`](../../api_docs/python/sparse_ops.md#sparse_reset_shape)
- * [`sparse_reshape`](../../api_docs/python/sparse_ops.md#sparse_reshape)
- * [`sparse_retain`](../../api_docs/python/sparse_ops.md#sparse_retain)
- * [`sparse_softmax`](../../api_docs/python/sparse_ops.md#sparse_softmax)
- * [`sparse_split`](../../api_docs/python/sparse_ops.md#sparse_split)
- * [`sparse_tensor_dense_matmul`](../../api_docs/python/sparse_ops.md#sparse_tensor_dense_matmul)
- * [`sparse_tensor_to_dense`](../../api_docs/python/sparse_ops.md#sparse_tensor_to_dense)
- * [`sparse_to_dense`](../../api_docs/python/sparse_ops.md#sparse_to_dense)
- * [`sparse_to_indicator`](../../api_docs/python/sparse_ops.md#sparse_to_indicator)
- * [`sparse_transpose`](../../api_docs/python/sparse_ops.md#sparse_transpose)
- * [`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor)
- * [`SparseTensorValue`](../../api_docs/python/sparse_ops.md#SparseTensorValue)
-
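A small sketch of the sparse-tensor API above; indices and values are made up:

```python
import tensorflow as tf

# A 3x4 sparse tensor with two nonzero entries.
sp = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1.0, 2.0],
                     dense_shape=[3, 4])
dense = tf.sparse_tensor_to_dense(sp)  # zeros everywhere else
total = tf.sparse_reduce_sum(sp)       # 3.0

with tf.Session() as sess:
    print(sess.run(dense))
```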
-* **[Inputs and Readers](../../api_docs/python/io_ops.md)**:
- * [`batch`](../../api_docs/python/io_ops.md#batch)
- * [`batch_join`](../../api_docs/python/io_ops.md#batch_join)
- * [`ConditionalAccumulator`](../../api_docs/python/io_ops.md#ConditionalAccumulator)
- * [`ConditionalAccumulatorBase`](../../api_docs/python/io_ops.md#ConditionalAccumulatorBase)
- * [`decode_csv`](../../api_docs/python/io_ops.md#decode_csv)
- * [`decode_json_example`](../../api_docs/python/io_ops.md#decode_json_example)
- * [`decode_raw`](../../api_docs/python/io_ops.md#decode_raw)
- * [`FIFOQueue`](../../api_docs/python/io_ops.md#FIFOQueue)
- * [`FixedLenFeature`](../../api_docs/python/io_ops.md#FixedLenFeature)
- * [`FixedLengthRecordReader`](../../api_docs/python/io_ops.md#FixedLengthRecordReader)
- * [`FixedLenSequenceFeature`](../../api_docs/python/io_ops.md#FixedLenSequenceFeature)
- * [`IdentityReader`](../../api_docs/python/io_ops.md#IdentityReader)
- * [`input_producer`](../../api_docs/python/io_ops.md#input_producer)
- * [`limit_epochs`](../../api_docs/python/io_ops.md#limit_epochs)
- * [`match_filenames_once`](../../api_docs/python/io_ops.md#match_filenames_once)
- * [`matching_files`](../../api_docs/python/io_ops.md#matching_files)
- * [`maybe_batch`](../../api_docs/python/io_ops.md#maybe_batch)
- * [`maybe_batch_join`](../../api_docs/python/io_ops.md#maybe_batch_join)
- * [`maybe_shuffle_batch`](../../api_docs/python/io_ops.md#maybe_shuffle_batch)
- * [`maybe_shuffle_batch_join`](../../api_docs/python/io_ops.md#maybe_shuffle_batch_join)
- * [`PaddingFIFOQueue`](../../api_docs/python/io_ops.md#PaddingFIFOQueue)
- * [`parse_example`](../../api_docs/python/io_ops.md#parse_example)
- * [`parse_single_example`](../../api_docs/python/io_ops.md#parse_single_example)
- * [`parse_tensor`](../../api_docs/python/io_ops.md#parse_tensor)
- * [`placeholder`](../../api_docs/python/io_ops.md#placeholder)
- * [`placeholder_with_default`](../../api_docs/python/io_ops.md#placeholder_with_default)
- * [`PriorityQueue`](../../api_docs/python/io_ops.md#PriorityQueue)
- * [`QueueBase`](../../api_docs/python/io_ops.md#QueueBase)
- * [`RandomShuffleQueue`](../../api_docs/python/io_ops.md#RandomShuffleQueue)
- * [`range_input_producer`](../../api_docs/python/io_ops.md#range_input_producer)
- * [`read_file`](../../api_docs/python/io_ops.md#read_file)
- * [`ReaderBase`](../../api_docs/python/io_ops.md#ReaderBase)
- * [`shuffle_batch`](../../api_docs/python/io_ops.md#shuffle_batch)
- * [`shuffle_batch_join`](../../api_docs/python/io_ops.md#shuffle_batch_join)
- * [`slice_input_producer`](../../api_docs/python/io_ops.md#slice_input_producer)
- * [`sparse_placeholder`](../../api_docs/python/io_ops.md#sparse_placeholder)
- * [`SparseConditionalAccumulator`](../../api_docs/python/io_ops.md#SparseConditionalAccumulator)
- * [`SparseFeature`](../../api_docs/python/io_ops.md#SparseFeature)
- * [`string_input_producer`](../../api_docs/python/io_ops.md#string_input_producer)
- * [`TextLineReader`](../../api_docs/python/io_ops.md#TextLineReader)
- * [`TFRecordReader`](../../api_docs/python/io_ops.md#TFRecordReader)
- * [`VarLenFeature`](../../api_docs/python/io_ops.md#VarLenFeature)
- * [`WholeFileReader`](../../api_docs/python/io_ops.md#WholeFileReader)
- * [`write_file`](../../api_docs/python/io_ops.md#write_file)
-
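A minimal sketch of the `tf.train.Example` parsing path indexed above; the feature names ("label", "tokens") are hypothetical:

```python
import tensorflow as tf

# Parse one serialized tf.train.Example against a feature spec.
serialized = tf.placeholder(tf.string, shape=[])
features = tf.parse_single_example(
    serialized,
    features={
        "label": tf.FixedLenFeature([], tf.int64),
        "tokens": tf.VarLenFeature(tf.string),  # variable-length -> SparseTensor
    })
label = features["label"]
```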
-* **[Data IO (Python functions)](../../api_docs/python/python_io.md)**:
- * [`tf_record_iterator`](../../api_docs/python/python_io.md#tf_record_iterator)
- * [`TFRecordCompressionType`](../../api_docs/python/python_io.md#TFRecordCompressionType)
- * [`TFRecordOptions`](../../api_docs/python/python_io.md#TFRecordOptions)
- * [`TFRecordWriter`](../../api_docs/python/python_io.md#TFRecordWriter)
-
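These run as ordinary Python, outside the graph; a round-trip sketch with a hypothetical path:

```python
import tensorflow as tf

path = "/tmp/example.tfrecord"  # illustrative path
with tf.python_io.TFRecordWriter(path) as writer:
    writer.write(b"first")
    writer.write(b"second")

for record in tf.python_io.tf_record_iterator(path):
    print(record)  # raw bytes of each record
```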
-* **[Neural Network](../../api_docs/python/nn.md)**:
- * [`atrous_conv2d`](../../api_docs/python/nn.md#atrous_conv2d)
- * [`atrous_conv2d_transpose`](../../api_docs/python/nn.md#atrous_conv2d_transpose)
- * [`avg_pool`](../../api_docs/python/nn.md#avg_pool)
- * [`avg_pool3d`](../../api_docs/python/nn.md#avg_pool3d)
- * [`batch_norm_with_global_normalization`](../../api_docs/python/nn.md#batch_norm_with_global_normalization)
- * [`batch_normalization`](../../api_docs/python/nn.md#batch_normalization)
- * [`bias_add`](../../api_docs/python/nn.md#bias_add)
- * [`bidirectional_dynamic_rnn`](../../api_docs/python/nn.md#bidirectional_dynamic_rnn)
- * [`compute_accidental_hits`](../../api_docs/python/nn.md#compute_accidental_hits)
- * [`conv1d`](../../api_docs/python/nn.md#conv1d)
- * [`conv2d`](../../api_docs/python/nn.md#conv2d)
- * [`conv2d_backprop_filter`](../../api_docs/python/nn.md#conv2d_backprop_filter)
- * [`conv2d_backprop_input`](../../api_docs/python/nn.md#conv2d_backprop_input)
- * [`conv2d_transpose`](../../api_docs/python/nn.md#conv2d_transpose)
- * [`conv3d`](../../api_docs/python/nn.md#conv3d)
- * [`conv3d_backprop_filter_v2`](../../api_docs/python/nn.md#conv3d_backprop_filter_v2)
- * [`conv3d_transpose`](../../api_docs/python/nn.md#conv3d_transpose)
- * [`convolution`](../../api_docs/python/nn.md#convolution)
- * [`crelu`](../../api_docs/python/nn.md#crelu)
- * [`ctc_beam_search_decoder`](../../api_docs/python/nn.md#ctc_beam_search_decoder)
- * [`ctc_greedy_decoder`](../../api_docs/python/nn.md#ctc_greedy_decoder)
- * [`ctc_loss`](../../api_docs/python/nn.md#ctc_loss)
- * [`depthwise_conv2d`](../../api_docs/python/nn.md#depthwise_conv2d)
- * [`depthwise_conv2d_native`](../../api_docs/python/nn.md#depthwise_conv2d_native)
- * [`depthwise_conv2d_native_backprop_filter`](../../api_docs/python/nn.md#depthwise_conv2d_native_backprop_filter)
- * [`depthwise_conv2d_native_backprop_input`](../../api_docs/python/nn.md#depthwise_conv2d_native_backprop_input)
- * [`dilation2d`](../../api_docs/python/nn.md#dilation2d)
- * [`dropout`](../../api_docs/python/nn.md#dropout)
- * [`dynamic_rnn`](../../api_docs/python/nn.md#dynamic_rnn)
- * [`elu`](../../api_docs/python/nn.md#elu)
- * [`embedding_lookup`](../../api_docs/python/nn.md#embedding_lookup)
- * [`embedding_lookup_sparse`](../../api_docs/python/nn.md#embedding_lookup_sparse)
- * [`erosion2d`](../../api_docs/python/nn.md#erosion2d)
- * [`fixed_unigram_candidate_sampler`](../../api_docs/python/nn.md#fixed_unigram_candidate_sampler)
- * [`fractional_avg_pool`](../../api_docs/python/nn.md#fractional_avg_pool)
- * [`fractional_max_pool`](../../api_docs/python/nn.md#fractional_max_pool)
- * [`fused_batch_norm`](../../api_docs/python/nn.md#fused_batch_norm)
- * [`in_top_k`](../../api_docs/python/nn.md#in_top_k)
- * [`l2_loss`](../../api_docs/python/nn.md#l2_loss)
- * [`l2_normalize`](../../api_docs/python/nn.md#l2_normalize)
- * [`learned_unigram_candidate_sampler`](../../api_docs/python/nn.md#learned_unigram_candidate_sampler)
- * [`local_response_normalization`](../../api_docs/python/nn.md#local_response_normalization)
- * [`log_poisson_loss`](../../api_docs/python/nn.md#log_poisson_loss)
- * [`log_softmax`](../../api_docs/python/nn.md#log_softmax)
- * [`log_uniform_candidate_sampler`](../../api_docs/python/nn.md#log_uniform_candidate_sampler)
- * [`max_pool`](../../api_docs/python/nn.md#max_pool)
- * [`max_pool3d`](../../api_docs/python/nn.md#max_pool3d)
- * [`max_pool_with_argmax`](../../api_docs/python/nn.md#max_pool_with_argmax)
- * [`moments`](../../api_docs/python/nn.md#moments)
- * [`nce_loss`](../../api_docs/python/nn.md#nce_loss)
- * [`normalize_moments`](../../api_docs/python/nn.md#normalize_moments)
- * [`pool`](../../api_docs/python/nn.md#pool)
- * [`quantized_avg_pool`](../../api_docs/python/nn.md#quantized_avg_pool)
- * [`quantized_conv2d`](../../api_docs/python/nn.md#quantized_conv2d)
- * [`quantized_max_pool`](../../api_docs/python/nn.md#quantized_max_pool)
- * [`quantized_relu_x`](../../api_docs/python/nn.md#quantized_relu_x)
- * [`raw_rnn`](../../api_docs/python/nn.md#raw_rnn)
- * [`relu`](../../api_docs/python/nn.md#relu)
- * [`relu6`](../../api_docs/python/nn.md#relu6)
- * [`sampled_softmax_loss`](../../api_docs/python/nn.md#sampled_softmax_loss)
- * [`separable_conv2d`](../../api_docs/python/nn.md#separable_conv2d)
- * [`sigmoid`](../../api_docs/python/nn.md#sigmoid)
- * [`sigmoid_cross_entropy_with_logits`](../../api_docs/python/nn.md#sigmoid_cross_entropy_with_logits)
- * [`softmax`](../../api_docs/python/nn.md#softmax)
- * [`softmax_cross_entropy_with_logits`](../../api_docs/python/nn.md#softmax_cross_entropy_with_logits)
- * [`softplus`](../../api_docs/python/nn.md#softplus)
- * [`softsign`](../../api_docs/python/nn.md#softsign)
- * [`sparse_softmax_cross_entropy_with_logits`](../../api_docs/python/nn.md#sparse_softmax_cross_entropy_with_logits)
- * [`sufficient_statistics`](../../api_docs/python/nn.md#sufficient_statistics)
- * [`tanh`](../../api_docs/python/nn.md#tanh)
- * [`top_k`](../../api_docs/python/nn.md#top_k)
- * [`uniform_candidate_sampler`](../../api_docs/python/nn.md#uniform_candidate_sampler)
- * [`weighted_cross_entropy_with_logits`](../../api_docs/python/nn.md#weighted_cross_entropy_with_logits)
- * [`weighted_moments`](../../api_docs/python/nn.md#weighted_moments)
- * [`with_space_to_batch`](../../api_docs/python/nn.md#with_space_to_batch)
- * [`zero_fraction`](../../api_docs/python/nn.md#zero_fraction)
-
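A toy conv-pool-softmax stack using a few of the `tf.nn` ops above; every shape and variable here is made up for illustration:

```python
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])

filters = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
conv = tf.nn.conv2d(images, filters, strides=[1, 1, 1, 1], padding="SAME")
hidden = tf.nn.relu(conv)
pooled = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                        padding="SAME")                 # [None, 14, 14, 32]

flat = tf.reshape(pooled, [-1, 14 * 14 * 32])
w = tf.Variable(tf.zeros([14 * 14 * 32, 10]))
logits = tf.matmul(flat, w)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
```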
-* **[Running Graphs](../../api_docs/python/client.md)**:
- * [`AbortedError`](../../api_docs/python/client.md#AbortedError)
- * [`AlreadyExistsError`](../../api_docs/python/client.md#AlreadyExistsError)
- * [`CancelledError`](../../api_docs/python/client.md#CancelledError)
- * [`DataLossError`](../../api_docs/python/client.md#DataLossError)
- * [`DeadlineExceededError`](../../api_docs/python/client.md#DeadlineExceededError)
- * [`error_code_from_exception_type`](../../api_docs/python/client.md#error_code_from_exception_type)
- * [`exception_type_from_error_code`](../../api_docs/python/client.md#exception_type_from_error_code)
- * [`FailedPreconditionError`](../../api_docs/python/client.md#FailedPreconditionError)
- * [`get_default_session`](../../api_docs/python/client.md#get_default_session)
- * [`InteractiveSession`](../../api_docs/python/client.md#InteractiveSession)
- * [`InternalError`](../../api_docs/python/client.md#InternalError)
- * [`InvalidArgumentError`](../../api_docs/python/client.md#InvalidArgumentError)
- * [`NotFoundError`](../../api_docs/python/client.md#NotFoundError)
- * [`OpError`](../../api_docs/python/client.md#OpError)
- * [`OutOfRangeError`](../../api_docs/python/client.md#OutOfRangeError)
- * [`PermissionDeniedError`](../../api_docs/python/client.md#PermissionDeniedError)
- * [`raise_exception_on_not_ok_status`](../../api_docs/python/client.md#raise_exception_on_not_ok_status)
- * [`ResourceExhaustedError`](../../api_docs/python/client.md#ResourceExhaustedError)
- * [`Session`](../../api_docs/python/client.md#Session)
- * [`UnauthenticatedError`](../../api_docs/python/client.md#UnauthenticatedError)
- * [`UnavailableError`](../../api_docs/python/client.md#UnavailableError)
- * [`UnimplementedError`](../../api_docs/python/client.md#UnimplementedError)
- * [`UnknownError`](../../api_docs/python/client.md#UnknownError)
-
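The entries above are mostly the `tf.errors` exception hierarchy plus `Session`; a sketch of catching one of them:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[2])
y = x * 2.0

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [2.0, 4.0]
    try:
        sess.run(y)  # forgot the feed -> InvalidArgumentError
    except tf.errors.InvalidArgumentError as e:
        print("failed as expected:", e.message)
```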
-* **[Training](../../api_docs/python/train.md)**:
- * [`AdadeltaOptimizer`](../../api_docs/python/train.md#AdadeltaOptimizer)
- * [`AdagradDAOptimizer`](../../api_docs/python/train.md#AdagradDAOptimizer)
- * [`AdagradOptimizer`](../../api_docs/python/train.md#AdagradOptimizer)
- * [`AdamOptimizer`](../../api_docs/python/train.md#AdamOptimizer)
- * [`add_queue_runner`](../../api_docs/python/train.md#add_queue_runner)
- * [`AggregationMethod`](../../api_docs/python/train.md#AggregationMethod)
- * [`assert_global_step`](../../api_docs/python/train.md#assert_global_step)
- * [`basic_train_loop`](../../api_docs/python/train.md#basic_train_loop)
- * [`checkpoint_exists`](../../api_docs/python/train.md#checkpoint_exists)
- * [`CheckpointSaverHook`](../../api_docs/python/train.md#CheckpointSaverHook)
- * [`ChiefSessionCreator`](../../api_docs/python/train.md#ChiefSessionCreator)
- * [`clip_by_average_norm`](../../api_docs/python/train.md#clip_by_average_norm)
- * [`clip_by_global_norm`](../../api_docs/python/train.md#clip_by_global_norm)
- * [`clip_by_norm`](../../api_docs/python/train.md#clip_by_norm)
- * [`clip_by_value`](../../api_docs/python/train.md#clip_by_value)
- * [`ClusterSpec`](../../api_docs/python/train.md#ClusterSpec)
- * [`Coordinator`](../../api_docs/python/train.md#Coordinator)
- * [`do_quantize_training_on_graphdef`](../../api_docs/python/train.md#do_quantize_training_on_graphdef)
- * [`exponential_decay`](../../api_docs/python/train.md#exponential_decay)
- * [`ExponentialMovingAverage`](../../api_docs/python/train.md#ExponentialMovingAverage)
- * [`FeedFnHook`](../../api_docs/python/train.md#FeedFnHook)
- * [`FinalOpsHook`](../../api_docs/python/train.md#FinalOpsHook)
- * [`FtrlOptimizer`](../../api_docs/python/train.md#FtrlOptimizer)
- * [`generate_checkpoint_state_proto`](../../api_docs/python/train.md#generate_checkpoint_state_proto)
- * [`get_checkpoint_mtimes`](../../api_docs/python/train.md#get_checkpoint_mtimes)
- * [`get_global_step`](../../api_docs/python/train.md#get_global_step)
- * [`global_norm`](../../api_docs/python/train.md#global_norm)
- * [`global_step`](../../api_docs/python/train.md#global_step)
- * [`GlobalStepWaiterHook`](../../api_docs/python/train.md#GlobalStepWaiterHook)
- * [`GradientDescentOptimizer`](../../api_docs/python/train.md#GradientDescentOptimizer)
- * [`gradients`](../../api_docs/python/train.md#gradients)
- * [`hessians`](../../api_docs/python/train.md#hessians)
- * [`inverse_time_decay`](../../api_docs/python/train.md#inverse_time_decay)
- * [`LoggingTensorHook`](../../api_docs/python/train.md#LoggingTensorHook)
- * [`LooperThread`](../../api_docs/python/train.md#LooperThread)
- * [`MomentumOptimizer`](../../api_docs/python/train.md#MomentumOptimizer)
- * [`MonitoredSession`](../../api_docs/python/train.md#MonitoredSession)
- * [`MonitoredTrainingSession`](../../api_docs/python/train.md#MonitoredTrainingSession)
- * [`NanLossDuringTrainingError`](../../api_docs/python/train.md#NanLossDuringTrainingError)
- * [`NanTensorHook`](../../api_docs/python/train.md#NanTensorHook)
- * [`natural_exp_decay`](../../api_docs/python/train.md#natural_exp_decay)
- * [`NewCheckpointReader`](../../api_docs/python/train.md#NewCheckpointReader)
- * [`Optimizer`](../../api_docs/python/train.md#Optimizer)
- * [`piecewise_constant`](../../api_docs/python/train.md#piecewise_constant)
- * [`polynomial_decay`](../../api_docs/python/train.md#polynomial_decay)
- * [`ProximalAdagradOptimizer`](../../api_docs/python/train.md#ProximalAdagradOptimizer)
- * [`ProximalGradientDescentOptimizer`](../../api_docs/python/train.md#ProximalGradientDescentOptimizer)
- * [`QueueRunner`](../../api_docs/python/train.md#QueueRunner)
- * [`replica_device_setter`](../../api_docs/python/train.md#replica_device_setter)
- * [`RMSPropOptimizer`](../../api_docs/python/train.md#RMSPropOptimizer)
- * [`Scaffold`](../../api_docs/python/train.md#Scaffold)
- * [`Server`](../../api_docs/python/train.md#Server)
- * [`SessionCreator`](../../api_docs/python/train.md#SessionCreator)
- * [`SessionManager`](../../api_docs/python/train.md#SessionManager)
- * [`SessionRunArgs`](../../api_docs/python/train.md#SessionRunArgs)
- * [`SessionRunContext`](../../api_docs/python/train.md#SessionRunContext)
- * [`SessionRunHook`](../../api_docs/python/train.md#SessionRunHook)
- * [`SessionRunValues`](../../api_docs/python/train.md#SessionRunValues)
- * [`SingularMonitoredSession`](../../api_docs/python/train.md#SingularMonitoredSession)
- * [`start_queue_runners`](../../api_docs/python/train.md#start_queue_runners)
- * [`StepCounterHook`](../../api_docs/python/train.md#StepCounterHook)
- * [`stop_gradient`](../../api_docs/python/train.md#stop_gradient)
- * [`StopAtStepHook`](../../api_docs/python/train.md#StopAtStepHook)
- * [`summary_iterator`](../../api_docs/python/train.md#summary_iterator)
- * [`SummarySaverHook`](../../api_docs/python/train.md#SummarySaverHook)
- * [`Supervisor`](../../api_docs/python/train.md#Supervisor)
- * [`SyncReplicasOptimizer`](../../api_docs/python/train.md#SyncReplicasOptimizer)
- * [`WorkerSessionCreator`](../../api_docs/python/train.md#WorkerSessionCreator)
- * [`write_graph`](../../api_docs/python/train.md#write_graph)
-
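A toy training loop tying together an optimizer, a decay schedule, and a global step from the list above; the quadratic loss is illustrative:

```python
import tensorflow as tf

w = tf.Variable(5.0)
loss = tf.square(w - 3.0)  # minimized at w == 3

global_step = tf.Variable(0, trainable=False)
lr = tf.train.exponential_decay(0.1, global_step,
                                decay_steps=100, decay_rate=0.96)
train_op = tf.train.GradientDescentOptimizer(lr).minimize(
    loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op)
    print(sess.run(w))  # approaches 3.0
```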
-* **[Wraps python functions](../../api_docs/python/script_ops.md)**:
- * [`py_func`](../../api_docs/python/script_ops.md#py_func)
-
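`py_func` wraps an ordinary Python function as a graph op; a minimal sketch:

```python
import tensorflow as tf
import numpy as np

def double(x):
    # Runs as plain Python/NumPy when the op executes.
    return (x * 2).astype(np.float32)

inp = tf.constant([1.0, 2.0], dtype=tf.float32)
out = tf.py_func(double, [inp], tf.float32)

with tf.Session() as sess:
    print(sess.run(out))  # [2.0, 4.0]
```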
-* **[Summary Operations](../../api_docs/python/summary.md)**:
- * [`audio`](../../api_docs/python/summary.md#audio)
- * [`FileWriter`](../../api_docs/python/summary.md#FileWriter)
- * [`FileWriterCache`](../../api_docs/python/summary.md#FileWriterCache)
- * [`get_summary_description`](../../api_docs/python/summary.md#get_summary_description)
- * [`histogram`](../../api_docs/python/summary.md#histogram)
- * [`image`](../../api_docs/python/summary.md#image)
- * [`merge`](../../api_docs/python/summary.md#merge)
- * [`merge_all`](../../api_docs/python/summary.md#merge_all)
- * [`scalar`](../../api_docs/python/summary.md#scalar)
- * [`SummaryDescription`](../../api_docs/python/summary.md#SummaryDescription)
- * [`TaggedRunMetadata`](../../api_docs/python/summary.md#TaggedRunMetadata)
- * [`tensor_summary`](../../api_docs/python/summary.md#tensor_summary)
-
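The usual pattern with the summary ops above, record a scalar, merge, and write an event file; the log directory is a hypothetical path:

```python
import tensorflow as tf

loss = tf.constant(0.25)
tf.summary.scalar("loss", loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/logdir", sess.graph)  # illustrative dir
    summary = sess.run(merged)
    writer.add_summary(summary, global_step=0)
    writer.close()
```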
-* **[Testing](../../api_docs/python/test.md)**:
- * [`assert_equal_graph_def`](../../api_docs/python/test.md#assert_equal_graph_def)
- * [`Benchmark`](../../api_docs/python/test.md#Benchmark)
- * [`compute_gradient`](../../api_docs/python/test.md#compute_gradient)
- * [`compute_gradient_error`](../../api_docs/python/test.md#compute_gradient_error)
- * [`get_temp_dir`](../../api_docs/python/test.md#get_temp_dir)
- * [`gpu_device_name`](../../api_docs/python/test.md#gpu_device_name)
- * [`is_built_with_cuda`](../../api_docs/python/test.md#is_built_with_cuda)
- * [`is_gpu_available`](../../api_docs/python/test.md#is_gpu_available)
- * [`main`](../../api_docs/python/test.md#main)
- * [`test_src_dir_path`](../../api_docs/python/test.md#test_src_dir_path)
- * [`TestCase`](../../api_docs/python/test.md#TestCase)
-
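The standard `tf.test.TestCase` pattern for the entries above:

```python
import tensorflow as tf

class SquareTest(tf.test.TestCase):

    def testSquare(self):
        with self.test_session():
            x = tf.square([2, 3])
            self.assertAllEqual(x.eval(), [4, 9])

if __name__ == "__main__":
    tf.test.main()
```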
-* **[BayesFlow Entropy (contrib)](../../api_docs/python/contrib.bayesflow.entropy.md)**:
- * [`elbo_ratio`](../../api_docs/python/contrib.bayesflow.entropy.md#elbo_ratio)
- * [`entropy_shannon`](../../api_docs/python/contrib.bayesflow.entropy.md#entropy_shannon)
- * [`renyi_alpha`](../../api_docs/python/contrib.bayesflow.entropy.md#renyi_alpha)
- * [`renyi_ratio`](../../api_docs/python/contrib.bayesflow.entropy.md#renyi_ratio)
-
-* **[BayesFlow Monte Carlo (contrib)](../../api_docs/python/contrib.bayesflow.monte_carlo.md)**:
- * [`expectation`](../../api_docs/python/contrib.bayesflow.monte_carlo.md#expectation)
- * [`expectation_importance_sampler`](../../api_docs/python/contrib.bayesflow.monte_carlo.md#expectation_importance_sampler)
- * [`expectation_importance_sampler_logspace`](../../api_docs/python/contrib.bayesflow.monte_carlo.md#expectation_importance_sampler_logspace)
-
-* **[BayesFlow Stochastic Graph (contrib)](../../api_docs/python/contrib.bayesflow.stochastic_graph.md)**:
- * [`surrogate_loss`](../../api_docs/python/contrib.bayesflow.stochastic_graph.md#surrogate_loss)
-
-* **[BayesFlow Stochastic Tensors (contrib)](../../api_docs/python/contrib.bayesflow.stochastic_tensor.md)**:
- * [`BaseStochasticTensor`](../../api_docs/python/contrib.bayesflow.stochastic_tensor.md#BaseStochasticTensor)
- * [`get_current_value_type`](../../api_docs/python/contrib.bayesflow.stochastic_tensor.md#get_current_value_type)
- * [`MeanValue`](../../api_docs/python/contrib.bayesflow.stochastic_tensor.md#MeanValue)
- * [`ObservedStochasticTensor`](../../api_docs/python/contrib.bayesflow.stochastic_tensor.md#ObservedStochasticTensor)
- * [`SampleValue`](../../api_docs/python/contrib.bayesflow.stochastic_tensor.md#SampleValue)
- * [`StochasticTensor`](../../api_docs/python/contrib.bayesflow.stochastic_tensor.md#StochasticTensor)
- * [`value_type`](../../api_docs/python/contrib.bayesflow.stochastic_tensor.md#value_type)
-
-* **[BayesFlow Variational Inference (contrib)](../../api_docs/python/contrib.bayesflow.variational_inference.md)**:
- * [`elbo`](../../api_docs/python/contrib.bayesflow.variational_inference.md#elbo)
- * [`elbo_with_log_joint`](../../api_docs/python/contrib.bayesflow.variational_inference.md#elbo_with_log_joint)
- * [`ELBOForms`](../../api_docs/python/contrib.bayesflow.variational_inference.md#ELBOForms)
- * [`register_prior`](../../api_docs/python/contrib.bayesflow.variational_inference.md#register_prior)
-
-* **[CRF (contrib)](../../api_docs/python/contrib.crf.md)**:
- * [`crf_binary_score`](../../api_docs/python/contrib.crf.md#crf_binary_score)
- * [`crf_log_likelihood`](../../api_docs/python/contrib.crf.md#crf_log_likelihood)
- * [`crf_log_norm`](../../api_docs/python/contrib.crf.md#crf_log_norm)
- * [`crf_sequence_score`](../../api_docs/python/contrib.crf.md#crf_sequence_score)
- * [`crf_unary_score`](../../api_docs/python/contrib.crf.md#crf_unary_score)
- * [`CrfForwardRnnCell`](../../api_docs/python/contrib.crf.md#CrfForwardRnnCell)
- * [`viterbi_decode`](../../api_docs/python/contrib.crf.md#viterbi_decode)
-
-* **[Statistical Distributions (contrib)](../../api_docs/python/contrib.distributions.md)**:
- * [`Bernoulli`](../../api_docs/python/contrib.distributions.md#Bernoulli)
- * [`BernoulliWithSigmoidProbs`](../../api_docs/python/contrib.distributions.md#BernoulliWithSigmoidProbs)
- * [`Beta`](../../api_docs/python/contrib.distributions.md#Beta)
- * [`BetaWithSoftplusConcentration`](../../api_docs/python/contrib.distributions.md#BetaWithSoftplusConcentration)
- * [`Binomial`](../../api_docs/python/contrib.distributions.md#Binomial)
- * [`Categorical`](../../api_docs/python/contrib.distributions.md#Categorical)
- * [`Chi2`](../../api_docs/python/contrib.distributions.md#Chi2)
- * [`Chi2WithAbsDf`](../../api_docs/python/contrib.distributions.md#Chi2WithAbsDf)
- * [`ConditionalDistribution`](../../api_docs/python/contrib.distributions.md#ConditionalDistribution)
- * [`ConditionalTransformedDistribution`](../../api_docs/python/contrib.distributions.md#ConditionalTransformedDistribution)
- * [`Dirichlet`](../../api_docs/python/contrib.distributions.md#Dirichlet)
- * [`DirichletMultinomial`](../../api_docs/python/contrib.distributions.md#DirichletMultinomial)
- * [`Distribution`](../../api_docs/python/contrib.distributions.md#Distribution)
- * [`Exponential`](../../api_docs/python/contrib.distributions.md#Exponential)
- * [`ExponentialWithSoftplusRate`](../../api_docs/python/contrib.distributions.md#ExponentialWithSoftplusRate)
- * [`ExpRelaxedOneHotCategorical`](../../api_docs/python/contrib.distributions.md#ExpRelaxedOneHotCategorical)
- * [`Gamma`](../../api_docs/python/contrib.distributions.md#Gamma)
- * [`GammaWithSoftplusConcentrationRate`](../../api_docs/python/contrib.distributions.md#GammaWithSoftplusConcentrationRate)
- * [`InverseGamma`](../../api_docs/python/contrib.distributions.md#InverseGamma)
- * [`InverseGammaWithSoftplusConcentrationRate`](../../api_docs/python/contrib.distributions.md#InverseGammaWithSoftplusConcentrationRate)
- * [`kl`](../../api_docs/python/contrib.distributions.md#kl)
- * [`Laplace`](../../api_docs/python/contrib.distributions.md#Laplace)
- * [`LaplaceWithSoftplusScale`](../../api_docs/python/contrib.distributions.md#LaplaceWithSoftplusScale)
- * [`Logistic`](../../api_docs/python/contrib.distributions.md#Logistic)
- * [`matrix_diag_transform`](../../api_docs/python/contrib.distributions.md#matrix_diag_transform)
- * [`Mixture`](../../api_docs/python/contrib.distributions.md#Mixture)
- * [`Multinomial`](../../api_docs/python/contrib.distributions.md#Multinomial)
- * [`MultivariateNormalDiag`](../../api_docs/python/contrib.distributions.md#MultivariateNormalDiag)
- * [`MultivariateNormalDiagPlusLowRank`](../../api_docs/python/contrib.distributions.md#MultivariateNormalDiagPlusLowRank)
- * [`MultivariateNormalDiagWithSoftplusScale`](../../api_docs/python/contrib.distributions.md#MultivariateNormalDiagWithSoftplusScale)
- * [`MultivariateNormalTriL`](../../api_docs/python/contrib.distributions.md#MultivariateNormalTriL)
- * [`Normal`](../../api_docs/python/contrib.distributions.md#Normal)
- * [`normal_conjugates_known_scale_posterior`](../../api_docs/python/contrib.distributions.md#normal_conjugates_known_scale_posterior)
- * [`normal_conjugates_known_scale_predictive`](../../api_docs/python/contrib.distributions.md#normal_conjugates_known_scale_predictive)
- * [`NormalWithSoftplusScale`](../../api_docs/python/contrib.distributions.md#NormalWithSoftplusScale)
- * [`OneHotCategorical`](../../api_docs/python/contrib.distributions.md#OneHotCategorical)
- * [`Poisson`](../../api_docs/python/contrib.distributions.md#Poisson)
- * [`QuantizedDistribution`](../../api_docs/python/contrib.distributions.md#QuantizedDistribution)
- * [`RegisterKL`](../../api_docs/python/contrib.distributions.md#RegisterKL)
- * [`RelaxedBernoulli`](../../api_docs/python/contrib.distributions.md#RelaxedBernoulli)
- * [`RelaxedOneHotCategorical`](../../api_docs/python/contrib.distributions.md#RelaxedOneHotCategorical)
- * [`ReparameterizationType`](../../api_docs/python/contrib.distributions.md#ReparameterizationType)
- * [`softplus_inverse`](../../api_docs/python/contrib.distributions.md#softplus_inverse)
- * [`StudentT`](../../api_docs/python/contrib.distributions.md#StudentT)
- * [`StudentTWithAbsDfSoftplusScale`](../../api_docs/python/contrib.distributions.md#StudentTWithAbsDfSoftplusScale)
- * [`TransformedDistribution`](../../api_docs/python/contrib.distributions.md#TransformedDistribution)
- * [`Uniform`](../../api_docs/python/contrib.distributions.md#Uniform)
- * [`WishartCholesky`](../../api_docs/python/contrib.distributions.md#WishartCholesky)
- * [`WishartFull`](../../api_docs/python/contrib.distributions.md#WishartFull)
-
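A minimal sketch of the distribution API above, assuming the `loc`/`scale` argument names used in this era of `tf.contrib.distributions`:

```python
import tensorflow as tf

ds = tf.contrib.distributions

normal = ds.Normal(loc=0.0, scale=1.0)
samples = normal.sample(5)  # five draws from N(0, 1)
lp = normal.log_prob(0.0)   # log density at 0

with tf.Session() as sess:
    print(sess.run([samples, lp]))
```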
-* **[Random variable transformations (contrib)](../../api_docs/python/contrib.distributions.bijector.md)**:
- * [`Affine`](../../api_docs/python/contrib.distributions.bijector.md#Affine)
- * [`AffineLinearOperator`](../../api_docs/python/contrib.distributions.bijector.md#AffineLinearOperator)
- * [`Bijector`](../../api_docs/python/contrib.distributions.bijector.md#Bijector)
- * [`Chain`](../../api_docs/python/contrib.distributions.bijector.md#Chain)
- * [`CholeskyOuterProduct`](../../api_docs/python/contrib.distributions.bijector.md#CholeskyOuterProduct)
- * [`Exp`](../../api_docs/python/contrib.distributions.bijector.md#Exp)
- * [`Identity`](../../api_docs/python/contrib.distributions.bijector.md#Identity)
- * [`Inline`](../../api_docs/python/contrib.distributions.bijector.md#Inline)
- * [`Invert`](../../api_docs/python/contrib.distributions.bijector.md#Invert)
- * [`PowerTransform`](../../api_docs/python/contrib.distributions.bijector.md#PowerTransform)
- * [`SigmoidCentered`](../../api_docs/python/contrib.distributions.bijector.md#SigmoidCentered)
- * [`SoftmaxCentered`](../../api_docs/python/contrib.distributions.bijector.md#SoftmaxCentered)
- * [`Softplus`](../../api_docs/python/contrib.distributions.bijector.md#Softplus)
-
-* **[FFmpeg (contrib)](../../api_docs/python/contrib.ffmpeg.md)**:
- * [`decode_audio`](../../api_docs/python/contrib.ffmpeg.md#decode_audio)
- * [`encode_audio`](../../api_docs/python/contrib.ffmpeg.md#encode_audio)
-
-* **[Framework (contrib)](../../api_docs/python/contrib.framework.md)**:
- * [`add_arg_scope`](../../api_docs/python/contrib.framework.md#add_arg_scope)
- * [`add_model_variable`](../../api_docs/python/contrib.framework.md#add_model_variable)
- * [`arg_scope`](../../api_docs/python/contrib.framework.md#arg_scope)
- * [`arg_scoped_arguments`](../../api_docs/python/contrib.framework.md#arg_scoped_arguments)
- * [`assert_global_step`](../../api_docs/python/contrib.framework.md#assert_global_step)
- * [`assert_or_get_global_step`](../../api_docs/python/contrib.framework.md#assert_or_get_global_step)
- * [`assert_same_float_dtype`](../../api_docs/python/contrib.framework.md#assert_same_float_dtype)
- * [`assert_scalar`](../../api_docs/python/contrib.framework.md#assert_scalar)
- * [`assert_scalar_int`](../../api_docs/python/contrib.framework.md#assert_scalar_int)
- * [`assign_from_checkpoint`](../../api_docs/python/contrib.framework.md#assign_from_checkpoint)
- * [`assign_from_checkpoint_fn`](../../api_docs/python/contrib.framework.md#assign_from_checkpoint_fn)
- * [`assign_from_values`](../../api_docs/python/contrib.framework.md#assign_from_values)
- * [`assign_from_values_fn`](../../api_docs/python/contrib.framework.md#assign_from_values_fn)
- * [`convert_to_tensor_or_sparse_tensor`](../../api_docs/python/contrib.framework.md#convert_to_tensor_or_sparse_tensor)
- * [`create_global_step`](../../api_docs/python/contrib.framework.md#create_global_step)
- * [`deprecated`](../../api_docs/python/contrib.framework.md#deprecated)
- * [`deprecated_arg_values`](../../api_docs/python/contrib.framework.md#deprecated_arg_values)
- * [`deprecated_args`](../../api_docs/python/contrib.framework.md#deprecated_args)
- * [`filter_variables`](../../api_docs/python/contrib.framework.md#filter_variables)
- * [`get_global_step`](../../api_docs/python/contrib.framework.md#get_global_step)
- * [`get_graph_from_inputs`](../../api_docs/python/contrib.framework.md#get_graph_from_inputs)
- * [`get_local_variables`](../../api_docs/python/contrib.framework.md#get_local_variables)
- * [`get_model_variables`](../../api_docs/python/contrib.framework.md#get_model_variables)
- * [`get_or_create_global_step`](../../api_docs/python/contrib.framework.md#get_or_create_global_step)
- * [`get_unique_variable`](../../api_docs/python/contrib.framework.md#get_unique_variable)
- * [`get_variable_full_name`](../../api_docs/python/contrib.framework.md#get_variable_full_name)
- * [`get_variables`](../../api_docs/python/contrib.framework.md#get_variables)
- * [`get_variables_by_name`](../../api_docs/python/contrib.framework.md#get_variables_by_name)
- * [`get_variables_by_suffix`](../../api_docs/python/contrib.framework.md#get_variables_by_suffix)
- * [`get_variables_to_restore`](../../api_docs/python/contrib.framework.md#get_variables_to_restore)
- * [`has_arg_scope`](../../api_docs/python/contrib.framework.md#has_arg_scope)
- * [`init_from_checkpoint`](../../api_docs/python/contrib.framework.md#init_from_checkpoint)
- * [`is_non_decreasing`](../../api_docs/python/contrib.framework.md#is_non_decreasing)
- * [`is_numeric_tensor`](../../api_docs/python/contrib.framework.md#is_numeric_tensor)
- * [`is_strictly_increasing`](../../api_docs/python/contrib.framework.md#is_strictly_increasing)
- * [`is_tensor`](../../api_docs/python/contrib.framework.md#is_tensor)
- * [`list_variables`](../../api_docs/python/contrib.framework.md#list_variables)
- * [`load_checkpoint`](../../api_docs/python/contrib.framework.md#load_checkpoint)
- * [`load_variable`](../../api_docs/python/contrib.framework.md#load_variable)
- * [`local_variable`](../../api_docs/python/contrib.framework.md#local_variable)
- * [`model_variable`](../../api_docs/python/contrib.framework.md#model_variable)
- * [`reduce_sum_n`](../../api_docs/python/contrib.framework.md#reduce_sum_n)
- * [`remove_squeezable_dimensions`](../../api_docs/python/contrib.framework.md#remove_squeezable_dimensions)
- * [`variable`](../../api_docs/python/contrib.framework.md#variable)
- * [`VariableDeviceChooser`](../../api_docs/python/contrib.framework.md#VariableDeviceChooser)
- * [`with_same_shape`](../../api_docs/python/contrib.framework.md#with_same_shape)
- * [`with_shape`](../../api_docs/python/contrib.framework.md#with_shape)
- * [`zero_initializer`](../../api_docs/python/contrib.framework.md#zero_initializer)
-
-* **[Graph Editor (contrib)](../../api_docs/python/contrib.graph_editor.md)**:
- * [`add_control_inputs`](../../api_docs/python/contrib.graph_editor.md#add_control_inputs)
- * [`assign_renamed_collections_handler`](../../api_docs/python/contrib.graph_editor.md#assign_renamed_collections_handler)
- * [`bypass`](../../api_docs/python/contrib.graph_editor.md#bypass)
- * [`can_be_regex`](../../api_docs/python/contrib.graph_editor.md#can_be_regex)
- * [`check_cios`](../../api_docs/python/contrib.graph_editor.md#check_cios)
- * [`compute_boundary_ts`](../../api_docs/python/contrib.graph_editor.md#compute_boundary_ts)
- * [`connect`](../../api_docs/python/contrib.graph_editor.md#connect)
- * [`ControlOutputs`](../../api_docs/python/contrib.graph_editor.md#ControlOutputs)
- * [`copy_op_handler`](../../api_docs/python/contrib.graph_editor.md#copy_op_handler)
- * [`copy_with_input_replacements`](../../api_docs/python/contrib.graph_editor.md#copy_with_input_replacements)
- * [`detach`](../../api_docs/python/contrib.graph_editor.md#detach)
- * [`detach_control_inputs`](../../api_docs/python/contrib.graph_editor.md#detach_control_inputs)
- * [`detach_control_outputs`](../../api_docs/python/contrib.graph_editor.md#detach_control_outputs)
- * [`detach_inputs`](../../api_docs/python/contrib.graph_editor.md#detach_inputs)
- * [`detach_outputs`](../../api_docs/python/contrib.graph_editor.md#detach_outputs)
- * [`filter_ops`](../../api_docs/python/contrib.graph_editor.md#filter_ops)
- * [`filter_ops_from_regex`](../../api_docs/python/contrib.graph_editor.md#filter_ops_from_regex)
- * [`filter_ts`](../../api_docs/python/contrib.graph_editor.md#filter_ts)
- * [`filter_ts_from_regex`](../../api_docs/python/contrib.graph_editor.md#filter_ts_from_regex)
- * [`get_backward_walk_ops`](../../api_docs/python/contrib.graph_editor.md#get_backward_walk_ops)
- * [`get_consuming_ops`](../../api_docs/python/contrib.graph_editor.md#get_consuming_ops)
- * [`get_forward_walk_ops`](../../api_docs/python/contrib.graph_editor.md#get_forward_walk_ops)
- * [`get_generating_ops`](../../api_docs/python/contrib.graph_editor.md#get_generating_ops)
- * [`get_name_scope_ops`](../../api_docs/python/contrib.graph_editor.md#get_name_scope_ops)
- * [`get_ops_ios`](../../api_docs/python/contrib.graph_editor.md#get_ops_ios)
- * [`get_tensors`](../../api_docs/python/contrib.graph_editor.md#get_tensors)
- * [`get_walks_intersection_ops`](../../api_docs/python/contrib.graph_editor.md#get_walks_intersection_ops)
- * [`get_walks_union_ops`](../../api_docs/python/contrib.graph_editor.md#get_walks_union_ops)
- * [`get_within_boundary_ops`](../../api_docs/python/contrib.graph_editor.md#get_within_boundary_ops)
- * [`graph_replace`](../../api_docs/python/contrib.graph_editor.md#graph_replace)
- * [`keep_t_if_possible_handler`](../../api_docs/python/contrib.graph_editor.md#keep_t_if_possible_handler)
- * [`make_list_of_op`](../../api_docs/python/contrib.graph_editor.md#make_list_of_op)
- * [`make_list_of_t`](../../api_docs/python/contrib.graph_editor.md#make_list_of_t)
- * [`make_placeholder_from_dtype_and_shape`](../../api_docs/python/contrib.graph_editor.md#make_placeholder_from_dtype_and_shape)
- * [`make_placeholder_from_tensor`](../../api_docs/python/contrib.graph_editor.md#make_placeholder_from_tensor)
- * [`make_regex`](../../api_docs/python/contrib.graph_editor.md#make_regex)
- * [`make_view`](../../api_docs/python/contrib.graph_editor.md#make_view)
- * [`make_view_from_scope`](../../api_docs/python/contrib.graph_editor.md#make_view_from_scope)
- * [`op_type`](../../api_docs/python/contrib.graph_editor.md#op_type)
- * [`OpMatcher`](../../api_docs/python/contrib.graph_editor.md#OpMatcher)
- * [`ph`](../../api_docs/python/contrib.graph_editor.md#ph)
- * [`placeholder_name`](../../api_docs/python/contrib.graph_editor.md#placeholder_name)
- * [`remove_control_inputs`](../../api_docs/python/contrib.graph_editor.md#remove_control_inputs)
- * [`replace_t_with_placeholder_handler`](../../api_docs/python/contrib.graph_editor.md#replace_t_with_placeholder_handler)
- * [`reroute_inputs`](../../api_docs/python/contrib.graph_editor.md#reroute_inputs)
- * [`reroute_ios`](../../api_docs/python/contrib.graph_editor.md#reroute_ios)
- * [`reroute_outputs`](../../api_docs/python/contrib.graph_editor.md#reroute_outputs)
- * [`reroute_ts`](../../api_docs/python/contrib.graph_editor.md#reroute_ts)
- * [`select_ops`](../../api_docs/python/contrib.graph_editor.md#select_ops)
- * [`select_ops_and_ts`](../../api_docs/python/contrib.graph_editor.md#select_ops_and_ts)
- * [`select_ts`](../../api_docs/python/contrib.graph_editor.md#select_ts)
- * [`sgv`](../../api_docs/python/contrib.graph_editor.md#sgv)
- * [`sgv_scope`](../../api_docs/python/contrib.graph_editor.md#sgv_scope)
- * [`SubGraphView`](../../api_docs/python/contrib.graph_editor.md#SubGraphView)
- * [`swap_inputs`](../../api_docs/python/contrib.graph_editor.md#swap_inputs)
- * [`swap_ios`](../../api_docs/python/contrib.graph_editor.md#swap_ios)
- * [`swap_outputs`](../../api_docs/python/contrib.graph_editor.md#swap_outputs)
- * [`swap_ts`](../../api_docs/python/contrib.graph_editor.md#swap_ts)
- * [`transform_op_if_inside_handler`](../../api_docs/python/contrib.graph_editor.md#transform_op_if_inside_handler)
- * [`Transformer`](../../api_docs/python/contrib.graph_editor.md#Transformer)
- * [`TransformerInfo`](../../api_docs/python/contrib.graph_editor.md#TransformerInfo)
-
-* **[Integrate (contrib)](../../api_docs/python/contrib.integrate.md)**:
- * [`odeint`](../../api_docs/python/contrib.integrate.md#odeint)
-
-* **[Layers (contrib)](../../api_docs/python/contrib.layers.md)**:
- * [`apply_regularization`](../../api_docs/python/contrib.layers.md#apply_regularization)
- * [`avg_pool2d`](../../api_docs/python/contrib.layers.md#avg_pool2d)
- * [`batch_norm`](../../api_docs/python/contrib.layers.md#batch_norm)
- * [`bucketized_column`](../../api_docs/python/contrib.layers.md#bucketized_column)
- * [`check_feature_columns`](../../api_docs/python/contrib.layers.md#check_feature_columns)
- * [`conv2d_in_plane`](../../api_docs/python/contrib.layers.md#conv2d_in_plane)
- * [`conv2d_transpose`](../../api_docs/python/contrib.layers.md#conv2d_transpose)
- * [`convolution2d`](../../api_docs/python/contrib.layers.md#convolution2d)
- * [`convolution2d_in_plane`](../../api_docs/python/contrib.layers.md#convolution2d_in_plane)
- * [`convolution2d_transpose`](../../api_docs/python/contrib.layers.md#convolution2d_transpose)
- * [`create_feature_spec_for_parsing`](../../api_docs/python/contrib.layers.md#create_feature_spec_for_parsing)
- * [`crossed_column`](../../api_docs/python/contrib.layers.md#crossed_column)
- * [`dropout`](../../api_docs/python/contrib.layers.md#dropout)
- * [`embed_sequence`](../../api_docs/python/contrib.layers.md#embed_sequence)
- * [`embedding_column`](../../api_docs/python/contrib.layers.md#embedding_column)
- * [`flatten`](../../api_docs/python/contrib.layers.md#flatten)
- * [`fully_connected`](../../api_docs/python/contrib.layers.md#fully_connected)
- * [`infer_real_valued_columns`](../../api_docs/python/contrib.layers.md#infer_real_valued_columns)
- * [`input_from_feature_columns`](../../api_docs/python/contrib.layers.md#input_from_feature_columns)
- * [`joint_weighted_sum_from_feature_columns`](../../api_docs/python/contrib.layers.md#joint_weighted_sum_from_feature_columns)
- * [`l1_regularizer`](../../api_docs/python/contrib.layers.md#l1_regularizer)
- * [`l2_regularizer`](../../api_docs/python/contrib.layers.md#l2_regularizer)
- * [`layer_norm`](../../api_docs/python/contrib.layers.md#layer_norm)
- * [`legacy_fully_connected`](../../api_docs/python/contrib.layers.md#legacy_fully_connected)
- * [`legacy_linear`](../../api_docs/python/contrib.layers.md#legacy_linear)
- * [`legacy_relu`](../../api_docs/python/contrib.layers.md#legacy_relu)
- * [`linear`](../../api_docs/python/contrib.layers.md#linear)
- * [`make_place_holder_tensors_for_base_features`](../../api_docs/python/contrib.layers.md#make_place_holder_tensors_for_base_features)
- * [`max_pool2d`](../../api_docs/python/contrib.layers.md#max_pool2d)
- * [`multi_class_target`](../../api_docs/python/contrib.layers.md#multi_class_target)
- * [`one_hot_column`](../../api_docs/python/contrib.layers.md#one_hot_column)
- * [`one_hot_encoding`](../../api_docs/python/contrib.layers.md#one_hot_encoding)
- * [`optimize_loss`](../../api_docs/python/contrib.layers.md#optimize_loss)
- * [`parse_feature_columns_from_examples`](../../api_docs/python/contrib.layers.md#parse_feature_columns_from_examples)
- * [`parse_feature_columns_from_sequence_examples`](../../api_docs/python/contrib.layers.md#parse_feature_columns_from_sequence_examples)
- * [`real_valued_column`](../../api_docs/python/contrib.layers.md#real_valued_column)
- * [`regression_target`](../../api_docs/python/contrib.layers.md#regression_target)
- * [`relu`](../../api_docs/python/contrib.layers.md#relu)
- * [`relu6`](../../api_docs/python/contrib.layers.md#relu6)
- * [`repeat`](../../api_docs/python/contrib.layers.md#repeat)
- * [`safe_embedding_lookup_sparse`](../../api_docs/python/contrib.layers.md#safe_embedding_lookup_sparse)
- * [`scattered_embedding_column`](../../api_docs/python/contrib.layers.md#scattered_embedding_column)
- * [`separable_conv2d`](../../api_docs/python/contrib.layers.md#separable_conv2d)
- * [`separable_convolution2d`](../../api_docs/python/contrib.layers.md#separable_convolution2d)
- * [`sequence_input_from_feature_columns`](../../api_docs/python/contrib.layers.md#sequence_input_from_feature_columns)
- * [`shared_embedding_columns`](../../api_docs/python/contrib.layers.md#shared_embedding_columns)
- * [`softmax`](../../api_docs/python/contrib.layers.md#softmax)
- * [`sparse_column_with_hash_bucket`](../../api_docs/python/contrib.layers.md#sparse_column_with_hash_bucket)
- * [`sparse_column_with_integerized_feature`](../../api_docs/python/contrib.layers.md#sparse_column_with_integerized_feature)
- * [`sparse_column_with_keys`](../../api_docs/python/contrib.layers.md#sparse_column_with_keys)
- * [`stack`](../../api_docs/python/contrib.layers.md#stack)
- * [`sum_regularizer`](../../api_docs/python/contrib.layers.md#sum_regularizer)
- * [`summarize_activation`](../../api_docs/python/contrib.layers.md#summarize_activation)
- * [`summarize_activations`](../../api_docs/python/contrib.layers.md#summarize_activations)
- * [`summarize_collection`](../../api_docs/python/contrib.layers.md#summarize_collection)
- * [`summarize_tensor`](../../api_docs/python/contrib.layers.md#summarize_tensor)
- * [`summarize_tensors`](../../api_docs/python/contrib.layers.md#summarize_tensors)
- * [`unit_norm`](../../api_docs/python/contrib.layers.md#unit_norm)
- * [`variance_scaling_initializer`](../../api_docs/python/contrib.layers.md#variance_scaling_initializer)
- * [`weighted_sparse_column`](../../api_docs/python/contrib.layers.md#weighted_sparse_column)
- * [`weighted_sum_from_feature_columns`](../../api_docs/python/contrib.layers.md#weighted_sum_from_feature_columns)
- * [`xavier_initializer`](../../api_docs/python/contrib.layers.md#xavier_initializer)
- * [`xavier_initializer_conv2d`](../../api_docs/python/contrib.layers.md#xavier_initializer_conv2d)
-
-* **[Learn (contrib)](../../api_docs/python/contrib.learn.md)**:
- * [`BaseEstimator`](../../api_docs/python/contrib.learn.md#BaseEstimator)
- * [`build_parsing_serving_input_fn`](../../api_docs/python/contrib.learn.md#build_parsing_serving_input_fn)
- * [`DNNClassifier`](../../api_docs/python/contrib.learn.md#DNNClassifier)
- * [`DNNLinearCombinedClassifier`](../../api_docs/python/contrib.learn.md#DNNLinearCombinedClassifier)
- * [`DNNLinearCombinedRegressor`](../../api_docs/python/contrib.learn.md#DNNLinearCombinedRegressor)
- * [`DNNRegressor`](../../api_docs/python/contrib.learn.md#DNNRegressor)
- * [`Estimator`](../../api_docs/python/contrib.learn.md#Estimator)
- * [`Evaluable`](../../api_docs/python/contrib.learn.md#Evaluable)
- * [`evaluate`](../../api_docs/python/contrib.learn.md#evaluate)
- * [`Experiment`](../../api_docs/python/contrib.learn.md#Experiment)
- * [`ExportStrategy`](../../api_docs/python/contrib.learn.md#ExportStrategy)
- * [`extract_dask_data`](../../api_docs/python/contrib.learn.md#extract_dask_data)
- * [`extract_dask_labels`](../../api_docs/python/contrib.learn.md#extract_dask_labels)
- * [`extract_pandas_data`](../../api_docs/python/contrib.learn.md#extract_pandas_data)
- * [`extract_pandas_labels`](../../api_docs/python/contrib.learn.md#extract_pandas_labels)
- * [`extract_pandas_matrix`](../../api_docs/python/contrib.learn.md#extract_pandas_matrix)
- * [`infer`](../../api_docs/python/contrib.learn.md#infer)
- * [`infer_real_valued_columns_from_input`](../../api_docs/python/contrib.learn.md#infer_real_valued_columns_from_input)
- * [`infer_real_valued_columns_from_input_fn`](../../api_docs/python/contrib.learn.md#infer_real_valued_columns_from_input_fn)
- * [`InputFnOps`](../../api_docs/python/contrib.learn.md#InputFnOps)
- * [`KMeansClustering`](../../api_docs/python/contrib.learn.md#KMeansClustering)
- * [`LinearClassifier`](../../api_docs/python/contrib.learn.md#LinearClassifier)
- * [`LinearRegressor`](../../api_docs/python/contrib.learn.md#LinearRegressor)
- * [`LogisticRegressor`](../../api_docs/python/contrib.learn.md#LogisticRegressor)
- * [`make_export_strategy`](../../api_docs/python/contrib.learn.md#make_export_strategy)
- * [`MetricSpec`](../../api_docs/python/contrib.learn.md#MetricSpec)
- * [`ModeKeys`](../../api_docs/python/contrib.learn.md#ModeKeys)
- * [`ModelFnOps`](../../api_docs/python/contrib.learn.md#ModelFnOps)
- * [`NanLossDuringTrainingError`](../../api_docs/python/contrib.learn.md#NanLossDuringTrainingError)
- * [`NotFittedError`](../../api_docs/python/contrib.learn.md#NotFittedError)
- * [`PredictionKey`](../../api_docs/python/contrib.learn.md#PredictionKey)
- * [`ProblemType`](../../api_docs/python/contrib.learn.md#ProblemType)
- * [`read_batch_examples`](../../api_docs/python/contrib.learn.md#read_batch_examples)
- * [`read_batch_features`](../../api_docs/python/contrib.learn.md#read_batch_features)
- * [`read_batch_record_features`](../../api_docs/python/contrib.learn.md#read_batch_record_features)
- * [`run_feeds`](../../api_docs/python/contrib.learn.md#run_feeds)
- * [`run_n`](../../api_docs/python/contrib.learn.md#run_n)
- * [`RunConfig`](../../api_docs/python/contrib.learn.md#RunConfig)
- * [`TaskType`](../../api_docs/python/contrib.learn.md#TaskType)
- * [`train`](../../api_docs/python/contrib.learn.md#train)
- * [`Trainable`](../../api_docs/python/contrib.learn.md#Trainable)
-
-* **[Monitors (contrib)](../../api_docs/python/contrib.learn.monitors.md)**:
- * [`BaseMonitor`](../../api_docs/python/contrib.learn.monitors.md#BaseMonitor)
- * [`CaptureVariable`](../../api_docs/python/contrib.learn.monitors.md#CaptureVariable)
- * [`CheckpointSaver`](../../api_docs/python/contrib.learn.monitors.md#CheckpointSaver)
- * [`EveryN`](../../api_docs/python/contrib.learn.monitors.md#EveryN)
- * [`ExportMonitor`](../../api_docs/python/contrib.learn.monitors.md#ExportMonitor)
- * [`get_default_monitors`](../../api_docs/python/contrib.learn.monitors.md#get_default_monitors)
- * [`GraphDump`](../../api_docs/python/contrib.learn.monitors.md#GraphDump)
- * [`LoggingTrainable`](../../api_docs/python/contrib.learn.monitors.md#LoggingTrainable)
- * [`NanLoss`](../../api_docs/python/contrib.learn.monitors.md#NanLoss)
- * [`PrintTensor`](../../api_docs/python/contrib.learn.monitors.md#PrintTensor)
- * [`replace_monitors_with_hooks`](../../api_docs/python/contrib.learn.monitors.md#replace_monitors_with_hooks)
- * [`RunHookAdapterForMonitors`](../../api_docs/python/contrib.learn.monitors.md#RunHookAdapterForMonitors)
- * [`StepCounter`](../../api_docs/python/contrib.learn.monitors.md#StepCounter)
- * [`StopAtStep`](../../api_docs/python/contrib.learn.monitors.md#StopAtStep)
- * [`SummarySaver`](../../api_docs/python/contrib.learn.monitors.md#SummarySaver)
- * [`SummaryWriterCache`](../../api_docs/python/contrib.learn.monitors.md#SummaryWriterCache)
- * [`ValidationMonitor`](../../api_docs/python/contrib.learn.monitors.md#ValidationMonitor)
-
-* **[Sequence to Sequence (contrib)](../../api_docs/python/contrib.legacy_seq2seq.md)**:
- * [`attention_decoder`](../../api_docs/python/contrib.legacy_seq2seq.md#attention_decoder)
- * [`basic_rnn_seq2seq`](../../api_docs/python/contrib.legacy_seq2seq.md#basic_rnn_seq2seq)
- * [`embedding_attention_decoder`](../../api_docs/python/contrib.legacy_seq2seq.md#embedding_attention_decoder)
- * [`embedding_attention_seq2seq`](../../api_docs/python/contrib.legacy_seq2seq.md#embedding_attention_seq2seq)
- * [`embedding_rnn_decoder`](../../api_docs/python/contrib.legacy_seq2seq.md#embedding_rnn_decoder)
- * [`embedding_rnn_seq2seq`](../../api_docs/python/contrib.legacy_seq2seq.md#embedding_rnn_seq2seq)
- * [`embedding_tied_rnn_seq2seq`](../../api_docs/python/contrib.legacy_seq2seq.md#embedding_tied_rnn_seq2seq)
- * [`model_with_buckets`](../../api_docs/python/contrib.legacy_seq2seq.md#model_with_buckets)
- * [`one2many_rnn_seq2seq`](../../api_docs/python/contrib.legacy_seq2seq.md#one2many_rnn_seq2seq)
- * [`rnn_decoder`](../../api_docs/python/contrib.legacy_seq2seq.md#rnn_decoder)
- * [`sequence_loss`](../../api_docs/python/contrib.legacy_seq2seq.md#sequence_loss)
- * [`sequence_loss_by_example`](../../api_docs/python/contrib.legacy_seq2seq.md#sequence_loss_by_example)
- * [`tied_rnn_seq2seq`](../../api_docs/python/contrib.legacy_seq2seq.md#tied_rnn_seq2seq)
-
-* **[Linear Algebra (contrib)](../../api_docs/python/contrib.linalg.md)**:
- * [`LinearOperator`](../../api_docs/python/contrib.linalg.md#LinearOperator)
- * [`LinearOperatorComposition`](../../api_docs/python/contrib.linalg.md#LinearOperatorComposition)
- * [`LinearOperatorDiag`](../../api_docs/python/contrib.linalg.md#LinearOperatorDiag)
- * [`LinearOperatorIdentity`](../../api_docs/python/contrib.linalg.md#LinearOperatorIdentity)
- * [`LinearOperatorMatrix`](../../api_docs/python/contrib.linalg.md#LinearOperatorMatrix)
- * [`LinearOperatorScaledIdentity`](../../api_docs/python/contrib.linalg.md#LinearOperatorScaledIdentity)
- * [`LinearOperatorTriL`](../../api_docs/python/contrib.linalg.md#LinearOperatorTriL)
- * [`LinearOperatorUDVHUpdate`](../../api_docs/python/contrib.linalg.md#LinearOperatorUDVHUpdate)
-
-* **[Losses (contrib)](../../api_docs/python/contrib.losses.md)**:
- * [`absolute_difference`](../../api_docs/python/contrib.losses.md#absolute_difference)
- * [`add_loss`](../../api_docs/python/contrib.losses.md#add_loss)
- * [`compute_weighted_loss`](../../api_docs/python/contrib.losses.md#compute_weighted_loss)
- * [`cosine_distance`](../../api_docs/python/contrib.losses.md#cosine_distance)
- * [`get_losses`](../../api_docs/python/contrib.losses.md#get_losses)
- * [`get_regularization_losses`](../../api_docs/python/contrib.losses.md#get_regularization_losses)
- * [`get_total_loss`](../../api_docs/python/contrib.losses.md#get_total_loss)
- * [`hinge_loss`](../../api_docs/python/contrib.losses.md#hinge_loss)
- * [`log_loss`](../../api_docs/python/contrib.losses.md#log_loss)
- * [`mean_pairwise_squared_error`](../../api_docs/python/contrib.losses.md#mean_pairwise_squared_error)
- * [`mean_squared_error`](../../api_docs/python/contrib.losses.md#mean_squared_error)
- * [`sigmoid_cross_entropy`](../../api_docs/python/contrib.losses.md#sigmoid_cross_entropy)
- * [`softmax_cross_entropy`](../../api_docs/python/contrib.losses.md#softmax_cross_entropy)
- * [`sparse_softmax_cross_entropy`](../../api_docs/python/contrib.losses.md#sparse_softmax_cross_entropy)
-
-* **[Optimization (contrib)](../../api_docs/python/contrib.opt.md)**:
- * [`ExternalOptimizerInterface`](../../api_docs/python/contrib.opt.md#ExternalOptimizerInterface)
- * [`MovingAverageOptimizer`](../../api_docs/python/contrib.opt.md#MovingAverageOptimizer)
- * [`ScipyOptimizerInterface`](../../api_docs/python/contrib.opt.md#ScipyOptimizerInterface)
- * [`VariableClippingOptimizer`](../../api_docs/python/contrib.opt.md#VariableClippingOptimizer)
-
-* **[RNN and Cells (contrib)](../../api_docs/python/contrib.rnn.md)**:
- * [`AttentionCellWrapper`](../../api_docs/python/contrib.rnn.md#AttentionCellWrapper)
- * [`BasicLSTMCell`](../../api_docs/python/contrib.rnn.md#BasicLSTMCell)
- * [`BasicRNNCell`](../../api_docs/python/contrib.rnn.md#BasicRNNCell)
- * [`CompiledWrapper`](../../api_docs/python/contrib.rnn.md#CompiledWrapper)
- * [`CoupledInputForgetGateLSTMCell`](../../api_docs/python/contrib.rnn.md#CoupledInputForgetGateLSTMCell)
- * [`DeviceWrapper`](../../api_docs/python/contrib.rnn.md#DeviceWrapper)
- * [`DropoutWrapper`](../../api_docs/python/contrib.rnn.md#DropoutWrapper)
- * [`EmbeddingWrapper`](../../api_docs/python/contrib.rnn.md#EmbeddingWrapper)
- * [`FusedRNNCell`](../../api_docs/python/contrib.rnn.md#FusedRNNCell)
- * [`FusedRNNCellAdaptor`](../../api_docs/python/contrib.rnn.md#FusedRNNCellAdaptor)
- * [`GridLSTMCell`](../../api_docs/python/contrib.rnn.md#GridLSTMCell)
- * [`GRUBlockCell`](../../api_docs/python/contrib.rnn.md#GRUBlockCell)
- * [`GRUCell`](../../api_docs/python/contrib.rnn.md#GRUCell)
- * [`InputProjectionWrapper`](../../api_docs/python/contrib.rnn.md#InputProjectionWrapper)
- * [`LayerNormBasicLSTMCell`](../../api_docs/python/contrib.rnn.md#LayerNormBasicLSTMCell)
- * [`LSTMBlockCell`](../../api_docs/python/contrib.rnn.md#LSTMBlockCell)
- * [`LSTMBlockFusedCell`](../../api_docs/python/contrib.rnn.md#LSTMBlockFusedCell)
- * [`LSTMBlockWrapper`](../../api_docs/python/contrib.rnn.md#LSTMBlockWrapper)
- * [`LSTMCell`](../../api_docs/python/contrib.rnn.md#LSTMCell)
- * [`LSTMStateTuple`](../../api_docs/python/contrib.rnn.md#LSTMStateTuple)
- * [`MultiRNNCell`](../../api_docs/python/contrib.rnn.md#MultiRNNCell)
- * [`OutputProjectionWrapper`](../../api_docs/python/contrib.rnn.md#OutputProjectionWrapper)
- * [`ResidualWrapper`](../../api_docs/python/contrib.rnn.md#ResidualWrapper)
- * [`RNNCell`](../../api_docs/python/contrib.rnn.md#RNNCell)
- * [`stack_bidirectional_dynamic_rnn`](../../api_docs/python/contrib.rnn.md#stack_bidirectional_dynamic_rnn)
- * [`static_bidirectional_rnn`](../../api_docs/python/contrib.rnn.md#static_bidirectional_rnn)
- * [`static_rnn`](../../api_docs/python/contrib.rnn.md#static_rnn)
- * [`static_state_saving_rnn`](../../api_docs/python/contrib.rnn.md#static_state_saving_rnn)
- * [`TimeFreqLSTMCell`](../../api_docs/python/contrib.rnn.md#TimeFreqLSTMCell)
- * [`TimeReversedFusedRNN`](../../api_docs/python/contrib.rnn.md#TimeReversedFusedRNN)
-
-* **[Metrics (contrib)](../../api_docs/python/contrib.metrics.md)**:
- * [`accuracy`](../../api_docs/python/contrib.metrics.md#accuracy)
- * [`aggregate_metric_map`](../../api_docs/python/contrib.metrics.md#aggregate_metric_map)
- * [`aggregate_metrics`](../../api_docs/python/contrib.metrics.md#aggregate_metrics)
- * [`auc_using_histogram`](../../api_docs/python/contrib.metrics.md#auc_using_histogram)
- * [`confusion_matrix`](../../api_docs/python/contrib.metrics.md#confusion_matrix)
- * [`set_difference`](../../api_docs/python/contrib.metrics.md#set_difference)
- * [`set_intersection`](../../api_docs/python/contrib.metrics.md#set_intersection)
- * [`set_size`](../../api_docs/python/contrib.metrics.md#set_size)
- * [`set_union`](../../api_docs/python/contrib.metrics.md#set_union)
- * [`streaming_accuracy`](../../api_docs/python/contrib.metrics.md#streaming_accuracy)
- * [`streaming_auc`](../../api_docs/python/contrib.metrics.md#streaming_auc)
- * [`streaming_concat`](../../api_docs/python/contrib.metrics.md#streaming_concat)
- * [`streaming_covariance`](../../api_docs/python/contrib.metrics.md#streaming_covariance)
- * [`streaming_false_negatives`](../../api_docs/python/contrib.metrics.md#streaming_false_negatives)
- * [`streaming_false_negatives_at_thresholds`](../../api_docs/python/contrib.metrics.md#streaming_false_negatives_at_thresholds)
- * [`streaming_false_positives`](../../api_docs/python/contrib.metrics.md#streaming_false_positives)
- * [`streaming_false_positives_at_thresholds`](../../api_docs/python/contrib.metrics.md#streaming_false_positives_at_thresholds)
- * [`streaming_mean`](../../api_docs/python/contrib.metrics.md#streaming_mean)
- * [`streaming_mean_absolute_error`](../../api_docs/python/contrib.metrics.md#streaming_mean_absolute_error)
- * [`streaming_mean_cosine_distance`](../../api_docs/python/contrib.metrics.md#streaming_mean_cosine_distance)
- * [`streaming_mean_iou`](../../api_docs/python/contrib.metrics.md#streaming_mean_iou)
- * [`streaming_mean_relative_error`](../../api_docs/python/contrib.metrics.md#streaming_mean_relative_error)
- * [`streaming_mean_squared_error`](../../api_docs/python/contrib.metrics.md#streaming_mean_squared_error)
- * [`streaming_mean_tensor`](../../api_docs/python/contrib.metrics.md#streaming_mean_tensor)
- * [`streaming_pearson_correlation`](../../api_docs/python/contrib.metrics.md#streaming_pearson_correlation)
- * [`streaming_percentage_less`](../../api_docs/python/contrib.metrics.md#streaming_percentage_less)
- * [`streaming_precision`](../../api_docs/python/contrib.metrics.md#streaming_precision)
- * [`streaming_precision_at_thresholds`](../../api_docs/python/contrib.metrics.md#streaming_precision_at_thresholds)
- * [`streaming_recall`](../../api_docs/python/contrib.metrics.md#streaming_recall)
- * [`streaming_recall_at_k`](../../api_docs/python/contrib.metrics.md#streaming_recall_at_k)
- * [`streaming_recall_at_thresholds`](../../api_docs/python/contrib.metrics.md#streaming_recall_at_thresholds)
- * [`streaming_root_mean_squared_error`](../../api_docs/python/contrib.metrics.md#streaming_root_mean_squared_error)
- * [`streaming_sensitivity_at_specificity`](../../api_docs/python/contrib.metrics.md#streaming_sensitivity_at_specificity)
- * [`streaming_sparse_average_precision_at_k`](../../api_docs/python/contrib.metrics.md#streaming_sparse_average_precision_at_k)
- * [`streaming_sparse_precision_at_k`](../../api_docs/python/contrib.metrics.md#streaming_sparse_precision_at_k)
- * [`streaming_sparse_precision_at_top_k`](../../api_docs/python/contrib.metrics.md#streaming_sparse_precision_at_top_k)
- * [`streaming_sparse_recall_at_k`](../../api_docs/python/contrib.metrics.md#streaming_sparse_recall_at_k)
- * [`streaming_specificity_at_sensitivity`](../../api_docs/python/contrib.metrics.md#streaming_specificity_at_sensitivity)
- * [`streaming_true_negatives`](../../api_docs/python/contrib.metrics.md#streaming_true_negatives)
- * [`streaming_true_negatives_at_thresholds`](../../api_docs/python/contrib.metrics.md#streaming_true_negatives_at_thresholds)
- * [`streaming_true_positives`](../../api_docs/python/contrib.metrics.md#streaming_true_positives)
- * [`streaming_true_positives_at_thresholds`](../../api_docs/python/contrib.metrics.md#streaming_true_positives_at_thresholds)
-
-* **[Training (contrib)](../../api_docs/python/contrib.training.md)**:
- * [`batch_sequences_with_states`](../../api_docs/python/contrib.training.md#batch_sequences_with_states)
- * [`bucket`](../../api_docs/python/contrib.training.md#bucket)
- * [`bucket_by_sequence_length`](../../api_docs/python/contrib.training.md#bucket_by_sequence_length)
- * [`NextQueuedSequenceBatch`](../../api_docs/python/contrib.training.md#NextQueuedSequenceBatch)
- * [`rejection_sample`](../../api_docs/python/contrib.training.md#rejection_sample)
- * [`resample_at_rate`](../../api_docs/python/contrib.training.md#resample_at_rate)
- * [`SequenceQueueingStateSaver`](../../api_docs/python/contrib.training.md#SequenceQueueingStateSaver)
- * [`stratified_sample`](../../api_docs/python/contrib.training.md#stratified_sample)
- * [`weighted_resample`](../../api_docs/python/contrib.training.md#weighted_resample)
-
-* **[Utilities (contrib)](../../api_docs/python/contrib.util.md)**:
- * [`constant_value`](../../api_docs/python/contrib.util.md#constant_value)
- * [`make_ndarray`](../../api_docs/python/contrib.util.md#make_ndarray)
- * [`make_tensor_proto`](../../api_docs/python/contrib.util.md#make_tensor_proto)
- * [`ops_used_by_graph_def`](../../api_docs/python/contrib.util.md#ops_used_by_graph_def)
- * [`stripped_op_list_for_graph`](../../api_docs/python/contrib.util.md#stripped_op_list_for_graph)
-
-* **[Copying Graph Elements (contrib)](../../api_docs/python/contrib.copy_graph.md)**:
- * [`copy_op_to_graph`](../../api_docs/python/contrib.copy_graph.md#copy_op_to_graph)
- * [`copy_variable_to_graph`](../../api_docs/python/contrib.copy_graph.md#copy_variable_to_graph)
- * [`get_copied_op`](../../api_docs/python/contrib.copy_graph.md#get_copied_op)
-
-* **[TensorFlow Debugger](../../api_docs/python/tf_debug.md)**:
- * [`add_debug_tensor_watch`](../../api_docs/python/tf_debug.md#add_debug_tensor_watch)
- * [`DebugDumpDir`](../../api_docs/python/tf_debug.md#DebugDumpDir)
- * [`DebugTensorDatum`](../../api_docs/python/tf_debug.md#DebugTensorDatum)
- * [`DumpingDebugHook`](../../api_docs/python/tf_debug.md#DumpingDebugHook)
- * [`DumpingDebugWrapperSession`](../../api_docs/python/tf_debug.md#DumpingDebugWrapperSession)
- * [`has_inf_or_nan`](../../api_docs/python/tf_debug.md#has_inf_or_nan)
- * [`load_tensor_from_event_file`](../../api_docs/python/tf_debug.md#load_tensor_from_event_file)
- * [`LocalCLIDebugHook`](../../api_docs/python/tf_debug.md#LocalCLIDebugHook)
- * [`LocalCLIDebugWrapperSession`](../../api_docs/python/tf_debug.md#LocalCLIDebugWrapperSession)
- * [`watch_graph`](../../api_docs/python/tf_debug.md#watch_graph)
- * [`watch_graph_with_blacklists`](../../api_docs/python/tf_debug.md#watch_graph_with_blacklists)
-
diff --git a/tensorflow/g3doc/api_docs/python/io_ops.md b/tensorflow/g3doc/api_docs/python/io_ops.md
deleted file mode 100644
index db2abc44b3..0000000000
--- a/tensorflow/g3doc/api_docs/python/io_ops.md
+++ /dev/null
@@ -1,4575 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Inputs and Readers
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Inputs and Readers. See the @{$python/io_ops} guide.
-
-- - -
-
-### `tf.placeholder(dtype, shape=None, name=None)` {#placeholder}
-
-Inserts a placeholder for a tensor that will always be fed.
-
-**Important**: This tensor will produce an error if evaluated. Its value must
-be fed using the `feed_dict` optional argument to `Session.run()`,
-`Tensor.eval()`, or `Operation.run()`.
-
-For example:
-
-```python
-import numpy as np
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, shape=(1024, 1024))
-y = tf.matmul(x, x)
-
-with tf.Session() as sess:
-  print(sess.run(y))  # ERROR: will fail because x was not fed.
-
-  rand_array = np.random.rand(1024, 1024)
-  print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.
-```
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of elements in the tensor to be fed.
-* <b>`shape`</b>: The shape of the tensor to be fed (optional). If the shape is not
- specified, you can feed a tensor of any shape.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` that may be used as a handle for feeding a value, but not
- evaluated directly.
-
-
-- - -
-
-### `tf.placeholder_with_default(input, shape, name=None)` {#placeholder_with_default}
-
-A placeholder op that passes through `input` when its output is not fed.
-
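-For example, a minimal sketch of overriding a default value (the constant
-default below is illustrative):
-
-```python
-import tensorflow as tf
-
-# Evaluates to the default unless a value is fed for `x`.
-x = tf.placeholder_with_default(tf.constant([[1, 2, 3]]), shape=[None, 3])
-y = x * 2
-
-with tf.Session() as sess:
-  print(sess.run(y))  # Uses the default: [[2 4 6]]
-  print(sess.run(y, feed_dict={x: [[0, 0, 0]]}))  # Uses the fed value.
-```
-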
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. The default value to produce when `output` is not fed.
-* <b>`shape`</b>: A `tf.TensorShape` or list of `ints`.
- The (possibly partial) shape of the tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- A placeholder tensor that defaults to `input` if it is not fed.
-
-
-- - -
-
-### `tf.sparse_placeholder(dtype, shape=None, name=None)` {#sparse_placeholder}
-
-Inserts a placeholder for a sparse tensor that will always be fed.
-
-**Important**: This sparse tensor will produce an error if evaluated.
-Its value must be fed using the `feed_dict` optional argument to
-`Session.run()`, `Tensor.eval()`, or `Operation.run()`.
-
-For example:
-
-```python
-import numpy as np
-import tensorflow as tf
-
-x = tf.sparse_placeholder(tf.float32)
-y = tf.sparse_reduce_sum(x)
-
-with tf.Session() as sess:
-  print(sess.run(y))  # ERROR: will fail because x was not fed.
-
-  indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)
-  values = np.array([1.0, 2.0], dtype=np.float32)
-  shape = np.array([7, 9, 2], dtype=np.int64)
-  print(sess.run(y, feed_dict={
-    x: tf.SparseTensorValue(indices, values, shape)}))  # Will succeed.
-  print(sess.run(y, feed_dict={
-    x: (indices, values, shape)}))  # Will succeed.
-
-  sp = tf.SparseTensor(indices=indices, values=values, dense_shape=shape)
-  sp_value = sp.eval(session=sess)
-  print(sess.run(y, feed_dict={x: sp_value}))  # Will succeed.
-```
-
-##### Args:
-
-
-* <b>`dtype`</b>: The type of `values` elements in the tensor to be fed.
-* <b>`shape`</b>: The shape of the tensor to be fed (optional). If the shape is not
- specified, you can feed a sparse tensor of any shape.
-* <b>`name`</b>: A name for prefixing the operations (optional).
-
-##### Returns:
-
- A `SparseTensor` that may be used as a handle for feeding a value, but not
- evaluated directly.
-
-
-- - -
-
-### `class tf.ReaderBase` {#ReaderBase}
-
-Base class for different Reader types that produce a record every step.
-
-Conceptually, Readers convert string 'work units' into records (key,
-value pairs). Typically the 'work units' are filenames and the
-records are extracted from the contents of those files. We want a
-single record produced per step, but a work unit can correspond to
-many records.
-
-Therefore we introduce some decoupling using a queue. The queue
-contains the work units, and the Reader dequeues from the queue when
-it is asked to produce a record (via Read()) but has finished the
-last work unit.
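-
-For example, a minimal sketch of this pattern using a `TextLineReader`
-(the file names are hypothetical):
-
-```python
-import tensorflow as tf
-
-filename_queue = tf.train.string_input_producer(["file0.txt", "file1.txt"])
-reader = tf.TextLineReader()
-key, value = reader.read(filename_queue)  # One record (line) per call.
-
-with tf.Session() as sess:
-  coord = tf.train.Coordinator()
-  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
-  print(sess.run([key, value]))  # e.g. [b'file0.txt:1', b'first line']
-  coord.request_stop()
-  coord.join(threads)
-```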
-- - -
-
-#### `tf.ReaderBase.__init__(reader_ref, supports_serialize=False)` {#ReaderBase.__init__}
-
-Creates a new ReaderBase.
-
-##### Args:
-
-
-* <b>`reader_ref`</b>: The operation that implements the reader.
-* <b>`supports_serialize`</b>: True if the reader implementation can
- serialize its state.
-
-
-- - -
-
-#### `tf.ReaderBase.num_records_produced(name=None)` {#ReaderBase.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.num_work_units_completed(name=None)` {#ReaderBase.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.read(queue, name=None)` {#ReaderBase.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.read_up_to(queue, num_records, name=None)` {#ReaderBase.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than num_records even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.reader_ref` {#ReaderBase.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.ReaderBase.reset(name=None)` {#ReaderBase.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.ReaderBase.restore_state(state, name=None)` {#ReaderBase.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.ReaderBase.serialize_state(name=None)` {#ReaderBase.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.ReaderBase.supports_serialize` {#ReaderBase.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
-
-- - -
-
-### `class tf.TextLineReader` {#TextLineReader}
-
-A Reader that outputs the lines of a file delimited by newlines.
-
-Newlines are stripped from the output.
-See ReaderBase for supported methods.
-- - -
-
-#### `tf.TextLineReader.__init__(skip_header_lines=None, name=None)` {#TextLineReader.__init__}
-
-Create a TextLineReader.
-
-##### Args:
-
-
-* <b>`skip_header_lines`</b>: An optional int. Defaults to 0. Number of lines
- to skip from the beginning of every file.
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.TextLineReader.num_records_produced(name=None)` {#TextLineReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.num_work_units_completed(name=None)` {#TextLineReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.read(queue, name=None)` {#TextLineReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.read_up_to(queue, num_records, name=None)` {#TextLineReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than num_records even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.reader_ref` {#TextLineReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.TextLineReader.reset(name=None)` {#TextLineReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.TextLineReader.restore_state(state, name=None)` {#TextLineReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.TextLineReader.serialize_state(name=None)` {#TextLineReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.TextLineReader.supports_serialize` {#TextLineReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
-
-- - -
-
-### `class tf.WholeFileReader` {#WholeFileReader}
-
-A Reader that outputs the entire contents of a file as a value.
-
-To use, enqueue filenames in a Queue. The output of Read will
-be a filename (key) and the contents of that file (value).
-
-See ReaderBase for supported methods.
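-
-For example, a minimal sketch that pairs each file name with its contents
-(the file names are hypothetical):
-
-```python
-import tensorflow as tf
-
-filename_queue = tf.train.string_input_producer(["a.jpg", "b.jpg"])
-reader = tf.WholeFileReader()
-# key is the file name, value is the raw bytes of that file.
-key, value = reader.read(filename_queue)
-image = tf.image.decode_jpeg(value)
-```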
-- - -
-
-#### `tf.WholeFileReader.__init__(name=None)` {#WholeFileReader.__init__}
-
-Create a WholeFileReader.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.WholeFileReader.num_records_produced(name=None)` {#WholeFileReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.num_work_units_completed(name=None)` {#WholeFileReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.read(queue, name=None)` {#WholeFileReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.read_up_to(queue, num_records, name=None)` {#WholeFileReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than num_records even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.reader_ref` {#WholeFileReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.WholeFileReader.reset(name=None)` {#WholeFileReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.WholeFileReader.restore_state(state, name=None)` {#WholeFileReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.WholeFileReader.serialize_state(name=None)` {#WholeFileReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.WholeFileReader.supports_serialize` {#WholeFileReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
-
-- - -
-
-### `class tf.IdentityReader` {#IdentityReader}
-
-A Reader that outputs the queued work as both the key and value.
-
-To use, enqueue strings in a Queue. Read will take the front
-work string and output (work, work).
-
-See ReaderBase for supported methods.
-- - -
-
-#### `tf.IdentityReader.__init__(name=None)` {#IdentityReader.__init__}
-
-Create an IdentityReader.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.IdentityReader.num_records_produced(name=None)` {#IdentityReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.num_work_units_completed(name=None)` {#IdentityReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.read(queue, name=None)` {#IdentityReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.read_up_to(queue, num_records, name=None)` {#IdentityReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than num_records even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.reader_ref` {#IdentityReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.IdentityReader.reset(name=None)` {#IdentityReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.IdentityReader.restore_state(state, name=None)` {#IdentityReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.IdentityReader.serialize_state(name=None)` {#IdentityReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.IdentityReader.supports_serialize` {#IdentityReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
-
-- - -
-
-### `class tf.TFRecordReader` {#TFRecordReader}
-
-A Reader that outputs the records from a TFRecords file.
-
-See ReaderBase for supported methods.
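-
-For example, a minimal sketch of reading serialized `Example` protos from a
-TFRecords file (the file name and feature key are hypothetical):
-
-```python
-import tensorflow as tf
-
-filename_queue = tf.train.string_input_producer(["data.tfrecords"])
-reader = tf.TFRecordReader()
-_, serialized_example = reader.read(filename_queue)
-features = tf.parse_single_example(
-    serialized_example,
-    features={"label": tf.FixedLenFeature([], tf.int64)})
-```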
-- - -
-
-#### `tf.TFRecordReader.__init__(name=None, options=None)` {#TFRecordReader.__init__}
-
-Create a TFRecordReader.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`options`</b>: A TFRecordOptions object (optional).
-
-
-- - -
-
-#### `tf.TFRecordReader.num_records_produced(name=None)` {#TFRecordReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.num_work_units_completed(name=None)` {#TFRecordReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.read(queue, name=None)` {#TFRecordReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.read_up_to(queue, num_records, name=None)` {#TFRecordReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than num_records even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.reader_ref` {#TFRecordReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.TFRecordReader.reset(name=None)` {#TFRecordReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.TFRecordReader.restore_state(state, name=None)` {#TFRecordReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.TFRecordReader.serialize_state(name=None)` {#TFRecordReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.TFRecordReader.supports_serialize` {#TFRecordReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
-
-- - -
-
-### `class tf.FixedLengthRecordReader` {#FixedLengthRecordReader}
-
-A Reader that outputs fixed-length records from a file.
-
-See ReaderBase for supported methods.
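-
-For example, a minimal sketch in the style of the CIFAR-10 binary format,
-where each record is a 1-byte label followed by a 32x32x3 image (the file
-name is hypothetical):
-
-```python
-import tensorflow as tf
-
-label_bytes, image_bytes = 1, 32 * 32 * 3
-filename_queue = tf.train.string_input_producer(["data.bin"])
-reader = tf.FixedLengthRecordReader(record_bytes=label_bytes + image_bytes)
-key, value = reader.read(filename_queue)
-record = tf.decode_raw(value, tf.uint8)  # Shape: [record_bytes].
-label = tf.cast(record[0], tf.int32)
-image = tf.reshape(record[1:], [3, 32, 32])  # Depth-major layout.
-```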
-- - -
-
-#### `tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None)` {#FixedLengthRecordReader.__init__}
-
-Create a FixedLengthRecordReader.
-
-##### Args:
-
-
-* <b>`record_bytes`</b>: An int.
-* <b>`header_bytes`</b>: An optional int. Defaults to 0.
-* <b>`footer_bytes`</b>: An optional int. Defaults to 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.num_records_produced(name=None)` {#FixedLengthRecordReader.num_records_produced}
-
-Returns the number of records this reader has produced.
-
-This is the same as the number of Read executions that have
-succeeded.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.num_work_units_completed(name=None)` {#FixedLengthRecordReader.num_work_units_completed}
-
-Returns the number of work units this reader has finished processing.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- An int64 Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.read(queue, name=None)` {#FixedLengthRecordReader.read}
-
-Returns the next record (key, value pair) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g. when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (key, value).
-
-* <b>`key`</b>: A string scalar Tensor.
-* <b>`value`</b>: A string scalar Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.read_up_to(queue, num_records, name=None)` {#FixedLengthRecordReader.read_up_to}
-
-Returns up to num_records (key, value pairs) produced by a reader.
-
-Will dequeue a work unit from queue if necessary (e.g., when the
-Reader needs to start reading from a new file since it has
-finished with the previous file).
-It may return fewer than num_records even before the last batch.
-
-##### Args:
-
-
-* <b>`queue`</b>: A Queue or a mutable string Tensor representing a handle
- to a Queue, with string work items.
-* <b>`num_records`</b>: Number of records to read.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of Tensors (keys, values).
-
-* <b>`keys`</b>: A 1-D string Tensor.
-* <b>`values`</b>: A 1-D string Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.reader_ref` {#FixedLengthRecordReader.reader_ref}
-
-Op that implements the reader.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.reset(name=None)` {#FixedLengthRecordReader.reset}
-
-Restore a reader to its initial clean state.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.restore_state(state, name=None)` {#FixedLengthRecordReader.restore_state}
-
-Restore a reader to a previously saved state.
-
-Not all Readers support being restored, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`state`</b>: A string Tensor.
- Result of a SerializeState of a Reader with matching type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.serialize_state(name=None)` {#FixedLengthRecordReader.serialize_state}
-
-Produce a string tensor that encodes the state of a reader.
-
-Not all Readers support being serialized, so this can produce an
-Unimplemented error.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A string Tensor.
-
-
-- - -
-
-#### `tf.FixedLengthRecordReader.supports_serialize` {#FixedLengthRecordReader.supports_serialize}
-
-Whether the Reader implementation can serialize its state.
-
-
-
-- - -
-
-### `tf.decode_csv(records, record_defaults, field_delim=None, name=None)` {#decode_csv}
-
-Convert CSV records to tensors. Each column maps to one tensor.
-
-RFC 4180 format (https://tools.ietf.org/html/rfc4180) is expected for the
-CSV records.
-Note that we allow leading and trailing spaces with int or float fields.
-
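-For example, a minimal sketch decoding two-column records (the literal
-records below stand in for lines read from a file):
-
-```python
-import tensorflow as tf
-
-records = tf.constant(["1,2.5", "3,4.5"])
-# One default per column; an empty default marks a column as required.
-col1, col2 = tf.decode_csv(records, record_defaults=[[0], [0.0]])
-
-with tf.Session() as sess:
-  print(sess.run([col1, col2]))  # [1 3], [2.5 4.5]
-```
-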
-##### Args:
-
-
-* <b>`records`</b>: A `Tensor` of type `string`.
- Each string is a record/row in the csv and all records should have
- the same format.
-* <b>`record_defaults`</b>: A list of `Tensor` objects with types from: `float32`, `int32`, `int64`, `string`.
- One tensor per column of the input record, with either a
- scalar default value for that column or empty if the column is required.
-* <b>`field_delim`</b>: An optional `string`. Defaults to `","`.
- delimiter to separate fields in a record.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A list of `Tensor` objects. Has the same type as `record_defaults`.
- Each tensor will have the same shape as records.
-
-
-- - -
-
-### `tf.decode_raw(bytes, out_type, little_endian=None, name=None)` {#decode_raw}
-
-Reinterpret the bytes of a string as a vector of numbers.
-
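-For example, a minimal sketch turning raw bytes into a `uint8` vector:
-
-```python
-import tensorflow as tf
-
-raw = tf.constant(["\x01\x02\x03\x04"])
-decoded = tf.decode_raw(raw, tf.uint8)
-
-with tf.Session() as sess:
-  print(sess.run(decoded))  # [[1 2 3 4]]
-```
-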
-##### Args:
-
-
-* <b>`bytes`</b>: A `Tensor` of type `string`.
- All the elements must have the same length.
-* <b>`out_type`</b>: A `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64`.
-* <b>`little_endian`</b>: An optional `bool`. Defaults to `True`.
- Whether the input `bytes` are in little-endian order.
- Ignored for `out_type` values that are stored in a single byte like
- `uint8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
- A Tensor with one more dimension than the input `bytes`. The
- added dimension will have size equal to the length of the elements
- of `bytes` divided by the number of bytes to represent `out_type`.
-
-
-- - -
-
-### `class tf.VarLenFeature` {#VarLenFeature}
-
-Configuration for parsing a variable-length input feature.
-
-Fields:
- dtype: Data type of input.
-- - -
-
-#### `tf.VarLenFeature.__getnewargs__()` {#VarLenFeature.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.VarLenFeature.__getstate__()` {#VarLenFeature.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.VarLenFeature.__new__(_cls, dtype)` {#VarLenFeature.__new__}
-
-Create new instance of VarLenFeature(dtype)
-
-
-- - -
-
-#### `tf.VarLenFeature.__repr__()` {#VarLenFeature.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.VarLenFeature.dtype` {#VarLenFeature.dtype}
-
-Alias for field number 0
-
-
-
-- - -
-
-### `class tf.FixedLenFeature` {#FixedLenFeature}
-
-Configuration for parsing a fixed-length input feature.
-
-To treat sparse input as dense, provide a `default_value`; otherwise,
-the parse functions will fail on any examples missing this feature.
-
-Fields:
- shape: Shape of input data.
- dtype: Data type of input.
- default_value: Value to be used if an example is missing this feature. It
- must be compatible with `dtype`.
-- - -
-
-#### `tf.FixedLenFeature.__getnewargs__()` {#FixedLenFeature.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.FixedLenFeature.__getstate__()` {#FixedLenFeature.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.FixedLenFeature.__new__(_cls, shape, dtype, default_value=None)` {#FixedLenFeature.__new__}
-
-Create new instance of FixedLenFeature(shape, dtype, default_value)
-
-
-- - -
-
-#### `tf.FixedLenFeature.__repr__()` {#FixedLenFeature.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.FixedLenFeature.default_value` {#FixedLenFeature.default_value}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.FixedLenFeature.dtype` {#FixedLenFeature.dtype}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.FixedLenFeature.shape` {#FixedLenFeature.shape}
-
-Alias for field number 0
-
-
-
-- - -
-
-### `class tf.FixedLenSequenceFeature` {#FixedLenSequenceFeature}
-
-Configuration for a dense input feature in a sequence item.
-
-To treat a sparse input as dense, provide `allow_missing=True`; otherwise,
-the parse functions will fail on any examples missing this feature.
-
-Fields:
- shape: Shape of input data.
- dtype: Data type of input.
- allow_missing: Whether to allow this feature to be missing from a feature
- list item.
-- - -
-
-#### `tf.FixedLenSequenceFeature.__getnewargs__()` {#FixedLenSequenceFeature.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.__getstate__()` {#FixedLenSequenceFeature.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.__new__(_cls, shape, dtype, allow_missing=False)` {#FixedLenSequenceFeature.__new__}
-
-Create new instance of FixedLenSequenceFeature(shape, dtype, allow_missing)
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.__repr__()` {#FixedLenSequenceFeature.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.allow_missing` {#FixedLenSequenceFeature.allow_missing}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.dtype` {#FixedLenSequenceFeature.dtype}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.FixedLenSequenceFeature.shape` {#FixedLenSequenceFeature.shape}
-
-Alias for field number 0
-
-
-
-- - -
-
-### `class tf.SparseFeature` {#SparseFeature}
-
-Configuration for parsing a sparse input feature.
-
-Fields:
- index_key: Name of index feature. The underlying feature's type must
- be `int64` and its length must always match that of the `value_key`
- feature.
- value_key: Name of value feature. The underlying feature's type must
- be `dtype` and its length must always match that of the `index_key`
- feature.
- dtype: Data type of the `value_key` feature.
- size: A Python int to specify a dimension of the dense shape. Each value in
- the `index_key` feature must be in `[0, size)`.
- already_sorted: A Python boolean to specify whether the values in
- `index_key` are already sorted. If so, sorting is skipped.
- False by default (optional).
-- - -
-
-#### `tf.SparseFeature.__getnewargs__()` {#SparseFeature.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.SparseFeature.__getstate__()` {#SparseFeature.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.SparseFeature.__new__(_cls, index_key, value_key, dtype, size, already_sorted=False)` {#SparseFeature.__new__}
-
-Create new instance of SparseFeature(index_key, value_key, dtype, size, already_sorted)
-
-
-- - -
-
-#### `tf.SparseFeature.__repr__()` {#SparseFeature.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.SparseFeature.already_sorted` {#SparseFeature.already_sorted}
-
-Alias for field number 4
-
-
-- - -
-
-#### `tf.SparseFeature.dtype` {#SparseFeature.dtype}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.SparseFeature.index_key` {#SparseFeature.index_key}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.SparseFeature.size` {#SparseFeature.size}
-
-Alias for field number 3
-
-
-- - -
-
-#### `tf.SparseFeature.value_key` {#SparseFeature.value_key}
-
-Alias for field number 1
-
-
-
-- - -
-
-### `tf.parse_example(serialized, features, name=None, example_names=None)` {#parse_example}
-
-Parses `Example` protos into a `dict` of tensors.
-
-Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
-protos given in `serialized`.
-
-`example_names` may contain descriptive names for the corresponding serialized
-protos. These may be useful for debugging purposes, but they have no effect on
-the output. If not `None`, `example_names` must be the same length as
-`serialized`.
-
-This op parses serialized examples into a dictionary mapping keys to `Tensor`
-and `SparseTensor` objects. `features` is a dict from keys to `VarLenFeature`,
-`SparseFeature`, and `FixedLenFeature` objects. Each `VarLenFeature`
-and `SparseFeature` is mapped to a `SparseTensor`, and each
-`FixedLenFeature` is mapped to a `Tensor`.
-
-Each `VarLenFeature` maps to a `SparseTensor` of the specified type
-representing a ragged matrix. Its indices are `[batch, index]` where `batch`
-is the batch entry the value is from in `serialized`, and `index` is the
-value's index in the list of values associated with that feature and example.
-
-Each `SparseFeature` maps to a `SparseTensor` of the specified type
-representing a sparse matrix of shape
-`(serialized.size(), SparseFeature.size)`. Its indices are `[batch, index]`
-where `batch` is the batch entry the value is from in `serialized`, and
-`index` is given by the values in the `SparseFeature.index_key` feature
-column.
-
-Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or
-`tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`.
-
-`FixedLenFeature` entries with a `default_value` are optional. With no default
-value, we will fail if that `Feature` is missing from any example in
-`serialized`.
-
-Examples:
-
-For example, if one expects a `tf.float32` sparse feature `ft` and three
-serialized `Example`s are provided:
-
-```
-serialized = [
- features
- { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
- features
- { feature {} },
- features
- { feature { key: "ft" value { float_list { value: [3.0] } } } }
-]
-```
-
-then the output will look like:
-
-```
-{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
- values=[1.0, 2.0, 3.0],
- dense_shape=(3, 2)) }
-```
-
-Given two `Example` input protos in `serialized`:
-
-```
-[
- features {
- feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
- feature { key: "gps" value { float_list { value: [] } } }
- },
- features {
- feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
- feature { key: "dank" value { int64_list { value: [ 42 ] } } }
- feature { key: "gps" value { } }
- }
-]
-```
-
-And arguments
-
-```
-example_names: ["input0", "input1"],
-features: {
- "kw": VarLenFeature(tf.string),
- "dank": VarLenFeature(tf.int64),
- "gps": VarLenFeature(tf.float32),
-}
-```
-
-Then the output is a dictionary:
-
-```python
-{
- "kw": SparseTensor(
- indices=[[0, 0], [0, 1], [1, 0]],
- values=["knit", "big", "emmy"]
- dense_shape=[2, 2]),
- "dank": SparseTensor(
- indices=[[1, 0]],
- values=[42],
- dense_shape=[2, 1]),
- "gps": SparseTensor(
- indices=[],
- values=[],
- dense_shape=[2, 0]),
-}
-```
-
-For dense results in two serialized `Example`s:
-
-```
-[
- features {
- feature { key: "age" value { int64_list { value: [ 0 ] } } }
- feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
- },
- features {
- feature { key: "age" value { int64_list { value: [] } } }
- feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
- }
-]
-```
-
-We can use arguments:
-
-```
-example_names: ["input0", "input1"],
-features: {
- "age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
- "gender": FixedLenFeature([], dtype=tf.string),
-}
-```
-
-And the expected output is:
-
-```python
-{
- "age": [[0], [-1]],
- "gender": [["f"], ["f"]],
-}
-```
-
-Given two `Example` input protos in `serialized`:
-
-```
-[
- features {
- feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
- feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } }
- },
- features {
- feature { key: "val" value { float_list { value: [ 0.0 ] } } }
- feature { key: "ix" value { int64_list { value: [ 42 ] } } }
- }
-]
-```
-
-And arguments
-
-```
-example_names: ["input0", "input1"],
-features: {
- "sparse": SparseFeature(
- index_key="ix", value_key="val", dtype=tf.float32, size=100),
-}
-```
-
-Then the output is a dictionary:
-
-```python
-{
- "sparse": SparseTensor(
- indices=[[0, 3], [0, 20], [1, 42]],
- values=[0.5, -1.0, 0.0]
- dense_shape=[2, 100]),
-}
-```
-
-##### Args:
-
-
-* <b>`serialized`</b>: A vector (1-D Tensor) of strings, a batch of binary
- serialized `Example` protos.
-* <b>`features`</b>: A `dict` mapping feature keys to `FixedLenFeature`,
- `VarLenFeature`, and `SparseFeature` values.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`example_names`</b>: A vector (1-D Tensor) of strings (optional), the names of
- the serialized protos in the batch.
-
-##### Returns:
-
- A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any feature is invalid.
-
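-As a runnable companion to the dense example above, here is a minimal sketch,
-assuming a TensorFlow 1.x session runtime (the helper `make_example` and the
-feature values are illustrative, not part of the API):
-
-```python
-import tensorflow as tf
-
-def make_example(age, gender):
-  # Build a serialized Example proto with an int64 "age" and a bytes "gender".
-  return tf.train.Example(features=tf.train.Features(feature={
-      "age": tf.train.Feature(int64_list=tf.train.Int64List(value=age)),
-      "gender": tf.train.Feature(bytes_list=tf.train.BytesList(value=gender)),
-  })).SerializeToString()
-
-serialized = [make_example([0], [b"f"]), make_example([], [b"f"])]
-parsed = tf.parse_example(
-    tf.constant(serialized),
-    features={
-        # A missing "age" falls back to the default_value of -1.
-        "age": tf.FixedLenFeature([], dtype=tf.int64, default_value=-1),
-        "gender": tf.FixedLenFeature([], dtype=tf.string),
-    })
-
-with tf.Session() as sess:
-  print(sess.run(parsed))  # {'age': [0, -1], 'gender': [b'f', b'f']}
-```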
-
-- - -
-
-### `tf.parse_single_example(serialized, features, name=None, example_names=None)` {#parse_single_example}
-
-Parses a single `Example` proto.
-
-Similar to `parse_example`, except:
-
-For dense tensors, the returned `Tensor` is identical to the output of
-`parse_example`, except there is no batch dimension: the output shape is the
-same as the shape given in `dense_shape`.
-
-For `SparseTensor`s, the first (batch) column of the indices matrix is removed
-(so the indices matrix becomes a column vector), the values vector is
-unchanged, and the first (`batch_size`) entry of the shape vector is removed
-(so it is now a single-element vector).
-
-One might see performance advantages by batching `Example` protos with
-`parse_example` instead of using this function directly.
-
-##### Args:
-
-
-* <b>`serialized`</b>: A scalar string Tensor, a single serialized Example.
- See `_parse_single_example_raw` documentation for more details.
-* <b>`features`</b>: A `dict` mapping feature keys to `FixedLenFeature` or
- `VarLenFeature` values.
-* <b>`name`</b>: A name for this operation (optional).
-* <b>`example_names`</b>: (Optional) A scalar string Tensor, the associated name.
- See `_parse_single_example_raw` documentation for more details.
-
-##### Returns:
-
- A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if any feature is invalid.
-
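-A minimal sketch of the difference from `parse_example`, assuming a
-TensorFlow 1.x session runtime (the feature name is illustrative):
-
-```python
-import tensorflow as tf
-
-serialized = tf.train.Example(features=tf.train.Features(feature={
-    "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
-})).SerializeToString()
-
-# Note: `serialized` is a scalar string, not a vector, and the result
-# has no batch dimension.
-parsed = tf.parse_single_example(
-    tf.constant(serialized),
-    features={"age": tf.FixedLenFeature([], dtype=tf.int64)})
-
-with tf.Session() as sess:
-  print(sess.run(parsed["age"]))  # 7 (a scalar, not [7])
-```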
-
-- - -
-
-### `tf.parse_tensor(serialized, out_type, name=None)` {#parse_tensor}
-
-Transforms a serialized tensorflow.TensorProto proto into a Tensor.
-
-##### Args:
-
-
-* <b>`serialized`</b>: A `Tensor` of type `string`.
- A scalar string containing a serialized TensorProto proto.
-* <b>`out_type`</b>: A `tf.DType`.
- The type of the serialized tensor. The provided type must match the
- type of the serialized tensor and no implicit conversion will take place.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `out_type`.
-
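-For instance, a `TensorProto` can be round-tripped through a string. This is
-a sketch assuming `tf.contrib.util.make_tensor_proto` is available to build
-the proto on the Python side:
-
-```python
-import tensorflow as tf
-
-# Serialize a TensorProto to a string, then parse it back into a Tensor.
-proto_bytes = tf.contrib.util.make_tensor_proto(
-    [[1.0, 2.0], [3.0, 4.0]], dtype=tf.float32).SerializeToString()
-parsed = tf.parse_tensor(tf.constant(proto_bytes), out_type=tf.float32)
-
-with tf.Session() as sess:
-  print(sess.run(parsed))  # [[1. 2.] [3. 4.]]
-```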
-
-- - -
-
-### `tf.decode_json_example(json_examples, name=None)` {#decode_json_example}
-
-Convert JSON-encoded Example records to binary protocol buffer strings.
-
-This op translates a tensor containing Example records, encoded using
-the [standard JSON
-mapping](https://developers.google.com/protocol-buffers/docs/proto3#json),
-into a tensor containing the same records encoded as binary protocol
-buffers. The resulting tensor can then be fed to any of the other
-Example-parsing ops.
-
-##### Args:
-
-
-* <b>`json_examples`</b>: A `Tensor` of type `string`.
- Each string is a JSON object serialized according to the JSON
- mapping of the Example proto.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
- Each string is a binary Example protocol buffer corresponding
- to the respective element of `json_examples`.
-
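-A sketch of the round trip, using the protobuf `json_format` helper to
-produce the JSON encoding (an assumption on the Python environment, but it
-avoids hand-writing the JSON mapping):
-
-```python
-import tensorflow as tf
-from google.protobuf import json_format
-
-example = tf.train.Example(features=tf.train.Features(feature={
-    "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
-}))
-json_str = json_format.MessageToJson(example)
-
-# Convert the JSON record to a binary proto string, then parse it as usual.
-binary = tf.decode_json_example(tf.constant([json_str]))
-parsed = tf.parse_example(
-    binary, {"age": tf.FixedLenFeature([], dtype=tf.int64)})
-
-with tf.Session() as sess:
-  print(sess.run(parsed["age"]))  # [7]
-```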
-
-- - -
-
-### `class tf.QueueBase` {#QueueBase}
-
-Base class for queue implementations.
-
-A queue is a TensorFlow data structure that stores tensors across
-multiple steps, and exposes operations that enqueue and dequeue
-tensors.
-
-Each queue element is a tuple of one or more tensors, where each
-tuple component has a static dtype, and may have a static shape. The
-queue implementations support versions of enqueue and dequeue that
-handle single elements, as well as versions that enqueue and dequeue
-a batch of elements at once.
-
-See [`tf.FIFOQueue`](#FIFOQueue) and
-[`tf.RandomShuffleQueue`](#RandomShuffleQueue) for concrete
-implementations of this class, and instructions on how to create
-them.
-- - -
-
-#### `tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)` {#QueueBase.__init__}
-
-Constructs a queue object from a queue reference.
-
-The two optional lists, `shapes` and `names`, must be of the same length
-as `dtypes` if provided. The values at a given index `i` indicate the
-shape and name to use for the corresponding queue component in `dtypes`.
-
-##### Args:
-
-
-* <b>`dtypes`</b>: A list of types. The length of dtypes must equal the number
- of tensors in each element.
-* <b>`shapes`</b>: Constraints on the shapes of tensors in an element:
-  A list of shape tuples or None. This list is the same length
-  as dtypes. If the shape of any tensor in the element is constrained,
-  all must be; shapes can be None if the shapes should not be constrained.
-* <b>`names`</b>: Optional list of names. If provided, the `enqueue()` and
- `dequeue()` methods will use dictionaries with these names as keys.
- Must be None or a list or tuple of the same length as `dtypes`.
-* <b>`queue_ref`</b>: The queue reference, i.e. the output of the queue op.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
-
-- - -
-
-#### `tf.QueueBase.close(cancel_pending_enqueues=False, name=None)` {#QueueBase.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.QueueBase.dequeue(name=None)` {#QueueBase.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.QueueBase.dequeue_many(n, name=None)` {#QueueBase.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.QueueBase.dequeue_up_to(n, name=None)` {#QueueBase.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note:** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
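-A minimal sketch of the draining behavior described above, using a
-`tf.FIFOQueue` (which supports `DequeueUpTo`) under a TensorFlow 1.x session:
-
-```python
-import tensorflow as tf
-
-q = tf.FIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[()])
-enqueue = q.enqueue_many([[1, 2, 3, 4, 5]])
-close = q.close()
-batch = q.dequeue_up_to(3)
-
-with tf.Session() as sess:
-  sess.run(enqueue)
-  sess.run(close)
-  print(sess.run(batch))  # [1 2 3]
-  print(sess.run(batch))  # [4 5] -- only 2 remain, fewer than n=3
-```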
-
-- - -
-
-#### `tf.QueueBase.dtypes` {#QueueBase.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.QueueBase.enqueue(vals, name=None)` {#QueueBase.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.QueueBase.enqueue_many(vals, name=None)` {#QueueBase.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.QueueBase.from_list(index, queues)` {#QueueBase.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.QueueBase.name` {#QueueBase.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.QueueBase.names` {#QueueBase.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.QueueBase.queue_ref` {#QueueBase.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.QueueBase.shapes` {#QueueBase.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.QueueBase.size(name=None)` {#QueueBase.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
-
-- - -
-
-### `class tf.FIFOQueue` {#FIFOQueue}
-
-A queue implementation that dequeues elements in first-in first-out order.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-- - -
-
-#### `tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')` {#FIFOQueue.__init__}
-
-Creates a queue that dequeues elements in a first-in first-out order.
-
-A `FIFOQueue` has bounded capacity; supports multiple concurrent
-producers and consumers; and provides exactly-once delivery.
-
-A `FIFOQueue` holds a list of up to `capacity` elements. Each
-element is a fixed-length tuple of tensors whose dtypes are
-described by `dtypes`, and whose shapes are optionally described
-by the `shapes` argument.
-
-If the `shapes` argument is specified, each component of a queue
-element must have the respective fixed shape. If it is
-unspecified, different queue elements may have different shapes,
-but the use of `dequeue_many` is disallowed.
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
- the number of tensors in each queue element.
-* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects
- with the same length as `dtypes`, or `None`.
-* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
-  with the same length as `dtypes`, or `None`. If specified, the dequeue
-  methods return a dictionary with the names as keys.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
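-A minimal producer/consumer sketch, assuming a TensorFlow 1.x session runtime:
-
-```python
-import tensorflow as tf
-
-q = tf.FIFOQueue(capacity=3, dtypes=[tf.float32], shapes=[()])
-enqueue = q.enqueue((10.0,))
-dequeue = q.dequeue()
-
-with tf.Session() as sess:
-  sess.run(enqueue)          # blocks only if the queue is full
-  print(sess.run(q.size()))  # 1
-  print(sess.run(dequeue))   # 10.0, in first-in first-out order
-```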
-
-- - -
-
-#### `tf.FIFOQueue.close(cancel_pending_enqueues=False, name=None)` {#FIFOQueue.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.FIFOQueue.dequeue(name=None)` {#FIFOQueue.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.FIFOQueue.dequeue_many(n, name=None)` {#FIFOQueue.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.FIFOQueue.dequeue_up_to(n, name=None)` {#FIFOQueue.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note:** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.FIFOQueue.dtypes` {#FIFOQueue.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.FIFOQueue.enqueue(vals, name=None)` {#FIFOQueue.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.FIFOQueue.enqueue_many(vals, name=None)` {#FIFOQueue.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.FIFOQueue.from_list(index, queues)` {#FIFOQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.FIFOQueue.name` {#FIFOQueue.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.FIFOQueue.names` {#FIFOQueue.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.FIFOQueue.queue_ref` {#FIFOQueue.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.FIFOQueue.shapes` {#FIFOQueue.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.FIFOQueue.size(name=None)` {#FIFOQueue.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
-
-- - -
-
-### `class tf.PaddingFIFOQueue` {#PaddingFIFOQueue}
-
-A FIFOQueue that supports batching variable-sized tensors by padding.
-
-A `PaddingFIFOQueue` may contain components with dynamic shape, while also
-supporting `dequeue_many`. See the constructor for more details.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-- - -
-
-#### `tf.PaddingFIFOQueue.__init__(capacity, dtypes, shapes, names=None, shared_name=None, name='padding_fifo_queue')` {#PaddingFIFOQueue.__init__}
-
-Creates a queue that dequeues elements in a first-in first-out order.
-
-A `PaddingFIFOQueue` has bounded capacity; supports multiple concurrent
-producers and consumers; and provides exactly-once delivery.
-
-A `PaddingFIFOQueue` holds a list of up to `capacity` elements. Each
-element is a fixed-length tuple of tensors whose dtypes are
-described by `dtypes`, and whose shapes are described by the `shapes`
-argument.
-
-The `shapes` argument must be specified; each component of a queue
-element must have the respective shape. Shapes of fixed
-rank but variable size are allowed by setting any shape dimension to None.
-In this case, the inputs' shape may vary along the given dimension, and
-`dequeue_many` will pad the given dimension with zeros up to the maximum
-shape of all elements in the given batch.
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
- the number of tensors in each queue element.
-* <b>`shapes`</b>: A list of `TensorShape` objects, with the same length as
- `dtypes`. Any dimension in the `TensorShape` containing value
- `None` is dynamic and allows values to be enqueued with
- variable size in that dimension.
-* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
-  with the same length as `dtypes`, or `None`. If specified, the dequeue
-  methods return a dictionary with the names as keys.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If shapes is not a list of shapes, or the lengths of dtypes
- and shapes do not match, or if names is specified and the lengths of
- dtypes and names do not match.
-
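-A sketch of the padding behavior, assuming a TensorFlow 1.x session runtime:
-two variable-length vectors are enqueued, and `dequeue_many` pads the dynamic
-dimension with zeros up to the longest element in the batch.
-
-```python
-import tensorflow as tf
-
-q = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[(None,)])
-enq1 = q.enqueue(([1, 2, 3],))
-enq2 = q.enqueue(([4],))
-batch = q.dequeue_many(2)
-
-with tf.Session() as sess:
-  sess.run(enq1)
-  sess.run(enq2)
-  print(sess.run(batch))  # [[1 2 3]
-                          #  [4 0 0]]
-```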
-
-- - -
-
-#### `tf.PaddingFIFOQueue.close(cancel_pending_enqueues=False, name=None)` {#PaddingFIFOQueue.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.dequeue(name=None)` {#PaddingFIFOQueue.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.dequeue_many(n, name=None)` {#PaddingFIFOQueue.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.dequeue_up_to(n, name=None)` {#PaddingFIFOQueue.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note:** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.dtypes` {#PaddingFIFOQueue.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.enqueue(vals, name=None)` {#PaddingFIFOQueue.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.enqueue_many(vals, name=None)` {#PaddingFIFOQueue.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.from_list(index, queues)` {#PaddingFIFOQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.name` {#PaddingFIFOQueue.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.names` {#PaddingFIFOQueue.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.queue_ref` {#PaddingFIFOQueue.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.shapes` {#PaddingFIFOQueue.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.PaddingFIFOQueue.size(name=None)` {#PaddingFIFOQueue.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
-
-- - -
-
-### `class tf.RandomShuffleQueue` {#RandomShuffleQueue}
-
-A queue implementation that dequeues elements in a random order.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-- - -
-
-#### `tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')` {#RandomShuffleQueue.__init__}
-
-Creates a queue that dequeues elements in a random order.
-
-A `RandomShuffleQueue` has bounded capacity; supports multiple
-concurrent producers and consumers; and provides exactly-once
-delivery.
-
-A `RandomShuffleQueue` holds a list of up to `capacity`
-elements. Each element is a fixed-length tuple of tensors whose
-dtypes are described by `dtypes`, and whose shapes are optionally
-described by the `shapes` argument.
-
-If the `shapes` argument is specified, each component of a queue
-element must have the respective fixed shape. If it is
-unspecified, different queue elements may have different shapes,
-but the use of `dequeue_many` is disallowed.
-
-The `min_after_dequeue` argument allows the caller to specify a
-minimum number of elements that will remain in the queue after a
-`dequeue` or `dequeue_many` operation completes, to ensure a
-minimum level of mixing of elements. This invariant is maintained
-by blocking those operations until sufficient elements have been
-enqueued. The `min_after_dequeue` argument is ignored after the
-queue has been closed.
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`min_after_dequeue`</b>: An integer (described above).
-* <b>`dtypes`</b>: A list of `DType` objects. The length of `dtypes` must equal
- the number of tensors in each queue element.
-* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects
- with the same length as `dtypes`, or `None`.
-* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
-  with the same length as `dtypes`, or `None`. If specified, the dequeue
-  methods return a dictionary with the names as keys.
-* <b>`seed`</b>: A Python integer. Used to create a random seed. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
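-A sketch of the mixing invariant, assuming a TensorFlow 1.x session runtime:
-with `min_after_dequeue=2`, dequeues succeed only while at least two elements
-would remain in the queue (until the queue is closed).
-
-```python
-import tensorflow as tf
-
-q = tf.RandomShuffleQueue(
-    capacity=10, min_after_dequeue=2, dtypes=[tf.int32], shapes=[()],
-    seed=42)
-enqueue = q.enqueue_many([[1, 2, 3, 4, 5]])
-dequeue = q.dequeue()
-
-with tf.Session() as sess:
-  sess.run(enqueue)
-  for _ in range(3):          # a fourth dequeue would block: only 2 left
-    print(sess.run(dequeue))  # elements come back in a random order
-```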
-
-- - -
-
-#### `tf.RandomShuffleQueue.close(cancel_pending_enqueues=False, name=None)` {#RandomShuffleQueue.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.dequeue(name=None)` {#RandomShuffleQueue.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.dequeue_many(n, name=None)` {#RandomShuffleQueue.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.dequeue_up_to(n, name=None)` {#RandomShuffleQueue.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note:** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.dtypes` {#RandomShuffleQueue.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.enqueue(vals, name=None)` {#RandomShuffleQueue.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.enqueue_many(vals, name=None)` {#RandomShuffleQueue.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.from_list(index, queues)` {#RandomShuffleQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.name` {#RandomShuffleQueue.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.names` {#RandomShuffleQueue.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.queue_ref` {#RandomShuffleQueue.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.shapes` {#RandomShuffleQueue.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.RandomShuffleQueue.size(name=None)` {#RandomShuffleQueue.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
-
-- - -
-
-### `class tf.PriorityQueue` {#PriorityQueue}
-
-A queue implementation that dequeues elements in prioritized order.
-
-See [`tf.QueueBase`](#QueueBase) for a description of the methods on
-this class.
-- - -
-
-#### `tf.PriorityQueue.__init__(capacity, types, shapes=None, names=None, shared_name=None, name='priority_queue')` {#PriorityQueue.__init__}
-
-Creates a queue that dequeues elements in a prioritized order.
-
-A `PriorityQueue` has bounded capacity; supports multiple concurrent
-producers and consumers; and provides exactly-once delivery.
-
-A `PriorityQueue` holds a list of up to `capacity` elements. Each
-element is a fixed-length tuple of tensors whose dtypes are
-described by `types`, and whose shapes are optionally described
-by the `shapes` argument.
-
-If the `shapes` argument is specified, each component of a queue
-element must have the respective fixed shape. If it is
-unspecified, different queue elements may have different shapes,
-but the use of `dequeue_many` is disallowed.
-
-Every enqueue to and dequeue from the `PriorityQueue` must include an additional
-tuple entry at the beginning: the `priority`. The priority must be
-an int64 scalar (for `enqueue`) or an int64 vector (for `enqueue_many`).
-
-##### Args:
-
-
-* <b>`capacity`</b>: An integer. The upper bound on the number of elements
- that may be stored in this queue.
-* <b>`types`</b>: A list of `DType` objects. The length of `types` must equal
-  the number of tensors in each queue element, excluding the leading
-  priority entry. The first tensor in each element is the priority,
-  which must be of type int64.
-* <b>`shapes`</b>: (Optional.) A list of fully-defined `TensorShape` objects,
- with the same length as `types`, or `None`.
-* <b>`names`</b>: (Optional.) A list of strings naming the components in the queue
-  with the same length as `types`, or `None`. If specified, the dequeue
-  methods return a dictionary with the names as keys.
-* <b>`shared_name`</b>: (Optional.) If non-empty, this queue will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the queue operation.
-
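-A minimal sketch, assuming a TensorFlow 1.x session runtime; note the extra
-leading int64 priority in each enqueued tuple, and that dequeues return the
-priority as the first component (smallest priority first):
-
-```python
-import tensorflow as tf
-
-q = tf.PriorityQueue(capacity=10, types=[tf.string], shapes=[()])
-enq_low = q.enqueue((tf.constant(2, tf.int64), "world"))
-enq_high = q.enqueue((tf.constant(1, tf.int64), "hello"))
-dequeue = q.dequeue()
-
-with tf.Session() as sess:
-  sess.run(enq_low)
-  sess.run(enq_high)
-  print(sess.run(dequeue))  # [1, b'hello'] -- smallest priority dequeues first
-  print(sess.run(dequeue))  # [2, b'world']
-```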
-
-- - -
-
-#### `tf.PriorityQueue.close(cancel_pending_enqueues=False, name=None)` {#PriorityQueue.close}
-
-Closes this queue.
-
-This operation signals that no more elements will be enqueued in
-the given queue. Subsequent `enqueue` and `enqueue_many`
-operations will fail. Subsequent `dequeue` and `dequeue_many`
-operations will continue to succeed if sufficient elements remain
-in the queue. Subsequent `dequeue` and `dequeue_many` operations
-that would block will fail immediately.
-
-If `cancel_pending_enqueues` is `True`, all pending requests will also
-be cancelled.
-
-##### Args:
-
-
-* <b>`cancel_pending_enqueues`</b>: (Optional.) A boolean, defaulting to
- `False` (described above).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that closes the queue.
-
-
-- - -
-
-#### `tf.PriorityQueue.dequeue(name=None)` {#PriorityQueue.dequeue}
-
-Dequeues one element from this queue.
-
-If the queue is empty when this operation executes, it will block
-until there is an element to dequeue.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue is empty, and there are no pending
-enqueue operations that can fulfill this request,
-`tf.errors.OutOfRangeError` will be raised. If the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PriorityQueue.dequeue_many(n, name=None)` {#PriorityQueue.dequeue_many}
-
-Dequeues and concatenates `n` elements from this queue.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. All of the
-components in the dequeued tuple will have size `n` in the 0th dimension.
-
-If the queue is closed and there are fewer than `n` elements left, then an
-`OutOfRange` exception is raised.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed, the queue contains fewer than `n` elements, and
-there are no pending enqueue operations that can fulfill this
-request, `tf.errors.OutOfRangeError` will be raised. If the
-session is [closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PriorityQueue.dequeue_up_to(n, name=None)` {#PriorityQueue.dequeue_up_to}
-
-Dequeues and concatenates `n` elements from this queue.
-
-**Note:** This operation is not supported by all queues. If a queue does not
-support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
-
-This operation concatenates queue-element component tensors along
-the 0th dimension to make a single component tensor. If the queue
-has not been closed, all of the components in the dequeued tuple
-will have size `n` in the 0th dimension.
-
-If the queue is closed and there are more than `0` but fewer than
-`n` elements remaining, then instead of raising a
-`tf.errors.OutOfRangeError` like [`dequeue_many`](#QueueBase.dequeue_many),
-fewer than `n` elements are returned immediately. If the queue is
-closed and there are `0` elements left in the queue, then a
-`tf.errors.OutOfRangeError` is raised just like in `dequeue_many`.
-Otherwise the behavior is identical to `dequeue_many`.
-
-##### Args:
-
-
-* <b>`n`</b>: A scalar `Tensor` containing the number of elements to dequeue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tuple of concatenated tensors that was dequeued.
-
-
-- - -
-
-#### `tf.PriorityQueue.dtypes` {#PriorityQueue.dtypes}
-
-The list of dtypes for each component of a queue element.
-
-
-- - -
-
-#### `tf.PriorityQueue.enqueue(vals, name=None)` {#PriorityQueue.enqueue}
-
-Enqueues one element to this queue.
-
-If the queue is full when this operation executes, it will block
-until the element has been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary containing
- the values to enqueue.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a new tuple of tensors to the queue.
-
-
-- - -
-
-#### `tf.PriorityQueue.enqueue_many(vals, name=None)` {#PriorityQueue.enqueue_many}
-
-Enqueues zero or more elements to this queue.
-
-This operation slices each component tensor along the 0th dimension to
-make multiple queue elements. All of the tensors in `vals` must have the
-same size in the 0th dimension.
-
-If the queue is full when this operation executes, it will block
-until all of the elements have been enqueued.
-
-At runtime, this operation may raise an error if the queue is
-[closed](#QueueBase.close) before or during its execution. If the
-queue is closed before this operation runs,
-`tf.errors.CancelledError` will be raised. If this operation is
-blocked, and either (i) the queue is closed by a close operation
-with `cancel_pending_enqueues=True`, or (ii) the session is
-[closed](../../api_docs/python/client.md#Session.close),
-`tf.errors.CancelledError` will be raised.
-
-##### Args:
-
-
-* <b>`vals`</b>: A tensor, a list or tuple of tensors, or a dictionary
- from which the queue elements are taken.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The operation that enqueues a batch of tuples of tensors to the queue.
-
-
-- - -
-
-#### `tf.PriorityQueue.from_list(index, queues)` {#PriorityQueue.from_list}
-
-Create a queue using the queue reference from `queues[index]`.
-
-##### Args:
-
-
-* <b>`index`</b>: An integer scalar tensor that determines the input that gets
- selected.
-* <b>`queues`</b>: A list of `QueueBase` objects.
-
-##### Returns:
-
- A `QueueBase` object.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: When `queues` is not a list of `QueueBase` objects,
- or when the data types of `queues` are not all the same.
-
-
-- - -
-
-#### `tf.PriorityQueue.name` {#PriorityQueue.name}
-
-The name of the underlying queue.
-
-
-- - -
-
-#### `tf.PriorityQueue.names` {#PriorityQueue.names}
-
-The list of names for each component of a queue element.
-
-
-- - -
-
-#### `tf.PriorityQueue.queue_ref` {#PriorityQueue.queue_ref}
-
-The underlying queue reference.
-
-
-- - -
-
-#### `tf.PriorityQueue.shapes` {#PriorityQueue.shapes}
-
-The list of shapes for each component of a queue element.
-
-
-- - -
-
-#### `tf.PriorityQueue.size(name=None)` {#PriorityQueue.size}
-
-Compute the number of elements in this queue.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar tensor containing the number of elements in this queue.
-
-
-
-- - -
-
-### `class tf.ConditionalAccumulatorBase` {#ConditionalAccumulatorBase}
-
-A conditional accumulator for aggregating gradients.
-
-Up-to-date gradients (i.e., gradients whose computation time step equals
-the accumulator's time step) are added to the accumulator.
-
-Extraction of the average gradient is blocked until the required number of
-gradients has been accumulated.
-- - -
-
-#### `tf.ConditionalAccumulatorBase.__init__(dtype, shape, accumulator_ref)` {#ConditionalAccumulatorBase.__init__}
-
-Creates a new ConditionalAccumulator.
-
-##### Args:
-
-
-* <b>`dtype`</b>: Datatype of the accumulated gradients.
-* <b>`shape`</b>: Shape of the accumulated gradients.
-* <b>`accumulator_ref`</b>: A handle to the conditional accumulator, created by
-  subclasses.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.accumulator_ref` {#ConditionalAccumulatorBase.accumulator_ref}
-
-The underlying accumulator reference.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.dtype` {#ConditionalAccumulatorBase.dtype}
-
-The datatype of the gradients accumulated by this accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.name` {#ConditionalAccumulatorBase.name}
-
-The name of the underlying accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.num_accumulated(name=None)` {#ConditionalAccumulatorBase.num_accumulated}
-
-Number of gradients that have currently been aggregated in accumulator.
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Number of accumulated gradients currently in accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulatorBase.set_global_step(new_global_step, name=None)` {#ConditionalAccumulatorBase.set_global_step}
-
-Sets the global time step of the accumulator.
-
-The operation logs a warning if we attempt to set it to a time step that is
-lower than the accumulator's own time step.
-
-##### Args:
-
-
-* <b>`new_global_step`</b>: Value of new time step. Can be a variable or a constant.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Operation that sets the accumulator's time step.
-
-
-
-- - -
-
-### `class tf.ConditionalAccumulator` {#ConditionalAccumulator}
-
-A conditional accumulator for aggregating gradients.
-
-Up-to-date gradients (i.e., gradients whose computation time step equals
-the accumulator's time step) are added to the accumulator.
-
-Extraction of the average gradient is blocked until the required number of
-gradients has been accumulated.
-- - -
-
-#### `tf.ConditionalAccumulator.__init__(dtype, shape=None, shared_name=None, name='conditional_accumulator')` {#ConditionalAccumulator.__init__}
-
-Creates a new ConditionalAccumulator.
-
-##### Args:
-
-
-* <b>`dtype`</b>: Datatype of the accumulated gradients.
-* <b>`shape`</b>: Shape of the accumulated gradients.
-* <b>`shared_name`</b>: Optional. If non-empty, this accumulator will be shared under
- the given name across multiple sessions.
-* <b>`name`</b>: Optional name for the accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.accumulator_ref` {#ConditionalAccumulator.accumulator_ref}
-
-The underlying accumulator reference.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.apply_grad(grad, local_step=0, name=None)` {#ConditionalAccumulator.apply_grad}
-
-Attempts to apply a gradient to the accumulator.
-
-The attempt is silently dropped if the gradient is stale, i.e., local_step
-is less than the accumulator's global time step.
-
-##### Args:
-
-
-* <b>`grad`</b>: The gradient tensor to be applied.
-* <b>`local_step`</b>: Time step at which the gradient was computed.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- The operation that (conditionally) applies a gradient to the accumulator.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If grad is of the wrong shape
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.dtype` {#ConditionalAccumulator.dtype}
-
-The datatype of the gradients accumulated by this accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.name` {#ConditionalAccumulator.name}
-
-The name of the underlying accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.num_accumulated(name=None)` {#ConditionalAccumulator.num_accumulated}
-
-Number of gradients that have currently been aggregated in accumulator.
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Number of accumulated gradients currently in accumulator.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.set_global_step(new_global_step, name=None)` {#ConditionalAccumulator.set_global_step}
-
-Sets the global time step of the accumulator.
-
-The operation logs a warning if we attempt to set to a time step that is
-lower than the accumulator's own time step.
-
-##### Args:
-
-
-* <b>`new_global_step`</b>: Value of new time step. Can be a variable or a constant
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Operation that sets the accumulator's time step.
-
-
-- - -
-
-#### `tf.ConditionalAccumulator.take_grad(num_required, name=None)` {#ConditionalAccumulator.take_grad}
-
-Attempts to extract the average gradient from the accumulator.
-
-The operation blocks until a sufficient number of gradients have been
-successfully applied to the accumulator.
-
-Once successful, the following actions are also triggered:
-- Counter of accumulated gradients is reset to 0.
-- Aggregated gradient is reset to 0 tensor.
-- Accumulator's internal time step is incremented by 1.
-
-##### Args:
-
-
-* <b>`num_required`</b>: Number of gradients that need to have been aggregated.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- A tensor holding the value of the average gradient.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If num_required < 1
-
-
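-For example, a minimal end-to-end sketch (TF 1.x graph/session style; the
-gradient values are illustrative):
-
-```python
-import tensorflow as tf
-
-acc = tf.ConditionalAccumulator(dtype=tf.float32, shape=[2])
-# Two gradients computed at the accumulator's current time step (0).
-apply_ops = [acc.apply_grad(tf.constant([1.0, 2.0]), local_step=0),
-             acc.apply_grad(tf.constant([3.0, 4.0]), local_step=0)]
-avg = acc.take_grad(num_required=2)  # blocks until 2 gradients are in
-
-with tf.Session() as sess:
-  sess.run(apply_ops)
-  print(sess.run(avg))  # ==> [2.0, 3.0]
-```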
-
-- - -
-
-### `class tf.SparseConditionalAccumulator` {#SparseConditionalAccumulator}
-
-A conditional accumulator for aggregating sparse gradients.
-
-Sparse gradients are represented by IndexedSlices.
-
-Up-to-date gradients (i.e., time step at which gradient was computed is
-equal to the accumulator's time step) are added to the accumulator.
-
-Extraction of the average gradient is blocked until the required number of
-gradients has been accumulated.
-
-Args:
- dtype: Datatype of the accumulated gradients.
- shape: Shape of the accumulated gradients.
- shared_name: Optional. If non-empty, this accumulator will be shared under
- the given name across multiple sessions.
- name: Optional name for the accumulator.
-- - -
-
-#### `tf.SparseConditionalAccumulator.__init__(dtype, shape=None, shared_name=None, name='sparse_conditional_accumulator')` {#SparseConditionalAccumulator.__init__}
-
-Creates a new SparseConditionalAccumulator.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.accumulator_ref` {#SparseConditionalAccumulator.accumulator_ref}
-
-The underlying accumulator reference.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.apply_grad(grad_indices, grad_values, grad_shape=None, local_step=0, name=None)` {#SparseConditionalAccumulator.apply_grad}
-
-Attempts to apply a sparse gradient to the accumulator.
-
-The attempt is silently dropped if the gradient is stale, i.e., local_step
-is less than the accumulator's global time step.
-
-A sparse gradient is represented by its indices, values and possibly empty
-or None shape. Indices must be a vector representing the locations of
-non-zero entries in the tensor. Values are the non-zero slices of the
-gradient, and must have the same first dimension as indices, i.e., the nnz
-represented by indices and values must be consistent. Shape, if not empty or
-None, must be consistent with the accumulator's shape (if also provided).
-
-##### Example:
-
-  A tensor [[0, 0], [0, 1], [2, 3]] can be represented as:
-
-* <b>`indices`</b>: [1,2]
-* <b>`values`</b>: [[0,1],[2,3]]
-* <b>`shape`</b>: [3, 2]
-
-##### Args:
-
-
-* <b>`grad_indices`</b>: Indices of the sparse gradient to be applied.
-* <b>`grad_values`</b>: Values of the sparse gradient to be applied.
-* <b>`grad_shape`</b>: Shape of the sparse gradient to be applied.
-* <b>`local_step`</b>: Time step at which the gradient was computed.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- The operation that (conditionally) applies a gradient to the accumulator.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If grad is of the wrong shape
-
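-A minimal sketch of applying the sparse gradient from the example above and
-then extracting it (TF 1.x session style; the dtypes shown are assumptions):
-
-```python
-import tensorflow as tf
-
-acc = tf.SparseConditionalAccumulator(dtype=tf.float32, shape=[3, 2])
-apply_op = acc.apply_grad(
-    grad_indices=tf.constant([1, 2], dtype=tf.int64),
-    grad_values=tf.constant([[0.0, 1.0], [2.0, 3.0]]),
-    grad_shape=tf.constant([3, 2], dtype=tf.int64))
-indices, values, shape = acc.take_grad(num_required=1)
-
-with tf.Session() as sess:
-  sess.run(apply_op)
-  print(sess.run([indices, values]))
-```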
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.apply_indexed_slices_grad(grad, local_step=0, name=None)` {#SparseConditionalAccumulator.apply_indexed_slices_grad}
-
-Attempts to apply a gradient to the accumulator.
-
-The attempt is silently dropped if the gradient is stale, i.e., local_step
-is less than the accumulator's global time step.
-
-##### Args:
-
-
-* <b>`grad`</b>: The gradient IndexedSlices to be applied.
-* <b>`local_step`</b>: Time step at which the gradient was computed.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- The operation that (conditionally) applies a gradient to the accumulator.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If grad is of the wrong shape
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.dtype` {#SparseConditionalAccumulator.dtype}
-
-The datatype of the gradients accumulated by this accumulator.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.name` {#SparseConditionalAccumulator.name}
-
-The name of the underlying accumulator.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.num_accumulated(name=None)` {#SparseConditionalAccumulator.num_accumulated}
-
-Number of gradients that have currently been aggregated in accumulator.
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Number of accumulated gradients currently in accumulator.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.set_global_step(new_global_step, name=None)` {#SparseConditionalAccumulator.set_global_step}
-
-Sets the global time step of the accumulator.
-
-The operation logs a warning if we attempt to set to a time step that is
-lower than the accumulator's own time step.
-
-##### Args:
-
-
-* <b>`new_global_step`</b>: Value of new time step. Can be a variable or a constant
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- Operation that sets the accumulator's time step.
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.take_grad(num_required, name=None)` {#SparseConditionalAccumulator.take_grad}
-
-Attempts to extract the average gradient from the accumulator.
-
-The operation blocks until a sufficient number of gradients have been
-successfully applied to the accumulator.
-
-Once successful, the following actions are also triggered:
-- Counter of accumulated gradients is reset to 0.
-- Aggregated gradient is reset to 0 tensor.
-- Accumulator's internal time step is incremented by 1.
-
-##### Args:
-
-
-* <b>`num_required`</b>: Number of gradients that need to have been aggregated.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- A tuple of indices, values, and shape representing the average gradient.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If num_required < 1
-
-
-- - -
-
-#### `tf.SparseConditionalAccumulator.take_indexed_slices_grad(num_required, name=None)` {#SparseConditionalAccumulator.take_indexed_slices_grad}
-
-Attempts to extract the average gradient from the accumulator.
-
-The operation blocks until a sufficient number of gradients have been
-successfully applied to the accumulator.
-
-Once successful, the following actions are also triggered:
-- Counter of accumulated gradients is reset to 0.
-- Aggregated gradient is reset to 0 tensor.
-- Accumulator's internal time step is incremented by 1.
-
-##### Args:
-
-
-* <b>`num_required`</b>: Number of gradients that need to have been aggregated.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- An IndexedSlices holding the value of the average gradient.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: If num_required < 1
-
-
-
-- - -
-
-### `tf.matching_files(pattern, name=None)` {#matching_files}
-
-Returns the set of files matching one or more glob patterns.
-
-Note that this routine only supports wildcard characters in the
-basename portion of the pattern, not in the directory portion.
-
-##### Args:
-
-
-* <b>`pattern`</b>: A `Tensor` of type `string`.
- Shell wildcard pattern(s). Scalar or vector of type string.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. A vector of matching filenames.
-
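-For example, a quick sketch (the directory and files are hypothetical):
-
-```python
-files = tf.matching_files("/tmp/data/*.csv")
-with tf.Session() as sess:
-  print(sess.run(files))  # e.g. [b'/tmp/data/a.csv' b'/tmp/data/b.csv']
-```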
-
-- - -
-
-### `tf.read_file(filename, name=None)` {#read_file}
-
-Reads and outputs the entire contents of the input filename.
-
-##### Args:
-
-
-* <b>`filename`</b>: A `Tensor` of type `string`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
-
-
-- - -
-
-### `tf.write_file(filename, contents, name=None)` {#write_file}
-
-Writes contents to the file at input filename, creating the file if it does not exist.
-
-##### Args:
-
-
-* <b>`filename`</b>: A `Tensor` of type `string`.
- scalar. The name of the file to which we write the contents.
-* <b>`contents`</b>: A `Tensor` of type `string`.
- scalar. The content to be written to the output file.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The created Operation.
-
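-A round-trip sketch with `tf.read_file` above (the path is hypothetical):
-
-```python
-write_op = tf.write_file("/tmp/example.txt", tf.constant("hello"))
-content = tf.read_file("/tmp/example.txt")
-with tf.Session() as sess:
-  sess.run(write_op)
-  print(sess.run(content))  # ==> b'hello'
-```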
-
-- - -
-
-### `tf.train.match_filenames_once(pattern, name=None)` {#match_filenames_once}
-
-Save the list of files matching pattern, so it is only computed once.
-
-##### Args:
-
-
-* <b>`pattern`</b>: A file pattern (glob), or 1D tensor of file patterns.
-* <b>`name`</b>: A name for the operations (optional).
-
-##### Returns:
-
- A variable that is initialized to the list of files matching the pattern(s).
-
-
-- - -
-
-### `tf.train.limit_epochs(tensor, num_epochs=None, name=None)` {#limit_epochs}
-
-Returns tensor `num_epochs` times and then raises an `OutOfRange` error.
-
-Note: creates local counter `epochs`. Use `local_variables_initializer()` to
-initialize local variables.
-
-##### Args:
-
-
-* <b>`tensor`</b>: Any `Tensor`.
-* <b>`num_epochs`</b>: A positive integer (optional). If specified, limits the number
- of steps the output tensor may be evaluated.
-* <b>`name`</b>: A name for the operations (optional).
-
-##### Returns:
-
- tensor or `OutOfRange`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `num_epochs` is invalid.
-
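-A minimal sketch of the epoch counting:
-
-```python
-limited = tf.train.limit_epochs(tf.constant([1, 2, 3]), num_epochs=2)
-with tf.Session() as sess:
-  sess.run(tf.local_variables_initializer())  # initializes the `epochs` counter
-  sess.run(limited)  # epoch 1
-  sess.run(limited)  # epoch 2
-  # A third evaluation raises tf.errors.OutOfRangeError.
-```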
-
-- - -
-
-### `tf.train.input_producer(input_tensor, element_shape=None, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, summary_name=None, name=None, cancel_op=None)` {#input_producer}
-
-Output the rows of `input_tensor` to a queue for an input pipeline.
-
-Note: if `num_epochs` is not `None`, this function creates local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: A tensor with the rows to produce. Must be at least
- one-dimensional. Must either have a fully-defined shape, or
- `element_shape` must be defined.
-* <b>`element_shape`</b>: (Optional.) A `TensorShape` representing the shape of a
- row of `input_tensor`, if it cannot be inferred.
-* <b>`num_epochs`</b>: (Optional.) An integer. If specified `input_producer` produces
- each row of `input_tensor` `num_epochs` times before generating an
- `OutOfRange` error. If not specified, `input_producer` can cycle through
- the rows of `input_tensor` an unlimited number of times.
-* <b>`shuffle`</b>: (Optional.) A boolean. If true, the rows are randomly shuffled
- within each epoch.
-* <b>`seed`</b>: (Optional.) An integer. The seed to use if `shuffle` is true.
-* <b>`capacity`</b>: (Optional.) The capacity of the queue to be used for buffering
- the input.
-* <b>`shared_name`</b>: (Optional.) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`summary_name`</b>: (Optional.) If set, a scalar summary for the current queue
- size will be generated, using this name as part of the tag.
-* <b>`name`</b>: (Optional.) A name for the queue.
-* <b>`cancel_op`</b>: (Optional.) Cancel op for the queue.
-
-##### Returns:
-
- A queue with the output rows. A `QueueRunner` for the queue is
- added to the current `QUEUE_RUNNER` collection of the current
- graph.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shape of the input cannot be inferred from the arguments.
-
-
-- - -
-
-### `tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#range_input_producer}
-
-Produces the integers from 0 to limit-1 in a queue.
-
-Note: if `num_epochs` is not `None`, this function creates local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
-##### Args:
-
-
-* <b>`limit`</b>: An int32 scalar tensor.
-* <b>`num_epochs`</b>: An integer (optional). If specified, `range_input_producer`
- produces each integer `num_epochs` times before generating an
- OutOfRange error. If not specified, `range_input_producer` can cycle
- through the integers an unlimited number of times.
-* <b>`shuffle`</b>: Boolean. If true, the integers are randomly shuffled within each
- epoch.
-* <b>`seed`</b>: An integer (optional). Seed used if shuffle == True.
-* <b>`capacity`</b>: An integer. Sets the queue capacity.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: A name for the operations (optional).
-
-##### Returns:
-
- A Queue with the output integers. A `QueueRunner` for the Queue
- is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
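-For example, a sketch of an index-based pipeline (queue runners must be
-started before the queue fills):
-
-```python
-index_queue = tf.train.range_input_producer(limit=5, shuffle=False)
-index = index_queue.dequeue()  # yields 0, 1, 2, 3, 4, 0, 1, ... once runners start
-```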
-
-- - -
-
-### `tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)` {#slice_input_producer}
-
-Produces a slice of each `Tensor` in `tensor_list`.
-
-Implemented using a Queue -- a `QueueRunner` for the Queue
-is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-##### Args:
-
-
-* <b>`tensor_list`</b>: A list of `Tensor` objects. Every `Tensor` in
- `tensor_list` must have the same size in the first dimension.
-* <b>`num_epochs`</b>: An integer (optional). If specified, `slice_input_producer`
- produces each slice `num_epochs` times before generating
- an `OutOfRange` error. If not specified, `slice_input_producer` can cycle
- through the slices an unlimited number of times.
-* <b>`shuffle`</b>: Boolean. If true, the slices are randomly shuffled within each
-  epoch.
-* <b>`seed`</b>: An integer (optional). Seed used if shuffle == True.
-* <b>`capacity`</b>: An integer. Sets the queue capacity.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: A name for the operations (optional).
-
-##### Returns:
-
- A list of tensors, one for each element of `tensor_list`. If the tensor
- in `tensor_list` has shape `[N, a, b, .., z]`, then the corresponding output
- tensor will have shape `[a, b, ..., z]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `slice_input_producer` produces nothing from `tensor_list`.
-
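-A sketch pairing features with labels (the values are illustrative):
-
-```python
-images = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3 examples of shape [2]
-labels = tf.constant([0, 1, 0])
-image_slice, label_slice = tf.train.slice_input_producer(
-    [images, labels], num_epochs=1, shuffle=False)
-# image_slice has shape [2]; label_slice is a scalar; both advance in lock step.
-```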
-
-- - -
-
-### `tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None, cancel_op=None)` {#string_input_producer}
-
-Output strings (e.g. filenames) to a queue for an input pipeline.
-
-Note: if `num_epochs` is not `None`, this function creates local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
-##### Args:
-
-
-* <b>`string_tensor`</b>: A 1-D string tensor with the strings to produce.
-* <b>`num_epochs`</b>: An integer (optional). If specified, `string_input_producer`
- produces each string from `string_tensor` `num_epochs` times before
- generating an `OutOfRange` error. If not specified,
- `string_input_producer` can cycle through the strings in `string_tensor`
- an unlimited number of times.
-* <b>`shuffle`</b>: Boolean. If true, the strings are randomly shuffled within each
- epoch.
-* <b>`seed`</b>: An integer (optional). Seed used if shuffle == True.
-* <b>`capacity`</b>: An integer. Sets the queue capacity.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: A name for the operations (optional).
-* <b>`cancel_op`</b>: Cancel op for the queue (optional).
-
-##### Returns:
-
- A queue with the output strings. A `QueueRunner` for the Queue
- is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the string_tensor is an empty Python list. At runtime,
-  will fail with an assertion if string_tensor becomes an empty tensor.
-
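-A runnable sketch (the filenames are hypothetical and never opened here):
-
-```python
-import tensorflow as tf
-
-queue = tf.train.string_input_producer(["a.csv", "b.csv"], shuffle=False)
-dequeue_op = queue.dequeue()
-with tf.Session() as sess:
-  coord = tf.train.Coordinator()
-  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
-  print(sess.run(dequeue_op))  # ==> b'a.csv'
-  coord.request_stop()
-  coord.join(threads)
-```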
-
-- - -
-
-### `tf.train.batch(tensors, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#batch}
-
-Creates batches of tensors in `tensors`.
-
-The argument `tensors` can be a list or a dictionary of tensors.
-The value returned by the function will be of the same type
-as `tensors`.
-
-This function is implemented using a queue. A `QueueRunner` for the
-queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-If `enqueue_many` is `False`, `tensors` is assumed to represent a single
-example. An input tensor with shape `[x, y, z]` will be output as a tensor
-with shape `[batch_size, x, y, z]`.
-
-If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of
-examples, where the first dimension is indexed by example, and all members of
-`tensors` should have the same size in the first dimension. If an input
-tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x,
-y, z]`. The `capacity` argument controls how long the prefetching is allowed
-to grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception; however, if this operation is used in your main thread
-you are responsible for catching it yourself.
-
-*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
-(i) the `shapes` argument is passed, or (ii) all of the tensors in
-`tensors` have fully-defined shapes. `ValueError` will be
-raised if neither of these conditions holds.
-
-If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
-tensors is known, but individual dimensions may have value `None`.
-In this case, for each enqueue the dimensions with value `None`
-may have a variable length; upon dequeue, the output tensors will be padded
-on the right to the maximum shape of the tensors in the current minibatch.
-For numbers, this padding takes value 0. For strings, this padding is
-the empty string. See `PaddingFIFOQueue` for more info.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queue is closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape` method, will have a first `Dimension` value of `None`, and
-operations that depend on a fixed batch_size will fail.
-
-Note: if `num_epochs` is not `None`, this function creates local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same types as `tensors` (except if
- the input is a list of one element, then it returns a tensor, not a list).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
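-For example, a sketch of batching a single-example pipeline, where `image`
-and `label` stand in for tensors produced by a reader:
-
-```python
-image = tf.random_uniform([28, 28, 1])  # placeholder for a decoded image
-label = tf.constant(1)                  # placeholder for its label
-image_batch, label_batch = tf.train.batch(
-    [image, label], batch_size=32, num_threads=2, capacity=128)
-# image_batch: shape [32, 28, 28, 1]; label_batch: shape [32].
-```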
-
-- - -
-
-### `tf.train.maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch}
-
-Conditionally creates batches of tensors based on `keep_input`.
-
-See docstring in `batch` for more details.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`keep_input`</b>: A `bool` Tensor. This tensor controls whether the input is
- added to the queue or not. If it is a scalar and evaluates `True`, then
- `tensors` are all added to the queue. If it is a vector and `enqueue_many`
- is `True`, then each example is added to the queue only if the
-  corresponding value in `keep_input` is `True`. This tensor essentially acts
- as a filtering mechanism.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensors`.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensors` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same types as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
-
-- - -
-
-### `tf.train.batch_join(tensors_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#batch_join}
-
-Runs a list of tensors to fill a queue to create batches of examples.
-
-The `tensors_list` argument is a list of tuples of tensors, or a list of
-dictionaries of tensors. Each element in the list is treated similarly
-to the `tensors` argument of `tf.train.batch()`.
-
-Enqueues a different list of tensors in different threads.
-Implemented using a queue -- a `QueueRunner` for the queue
-is added to the current `Graph`'s `QUEUE_RUNNER` collection.
-
-`len(tensors_list)` threads will be started,
-with thread `i` enqueuing the tensors from
-`tensors_list[i]`. `tensors_list[i1][j]` must match
-`tensors_list[i2][j]` in type and shape, except in the first
-dimension if `enqueue_many` is true.
-
-If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
-to represent a single example. An input tensor `x` will be output as a
-tensor with shape `[batch_size] + x.shape`.
-
-If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
-represent a batch of examples, where the first dimension is indexed
-by example, and all members of `tensors_list[i]` should have the
-same size in the first dimension. The slices of any input tensor
-`x` are treated as examples, and the output tensors will have shape
-`[batch_size] + x.shape[1:]`.
-
-The `capacity` argument controls how long the prefetching is allowed to
-grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception; however, if this operation is used in your main thread
-you are responsible for catching it yourself.
-
-*N.B.:* If `dynamic_pad` is `False`, you must ensure that either
-(i) the `shapes` argument is passed, or (ii) all of the tensors in
-`tensors_list` have fully-defined shapes. `ValueError` will be
-raised if neither of these conditions holds.
-
-If `dynamic_pad` is `True`, it is sufficient that the *rank* of the
-tensors is known, but individual dimensions may have value `None`.
-In this case, for each enqueue the dimensions with value `None`
-may have a variable length; upon dequeue, the output tensors will be padded
-on the right to the maximum shape of the tensors in the current minibatch.
-For numbers, this padding takes value 0. For strings, this padding is
-the empty string. See `PaddingFIFOQueue` for more info.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queue is closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape` method, will have a first `Dimension` value of `None`, and
-operations that depend on a fixed batch_size will fail.
-
-##### Args:
-
-
-* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
-* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list_list` is a single
- example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensor_list_list[i]`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same number and types as
- `tensors_list[i]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensor_list_list`.
-
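-A sketch with two enqueuing threads (the example tensors are illustrative):
-
-```python
-# Each tuple below is enqueued by its own thread; corresponding members must
-# match in type and shape across tuples.
-example_list = [(tf.random_uniform([4]), tf.constant(i)) for i in range(2)]
-feature_batch, label_batch = tf.train.batch_join(
-    example_list, batch_size=8, capacity=64)
-```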
-
-- - -
-
-### `tf.train.maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_batch_join}
-
-Runs a list of tensors to conditionally fill a queue to create batches.
-
-See docstring in `batch_join` for more details.
-
-##### Args:
-
-
-* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
-* <b>`keep_input`</b>: A `bool` Tensor. This tensor controls whether the input is
- added to the queue or not. If it is a scalar and evaluates `True`, then
- `tensors` are all added to the queue. If it is a vector and `enqueue_many`
- is `True`, then each example is added to the queue only if the
-  corresponding value in `keep_input` is `True`. This tensor essentially acts
- as a filtering mechanism.
-* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list_list` is a single
- example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensor_list_list[i]`.
-* <b>`dynamic_pad`</b>: Boolean. Allow variable dimensions in input shapes.
- The given dimensions are padded upon dequeue so that tensors within a
- batch have the same shapes.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same number and types as
- `tensors_list[i]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensor_list_list`.
-
-
-- - -
-
-### `tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#shuffle_batch}
-
-Creates batches by randomly shuffling tensors.
-
-This function adds the following to the current `Graph`:
-
-* A shuffling queue into which tensors from `tensors` are enqueued.
-* A `dequeue_many` operation to create batches from the queue.
-* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
- from `tensors`.
-
-If `enqueue_many` is `False`, `tensors` is assumed to represent a
-single example. An input tensor with shape `[x, y, z]` will be output
-as a tensor with shape `[batch_size, x, y, z]`.
-
-If `enqueue_many` is `True`, `tensors` is assumed to represent a
-batch of examples, where the first dimension is indexed by example,
-and all members of `tensors` should have the same size in the
-first dimension. If an input tensor has shape `[*, x, y, z]`, the
-output will have shape `[batch_size, x, y, z]`.
-
-The `capacity` argument controls how long the prefetching is allowed to
-grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception; however, if this operation is used in your main thread
-you are responsible for catching it yourself.
-
-For example:
-
-```python
-# Creates batches of 32 images and 32 labels.
-image_batch, label_batch = tf.train.shuffle_batch(
- [single_image, single_label],
- batch_size=32,
- num_threads=4,
- capacity=50000,
- min_after_dequeue=10000)
-```
-
-*N.B.:* You must ensure that either (i) the `shapes` argument is
-passed, or (ii) all of the tensors in `tensors` have
-fully-defined shapes. `ValueError` will be raised if neither of
-these conditions holds.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queue is closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape` method, will have a first `Dimension` value of `None`, and
-operations that depend on a fixed batch_size will fail.
-
-Note: if `num_epochs` is not `None`, this function creates local counter
-`epochs`. Use `local_variables_initializer()` to initialize local variables.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
- dequeue, used to ensure a level of mixing of elements.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensor_list`.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensor_list`.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
-  A list or dictionary of tensors with the same types as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
-
-- - -
-
-### `tf.train.maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch}
-
-Creates batches by randomly shuffling conditionally-enqueued tensors.
-
-See docstring in `shuffle_batch` for more details.
-
-##### Args:
-
-
-* <b>`tensors`</b>: The list or dictionary of tensors to enqueue.
-* <b>`batch_size`</b>: The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
- dequeue, used to ensure a level of mixing of elements.
-* <b>`keep_input`</b>: A `bool` Tensor. This tensor controls whether the input is
- added to the queue or not. If it is a scalar and evaluates `True`, then
- `tensors` are all added to the queue. If it is a vector and `enqueue_many`
- is `True`, then each example is added to the queue only if the
-  corresponding value in `keep_input` is `True`. This tensor essentially acts
- as a filtering mechanism.
-* <b>`num_threads`</b>: The number of threads enqueuing `tensor_list`.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list` is a single example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensor_list`.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (Optional) If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
-  A list or dictionary of tensors with the same types as `tensors`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors`.
-
-
-- - -
-
-### `tf.train.shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#shuffle_batch_join}
-
-Create batches by randomly shuffling tensors.
-
-The `tensors_list` argument is a list of tuples of tensors, or a list of
-dictionaries of tensors. Each element in the list is treated similarly
-to the `tensors` argument of `tf.train.shuffle_batch()`.
-
-This version enqueues a different list of tensors in different threads.
-It adds the following to the current `Graph`:
-
-* A shuffling queue into which tensors from `tensors_list` are enqueued.
-* A `dequeue_many` operation to create batches from the queue.
-* A `QueueRunner` to `QUEUE_RUNNER` collection, to enqueue the tensors
- from `tensors_list`.
-
-`len(tensors_list)` threads will be started, with thread `i` enqueuing
-the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
-`tensors_list[i2][j]` in type and shape, except in the first dimension if
-`enqueue_many` is true.
-
-If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
-to represent a single example. An input tensor with shape `[x, y, z]`
-will be output as a tensor with shape `[batch_size, x, y, z]`.
-
-If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
-represent a batch of examples, where the first dimension is indexed
-by example, and all members of `tensors_list[i]` should have the
-same size in the first dimension. If an input tensor has shape `[*, x,
-y, z]`, the output will have shape `[batch_size, x, y, z]`.
-
-The `capacity` argument controls how long the prefetching is allowed to
-grow the queues.
-
-The returned operation is a dequeue operation and will throw
-`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
-operation is feeding another input queue, its queue runner will catch
-this exception; however, if this operation is used in your main thread
-you are responsible for catching it yourself.
-
-If `allow_smaller_final_batch` is `True`, a smaller batch value than
-`batch_size` is returned when the queue is closed and there are not enough
-elements to fill the batch, otherwise the pending elements are discarded.
-In addition, all output tensors' static shapes, as accessed via the
-`get_shape` method, will have a first `Dimension` value of `None`, and
-operations that depend on a fixed batch_size will fail.
-
-##### Args:
-
-
-* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
-* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
- dequeue, used to ensure a level of mixing of elements.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list_list` is a single
- example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors_list[i]`.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same number and types as
- `tensors_list[i]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors_list`.
-
-
-- - -
-
-### `tf.train.maybe_shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, keep_input, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)` {#maybe_shuffle_batch_join}
-
-Create batches by randomly shuffling conditionally-enqueued tensors.
-
-See docstring in `shuffle_batch_join` for more details.
-
-##### Args:
-
-
-* <b>`tensors_list`</b>: A list of tuples or dictionaries of tensors to enqueue.
-* <b>`batch_size`</b>: An integer. The new batch size pulled from the queue.
-* <b>`capacity`</b>: An integer. The maximum number of elements in the queue.
-* <b>`min_after_dequeue`</b>: Minimum number of elements in the queue after a
- dequeue, used to ensure a level of mixing of elements.
-* <b>`keep_input`</b>: A `bool` Tensor. This tensor controls whether the input is
- added to the queue or not. If it is a scalar and evaluates `True`, then
- `tensors` are all added to the queue. If it is a vector and `enqueue_many`
- is `True`, then each example is added to the queue only if the
-  corresponding value in `keep_input` is `True`. This tensor essentially acts
- as a filtering mechanism.
-* <b>`seed`</b>: Seed for the random shuffling within the queue.
-* <b>`enqueue_many`</b>: Whether each tensor in `tensor_list_list` is a single
- example.
-* <b>`shapes`</b>: (Optional) The shapes for each example. Defaults to the
- inferred shapes for `tensors_list[i]`.
-* <b>`allow_smaller_final_batch`</b>: (Optional) Boolean. If `True`, allow the final
- batch to be smaller if there are insufficient items left in the queue.
-* <b>`shared_name`</b>: (optional). If set, this queue will be shared under the given
- name across multiple sessions.
-* <b>`name`</b>: (Optional) A name for the operations.
-
-##### Returns:
-
- A list or dictionary of tensors with the same number and types as
- `tensors_list[i]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `shapes` are not specified, and cannot be
- inferred from the elements of `tensors_list`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/math_ops.md b/tensorflow/g3doc/api_docs/python/math_ops.md
deleted file mode 100644
index c6e68117db..0000000000
--- a/tensorflow/g3doc/api_docs/python/math_ops.md
+++ /dev/null
@@ -1,3672 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Math
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Basic arithmetic operators. See the @{$python/math_ops} guide.
-
-- - -
-
-### `tf.add(x, y, name=None)` {#add}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.subtract(x, y, name=None)` {#subtract}
-
-Returns x - y element-wise.
-
-*NOTE*: `tf.subtract` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.multiply(x, y, name=None)` {#multiply}
-
-Returns x * y element-wise.
-
-*NOTE*: `tf.multiply` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.scalar_mul(scalar, x)` {#scalar_mul}
-
-Multiplies a scalar times a `Tensor` or `IndexedSlices` object.
-
-Intended for use in gradient code which might deal with `IndexedSlices`
-objects, which are easy to multiply by a scalar but more expensive to
-multiply with arbitrary tensors.
-
-##### Args:
-
-
-* <b>`scalar`</b>: A 0-D scalar `Tensor`. Must have known shape.
-* <b>`x`</b>: A `Tensor` or `IndexedSlices` to be scaled.
-
-##### Returns:
-
- `scalar * x` of the same type (`Tensor` or `IndexedSlices`) as `x`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if scalar is not a 0-D `scalar`.
-
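-For example:
-
-```python
-x = tf.constant([1.0, 2.0])
-tf.scalar_mul(3.0, x)  # ==> [3.0, 6.0]; `x` may also be an `IndexedSlices`
-```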
-
-- - -
-
-### `tf.div(x, y, name=None)` {#div}
-
-Divides x / y elementwise (using Python 2 division operator semantics).
-
-NOTE: Prefer using the Tensor division operator or tf.divide which obey Python
-division operator semantics.
-
-This function divides `x` and `y`, forcing Python 2.7 semantics. That is,
-if one of `x` or `y` is a float, then the result will be a float.
-Otherwise, the output will be an integer type. Flooring semantics are used
-for integer division.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-### `tf.divide(x, y, name=None)` {#divide}
-
-Computes Python style division of `x` by `y`.
-
-
-- - -
-
-### `tf.truediv(x, y, name=None)` {#truediv}
-
-Divides x / y elementwise (using Python 3 division operator semantics).
-
-NOTE: Prefer using the Tensor operator or tf.divide which obey Python
-division operator semantics.
-
-This function forces Python 3 division operator semantics where all integer
-arguments are cast to floating types first. This op is generated by normal
-`x / y` division in Python 3 and in Python 2.7 with
-`from __future__ import division`. If you want integer division that rounds
-down, use `x // y` or `tf.floordiv`.
-
-`x` and `y` must have the same numeric type. If the inputs are floating
-point, the output will have the same type. If the inputs are integral, the
-inputs are cast to `float32` for `int8` and `int16` and `float64` for `int32`
-and `int64` (matching the behavior of Numpy).
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of numeric type.
-* <b>`y`</b>: `Tensor` denominator of numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` evaluated in floating point.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` and `y` have different dtypes.
-
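-A worked contrast of `tf.div`, `tf.truediv`, and `tf.floordiv` (described
-below) on integer inputs; the comments show what each op evaluates to:
-
-```python
-a, b = tf.constant(7), tf.constant(2)
-tf.div(a, b)       # ==> 3    (Python 2 semantics: integer inputs floor)
-tf.truediv(a, b)   # ==> 3.5  (int32 inputs are cast to float64)
-tf.floordiv(a, b)  # ==> 3    (rounds toward the most negative integer)
-```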
-
-- - -
-
-### `tf.floordiv(x, y, name=None)` {#floordiv}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
-
-- - -
-
-### `tf.realdiv(x, y, name=None)` {#realdiv}
-
-Returns x / y element-wise for real types.
-
-If `x` and `y` are reals, this will return the floating-point division.
-
-*NOTE*: `Div` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.truncatediv(x, y, name=None)` {#truncatediv}
-
-Returns x / y element-wise for integer types.
-
-Truncation designates that negative numbers will round fractional quantities
-toward zero. I.e. -7 / 5 = -1. This matches C semantics but is different
-from Python semantics. See `FloorDiv` for a division function that matches
-Python semantics.
-
-*NOTE*: `TruncateDiv` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.floor_div(x, y, name=None)` {#floor_div}
-
-Returns x // y element-wise.
-
-*NOTE*: `FloorDiv` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.truncatemod(x, y, name=None)` {#truncatemod}
-
-Returns element-wise remainder of division. This emulates C semantics in
-that the result here is consistent with a truncating divide. E.g.
-`truncate(x / y) * y + truncate_mod(x, y) = x`.
-
-*NOTE*: `Mod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.floormod(x, y, name=None)` {#floormod}
-
-Returns element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
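-A worked example of the flooring vs. truncating semantics on a negative
-numerator; the results follow from the identities above:
-
-```python
-tf.floormod(tf.constant(-7), tf.constant(5))     # ==> 3   (sign follows y: Python)
-tf.truncatemod(tf.constant(-7), tf.constant(5))  # ==> -2  (sign follows x: C)
-```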
-
-- - -
-
-### `tf.mod(x, y, name=None)` {#mod}
-
-Returns element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.cross(a, b, name=None)` {#cross}
-
-Compute the pairwise cross product.
-
-`a` and `b` must be the same shape; they can either be simple 3-element vectors,
-or any shape where the innermost dimension is 3. In the latter case, each pair
-of corresponding 3-element vectors is cross-multiplied independently.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
- A tensor containing 3-element vectors.
-* <b>`b`</b>: A `Tensor`. Must have the same type as `a`.
- Another tensor, of same type and shape as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
- Pairwise cross product of the vectors in `a` and `b`.
-
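-For instance, the cross product of the x and y unit vectors is the z unit
-vector:
-
-```python
-a = tf.constant([1.0, 0.0, 0.0])
-b = tf.constant([0.0, 1.0, 0.0])
-tf.cross(a, b)  # ==> [0.0, 0.0, 1.0]
-```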
-
-- - -
-
-### `tf.add_n(inputs, name=None)` {#add_n}
-
-Adds all input tensors element-wise.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of `Tensor` objects, each with same shape and type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of same shape and type as the elements of `inputs`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `inputs` don't all have same shape and dtype or the shape
- cannot be inferred.
-
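-For example, summing three same-shaped tensors:
-
-```python
-tf.add_n([tf.constant([1, 2]), tf.constant([3, 4]), tf.constant([5, 6])])
-# ==> [9, 12]
-```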
-
-- - -
-
-### `tf.abs(x, name=None)` {#abs}
-
-Computes the absolute value of a tensor.
-
-Given a tensor of real numbers `x`, this operation returns a tensor
-containing the absolute value of each element in `x`. For example, if x is
-an input element and y is an output element, this operation computes
-\\(y = |x|\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor` of type `float32`, `float64`, `int32`, or
- `int64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` the same size and type as `x` with absolute
- values.
-
-
-- - -
-
-### `tf.negative(x, name=None)` {#negative}
-
-Computes numerical negative value element-wise.
-
-I.e., \\(y = -x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
-
-
-- - -
-
-### `tf.sign(x, name=None)` {#sign}
-
-Returns an element-wise indication of the sign of a number.
-
-`y = sign(x) = -1` if `x < 0`; 0 if `x == 0`; 1 if `x > 0`.
-
-For complex numbers, `y = sign(x) = x / |x|` if `x != 0`, otherwise `y = 0`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
-
-
-- - -
-
-### `tf.reciprocal(x, name=None)` {#reciprocal}
-
-Computes the reciprocal of x element-wise.
-
-I.e., \\(y = 1 / x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.square(x, name=None)` {#square}
-
-Computes square of x element-wise.
-
-I.e., \\(y = x * x = x^2\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.round(x, name=None)` {#round}
-
-Rounds the values of a tensor to the nearest integer, element-wise.
-
-Rounds half to even; also known as banker's rounding. If you want to round
-according to the current system rounding mode, use `tf::cint`.
-For example:
-
-```python
-# 'a' is [0.9, 2.5, 2.3, 1.5, -4.5]
-tf.round(a) ==> [ 1.0, 2.0, 2.0, 2.0, -4.0 ]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32` or `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of same shape and type as `x`.
-
-
-- - -
-
-### `tf.sqrt(x, name=None)` {#sqrt}
-
-Computes square root of x element-wise.
-
-I.e., \\(y = \sqrt{x} = x^{1/2}\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
-
-
-- - -
-
-### `tf.rsqrt(x, name=None)` {#rsqrt}
-
-Computes reciprocal of square root of x element-wise.
-
-I.e., \\(y = 1 / \sqrt{x}\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.pow(x, y, name=None)` {#pow}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-### `tf.exp(x, name=None)` {#exp}
-
-Computes exponential of x element-wise. \\(y = e^x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.expm1(x, name=None)` {#expm1}
-
-Computes exponential of x - 1 element-wise.
-
-I.e., \\(y = (\exp x) - 1\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.log(x, name=None)` {#log}
-
-Computes natural logarithm of x element-wise.
-
-I.e., \\(y = \log_e x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.log1p(x, name=None)` {#log1p}
-
-Computes natural logarithm of (1 + x) element-wise.
-
-I.e., \\(y = \log_e (1 + x)\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
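-`tf.expm1` and `tf.log1p` exist because computing `tf.exp(x) - 1` or
-`tf.log(1 + x)` directly loses precision for small `x`. An illustrative
-sketch (not from the original docs; values are approximate and assume
-evaluation in a `Session`):
-
-```python
-x = tf.constant(1e-10, dtype=tf.float64)
-tf.exp(x) - 1.0  # ==> ~1.000000082740371e-10  (cancellation error)
-tf.expm1(x)      # ==> ~1.00000000005e-10      (accurate)
-tf.log1p(x)      # ==> ~9.99999999995e-11      (accurate)
-```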
-
-- - -
-
-### `tf.ceil(x, name=None)` {#ceil}
-
-Returns element-wise the smallest integer not less than x.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.floor(x, name=None)` {#floor}
-
-Returns element-wise the largest integer not greater than x.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.maximum(x, y, name=None)` {#maximum}
-
-Returns the max of x and y (i.e. x > y ? x : y) element-wise.
-
-*NOTE*: `Maximum` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.minimum(x, y, name=None)` {#minimum}
-
-Returns the min of x and y (i.e. x < y ? x : y) element-wise.
-
-*NOTE*: `Minimum` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
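-An illustrative broadcasting sketch covering both `tf.maximum` and
-`tf.minimum` (not from the original docs; values in comments assume
-evaluation in a `Session`):
-
-```python
-x = tf.constant([[1., 4.], [3., 2.]])
-y = tf.constant(2.)  # a scalar broadcasts against the matrix
-tf.maximum(x, y)  # ==> [[2., 4.], [3., 2.]]
-tf.minimum(x, y)  # ==> [[1., 2.], [2., 2.]]
-```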
-
-- - -
-
-### `tf.cos(x, name=None)` {#cos}
-
-Computes cos of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.sin(x, name=None)` {#sin}
-
-Computes sin of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.lbeta(x, name='lbeta')` {#lbeta}
-
-Computes `ln(|Beta(x)|)`, reducing along the last dimension.
-
-Given one-dimensional `z = [z_0,...,z_{K-1}]`, we define
-
-```Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)```
-
-And for `n + 1` dimensional `x` with shape `[N1, ..., Nn, K]`, we define
-`lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|)`. In other words,
-the last dimension is treated as the `z` vector.
-
-Note that if `z = [u, v]`, then
-`Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt`, which defines the traditional
-bivariate beta function.
-
-##### Args:
-
-
-* <b>`x`</b>: A rank `n + 1` `Tensor` with type `float` or `double`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The logarithm of `|Beta(x)|` reducing along the last dimension.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `x` is empty with rank one or less.
-
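-An illustrative sketch using the definition above: `Beta([1, 1]) = 1` and
-`Beta([2, 2]) = 1/6`, so (values approximate; not from the original docs):
-
-```python
-x = tf.constant([[1., 1.], [2., 2.]])
-tf.lbeta(x)  # ==> [0., -1.7917595]  (i.e. log(1) and log(1/6))
-```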
-
-- - -
-
-### `tf.tan(x, name=None)` {#tan}
-
-Computes tan of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.acos(x, name=None)` {#acos}
-
-Computes acos of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.asin(x, name=None)` {#asin}
-
-Computes asin of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.atan(x, name=None)` {#atan}
-
-Computes atan of x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.lgamma(x, name=None)` {#lgamma}
-
-Computes the log of the absolute value of `Gamma(x)` element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.digamma(x, name=None)` {#digamma}
-
-Computes Psi, the derivative of Lgamma (the log of the absolute value of
-`Gamma(x)`), element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.erf(x, name=None)` {#erf}
-
-Computes the Gauss error function of `x` element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`,
- `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`.
-
-
-- - -
-
-### `tf.erfc(x, name=None)` {#erfc}
-
-Computes the complementary error function of `x` element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.squared_difference(x, y, name=None)` {#squared_difference}
-
-Returns (x - y)(x - y) element-wise.
-
-*NOTE*: `SquaredDifference` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.igamma(a, x, name=None)` {#igamma}
-
-Compute the lower regularized incomplete Gamma function `P(a, x)`.
-
-The lower regularized incomplete Gamma function is defined as:
-
-```
-P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)
-```
-where
-```
-gamma(a, x) = int_{0}^{x} t^{a-1} exp(-t) dt
-```
-is the lower incomplete Gamma function.
-
-Note that `Q(a, x)` (`Igammac`) above is the upper regularized incomplete
-Gamma function.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`x`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
-
-
-- - -
-
-### `tf.igammac(a, x, name=None)` {#igammac}
-
-Compute the upper regularized incomplete Gamma function `Q(a, x)`.
-
-The upper regularized incomplete Gamma function is defined as:
-
-```
-Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)
-```
-where
-```
-Gamma(a, x) = int_{x}^{\infty} t^{a-1} exp(-t) dt
-```
-is the upper incomplete Gamma function.
-
-Note that `P(a, x)` (`Igamma`) above is the lower regularized incomplete
-Gamma function.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`x`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
-
-
-- - -
-
-### `tf.zeta(x, q, name=None)` {#zeta}
-
-Compute the Hurwitz zeta function \\(\zeta(x, q)\\).
-
-The Hurwitz zeta function is defined as:
-
-```
-\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`q`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.polygamma(a, x, name=None)` {#polygamma}
-
-Compute the polygamma function \\(\psi^{(n)}(x)\\).
-
-The polygamma function is defined as:
-
-```
-\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x)
-```
-where \\(\psi(x)\\) is the digamma function.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`x`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
-
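-Since \\(\psi^{(0)}(x)\\) is the digamma function itself, a quick
-consistency sketch (illustrative only; values approximate, evaluated in a
-`Session`):
-
-```python
-x = tf.constant([1., 2., 3.])
-tf.polygamma(tf.zeros_like(x), x)  # ==> [-0.5772157, 0.4227843, 0.9227843]
-tf.digamma(x)                      # ==> the same values
-```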
-
-- - -
-
-### `tf.betainc(a, b, x, name=None)` {#betainc}
-
-Compute the regularized incomplete beta integral \\(I_x(a, b)\\).
-
-The regularized incomplete beta integral is defined as:
-
-```
-I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}
-```
-where
-
-```
-B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt
-```
-
-is the incomplete beta function and \\(B(a, b)\\) is the *complete*
-beta function.
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`b`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`x`</b>: A `Tensor`. Must have the same type as `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `a`.
-
-
-- - -
-
-### `tf.rint(x, name=None)` {#rint}
-
-Returns element-wise integer closest to x.
-
-If the result is midway between two representable values,
-the even representable value is chosen.
-For example:
-
-```
-rint(-1.5) ==> -2.0
-rint(0.5000001) ==> 1.0
-rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.diag(diagonal, name=None)` {#diag}
-
-Returns a diagonal tensor with given diagonal values.
-
-Given a `diagonal`, this operation returns a tensor with the `diagonal` and
-everything else padded with zeros. The diagonal is computed as follows:
-
-Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of
-rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
-
-`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.
-
-For example:
-
-```prettyprint
-# 'diagonal' is [1, 2, 3, 4]
-tf.diag(diagonal) ==> [[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]]
-```
-
-##### Args:
-
-
-* <b>`diagonal`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
- Rank k tensor where k is at most 3.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `diagonal`.
-
-
-- - -
-
-### `tf.diag_part(input, name=None)` {#diag_part}
-
-Returns the diagonal part of the tensor.
-
-This operation returns a tensor with the `diagonal` part
-of the `input`. The `diagonal` part is computed as follows:
-
-Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a
-tensor of rank `k` with dimensions `[D1,..., Dk]` where:
-
-`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.
-
-For example:
-
-```prettyprint
-# 'input' is [[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]]
-
-tf.diag_part(input) ==> [1, 2, 3, 4]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
- Rank k tensor where k is 2, 4, or 6.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. The extracted diagonal.
-
-
-- - -
-
-### `tf.trace(x, name=None)` {#trace}
-
-Compute the trace of a tensor `x`.
-
-`trace(x)` returns the sum along the main diagonal of each inner-most matrix
-in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then output
-is a tensor of rank `k-2` with dimensions `[I, J, K, ..., L]` where
-
-`output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])`
-
-For example:
-
-```python
-# 'x' is [[1, 2],
-# [3, 4]]
-tf.trace(x) ==> 5
-
-# 'x' is [[1,2,3],
-# [4,5,6],
-# [7,8,9]]
-tf.trace(x) ==> 15
-
-# 'x' is [[[1,2,3],
-# [4,5,6],
-# [7,8,9]],
-# [[-1,-2,-3],
-# [-4,-5,-6],
-# [-7,-8,-9]]]
-tf.trace(x) ==> [15,-15]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The trace of input tensor.
-
-
-- - -
-
-### `tf.transpose(a, perm=None, name='transpose')` {#transpose}
-
-Transposes `a`. Permutes the dimensions according to `perm`.
-
-The returned tensor's dimension i will correspond to the input dimension
-`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is
-the rank of the input tensor. Hence by default, this operation performs a
-regular matrix transpose on 2-D input Tensors.
-
-For example:
-
-```python
-# 'x' is [[1 2 3]
-# [4 5 6]]
-tf.transpose(x) ==> [[1 4]
- [2 5]
- [3 6]]
-
-# Equivalently
-tf.transpose(x, perm=[1, 0]) ==> [[1 4]
- [2 5]
- [3 6]]
-
-# 'perm' is more useful for n-dimensional tensors, for n > 2
-# 'x' is [[[1 2 3]
-# [4 5 6]]
-# [[7 8 9]
-# [10 11 12]]]
-# Take the transpose of the matrices in dimension-0
-tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4]
- [2 5]
- [3 6]]
-
- [[7 10]
- [8 11]
- [9 12]]]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor`.
-* <b>`perm`</b>: A permutation of the dimensions of `a`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A transposed `Tensor`.
-
-
-- - -
-
-### `tf.eye(num_rows, num_columns=None, batch_shape=None, dtype=tf.float32, name=None)` {#eye}
-
-Construct an identity matrix, or a batch of matrices.
-
-```python
-# Construct one identity matrix.
-tf.eye(2)
-==> [[1., 0.],
- [0., 1.]]
-
-# Construct a batch of 3 identity matrices, each 2 x 2.
-# batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.
-batch_identity = tf.eye(2, batch_shape=[3])
-
-# Construct one 2 x 3 "identity" matrix
-tf.eye(2, num_columns=3)
-==> [[ 1., 0., 0.],
- [ 0., 1., 0.]]
-```
-
-##### Args:
-
-
-* <b>`num_rows`</b>: Non-negative `int32` scalar `Tensor` giving the number of rows
- in each batch matrix.
-* <b>`num_columns`</b>: Optional non-negative `int32` scalar `Tensor` giving the number
- of columns in each batch matrix. Defaults to `num_rows`.
-* <b>`batch_shape`</b>: `int32` `Tensor`. If provided, returned `Tensor` will have
- leading batch dimensions of this shape.
-* <b>`dtype`</b>: The type of an element in the resulting `Tensor`
-* <b>`name`</b>: A name for this `Op`. Defaults to "eye".
-
-##### Returns:
-
- A `Tensor` of shape `batch_shape + [num_rows, num_columns]`
-
-
-- - -
-
-### `tf.matrix_diag(diagonal, name=None)` {#matrix_diag}
-
-Returns a batched diagonal tensor with given batched diagonal values.
-
-Given a `diagonal`, this operation returns a tensor with the `diagonal` and
-everything else padded with zeros. The diagonal is computed as follows:
-
-Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a
-tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:
-
-`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.
-
-For example:
-
-```prettyprint
-# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
-
-and diagonal.shape = (2, 4)
-
-tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]],
- [[5, 0, 0, 0]
- [0, 6, 0, 0]
- [0, 0, 7, 0]
- [0, 0, 0, 8]]]
-
-which has shape (2, 4, 4)
-```
-
-##### Args:
-
-
-* <b>`diagonal`</b>: A `Tensor`. Rank `k`, where `k >= 1`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `diagonal`.
- Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.
-
-
-- - -
-
-### `tf.matrix_diag_part(input, name=None)` {#matrix_diag_part}
-
-Returns the batched diagonal part of a batched tensor.
-
-This operation returns a tensor with the `diagonal` part
-of the batched `input`. The `diagonal` part is computed as follows:
-
-Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a
-tensor of rank `k - 1` with dimensions `[I, J, K, ..., min(M, N)]` where:
-
-`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`.
-
-The input must be at least a matrix.
-
-For example:
-
-```prettyprint
-# 'input' is [[[1, 0, 0, 0]
- [0, 2, 0, 0]
- [0, 0, 3, 0]
- [0, 0, 0, 4]],
- [[5, 0, 0, 0]
- [0, 6, 0, 0]
- [0, 0, 7, 0]
- [0, 0, 0, 8]]]
-
-and input.shape = (2, 4, 4)
-
-tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]
-
-which has shape (2, 4)
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Rank `k` tensor where `k >= 2`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- The extracted diagonal(s) having shape
- `diagonal.shape = input.shape[:-2] + [min(input.shape[-2:])]`.
-
-
-- - -
-
-### `tf.matrix_band_part(input, num_lower, num_upper, name=None)` {#matrix_band_part}
-
-Copy a tensor, setting everything outside a central band in each innermost matrix to zero.
-
-The `band` part is computed as follows:
-Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a
-tensor with the same shape where
-
-`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.
-
-The indicator function
-
-`in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) &&
- (num_upper < 0 || (n-m) <= num_upper)`.
-
-For example:
-
-```prettyprint
-# if 'input' is [[ 0, 1, 2, 3]
- [-1, 0, 1, 2]
- [-2, -1, 0, 1]
- [-3, -2, -1, 0]],
-
-tf.matrix_band_part(input, 1, -1) ==> [[ 0, 1, 2, 3]
- [-1, 0, 1, 2]
- [ 0, -1, 0, 1]
- [ 0, 0, -1, 0]],
-
-tf.matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]
- [-1, 0, 1, 0]
- [-2, -1, 0, 1]
- [ 0, -2, -1, 0]]
-```
-
-Useful special cases:
-
-```prettyprint
- tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
- tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
- tf.matrix_band_part(input, 0, 0) ==> Diagonal.
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Rank `k` tensor.
-* <b>`num_lower`</b>: A `Tensor` of type `int64`.
- 0-D tensor. Number of subdiagonals to keep. If negative, keep entire
- lower triangle.
-* <b>`num_upper`</b>: A `Tensor` of type `int64`.
- 0-D tensor. Number of superdiagonals to keep. If negative, keep
- entire upper triangle.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- Rank `k` tensor of the same shape as input. The extracted banded tensor.
-
-
-- - -
-
-### `tf.matrix_set_diag(input, diagonal, name=None)` {#matrix_set_diag}
-
-Returns a batched matrix tensor with new batched diagonal values.
-
-Given `input` and `diagonal`, this operation returns a tensor with the
-same shape and values as `input`, except for the main diagonal of the
-innermost matrices. These will be overwritten by the values in `diagonal`.
-
-The output is computed as follows:
-
-Assume `input` has `k+1` dimensions `[I, J, K, ..., M, N]` and `diagonal` has
-`k` dimensions `[I, J, K, ..., min(M, N)]`. Then the output is a
-tensor of rank `k+1` with dimensions `[I, J, K, ..., M, N]` where:
-
- * `output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`.
- * `output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Rank `k+1`, where `k >= 1`.
-* <b>`diagonal`</b>: A `Tensor`. Must have the same type as `input`.
- Rank `k`, where `k >= 1`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- Rank `k+1`, with `output.shape = input.shape`.
-
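-An illustrative sketch on a batch of zero matrices (not from the original
-docs):
-
-```python
-input = tf.zeros([2, 3, 3])
-diagonal = tf.constant([[1., 2., 3.], [4., 5., 6.]])
-tf.matrix_set_diag(input, diagonal)
-# ==> batch of two 3 x 3 matrices whose main diagonals are
-#     [1, 2, 3] and [4, 5, 6], with zeros elsewhere
-```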
-
-- - -
-
-### `tf.matrix_transpose(a, name='matrix_transpose')` {#matrix_transpose}
-
-Transposes last two dimensions of tensor `a`.
-
-For example:
-
-```python
-# Matrix with no batch dimension.
-# 'x' is [[1 2 3]
-# [4 5 6]]
-tf.matrix_transpose(x) ==> [[1 4]
- [2 5]
- [3 6]]
-
-# Matrix with two batch dimensions.
-# x.shape is [1, 2, 3, 4]
-# tf.matrix_transpose(x) is shape [1, 2, 4, 3]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: A `Tensor` with `rank >= 2`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A transposed batch matrix `Tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `a` is determined statically to have `rank < 2`.
-
-
-- - -
-
-### `tf.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)` {#matmul}
-
-Multiplies matrix `a` by matrix `b`, producing `a` * `b`.
-
-The inputs must be matrices (or tensors of rank > 2, representing batches of
-matrices), with matching inner dimensions, possibly after transposition.
-
-Both matrices must be of the same type. The supported types are:
-`float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.
-
-Either matrix can be transposed or adjointed (conjugated and transposed) on
-the fly by setting one of the corresponding flags to `True`. These are `False`
-by default.
-
-If one or both of the matrices contain a lot of zeros, a more efficient
-multiplication algorithm can be used by setting the corresponding
-`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.
-This optimization is only available for plain matrices (rank-2 tensors) with
-datatypes `bfloat16` or `float32`.
-
-For example:
-
-```python
-# 2-D tensor `a`
-a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
- [4. 5. 6.]]
-# 2-D tensor `b`
-b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
- [9. 10.]
- [11. 12.]]
-c = tf.matmul(a, b) => [[58 64]
- [139 154]]
-
-
-# 3-D tensor `a`
-a = tf.constant(np.arange(1, 13, dtype=np.int32),
- shape=[2, 2, 3]) => [[[ 1. 2. 3.]
- [ 4. 5. 6.]],
- [[ 7. 8. 9.]
- [10. 11. 12.]]]
-
-# 3-D tensor `b`
-b = tf.constant(np.arange(13, 25, dtype=np.int32),
- shape=[2, 3, 2]) => [[[13. 14.]
- [15. 16.]
- [17. 18.]],
- [[19. 20.]
- [21. 22.]
- [23. 24.]]]
-c = tf.matmul(a, b) => [[[ 94 100]
- [229 244]],
- [[508 532]
- [697 730]]]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`,
- `complex128` and rank > 1.
-* <b>`b`</b>: `Tensor` with same type and rank as `a`.
-* <b>`transpose_a`</b>: If `True`, `a` is transposed before multiplication.
-* <b>`transpose_b`</b>: If `True`, `b` is transposed before multiplication.
-* <b>`adjoint_a`</b>: If `True`, `a` is conjugated and transposed before
- multiplication.
-* <b>`adjoint_b`</b>: If `True`, `b` is conjugated and transposed before
- multiplication.
-* <b>`a_is_sparse`</b>: If `True`, `a` is treated as a sparse matrix.
-* <b>`b_is_sparse`</b>: If `True`, `b` is treated as a sparse matrix.
-* <b>`name`</b>: Name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same type as `a` and `b` where each inner-most matrix is
- the product of the corresponding matrices in `a` and `b`, e.g. if all
- transpose or adjoint attributes are `False`:
-
- `output`[..., i, j] = sum_k (`a`[..., i, k] * `b`[..., k, j]),
- for all indices i, j.
-
-
-* <b>`Note`</b>: This is the matrix product, not the element-wise product.
-
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If transpose_a and adjoint_a, or transpose_b and adjoint_b
- are both set to True.
-
-
-- - -
-
-### `tf.norm(tensor, ord='euclidean', axis=None, keep_dims=False, name=None)` {#norm}
-
-Computes the norm of vectors, matrices, and tensors.
-
-This function can compute 3 different matrix norms (Frobenius, 1-norm, and
-inf-norm) and up to 9218868437227405311 different vector norms.
-
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` of types `float32`, `float64`, `complex64`, `complex128`
-* <b>`ord`</b>: Order of the norm. Supported values are 'fro', 'euclidean', `0`,
-  `1`, `2`, `np.inf` and any positive real number yielding the corresponding
- p-norm. Default is 'euclidean' which is equivalent to Frobenius norm if
- `tensor` is a matrix and equivalent to 2-norm for vectors.
- Some restrictions apply,
- a) The Frobenius norm `fro` is not defined for vectors,
- b) If axis is a 2-tuple (matrix-norm), only 'euclidean', 'fro', `1`,
- `np.inf` are supported.
- See the description of `axis` on how to compute norms for a batch of
- vectors or matrices stored in a tensor.
-* <b>`axis`</b>: If `axis` is `None` (the default), the input is considered a vector
- and a single vector norm is computed over the entire set of values in the
- tensor, i.e. `norm(tensor, ord=ord)` is equivalent to
- `norm(reshape(tensor, [-1]), ord=ord)`.
- If `axis` is a Python integer, the input is considered a batch of vectors,
-  and `axis` determines the axis in `tensor` over which to compute vector
- norms.
- If `axis` is a 2-tuple of Python integers it is considered a batch of
- matrices and `axis` determines the axes in `tensor` over which to compute
- a matrix norm.
- Negative indices are supported. Example: If you are passing a tensor that
- can be either a matrix or a batch of matrices at runtime, pass
- `axis=[-2,-1]` instead of `axis=None` to make sure that matrix norms are
- computed.
-* <b>`keep_dims`</b>: If True, the axes indicated in `axis` are kept with size 1.
- Otherwise, the dimensions in `axis` are removed from the output shape.
-* <b>`name`</b>: The name of the op.
-
-##### Returns:
-
-
-* <b>`output`</b>: A `Tensor` of the same type as tensor, containing the vector or
- matrix norms. If `keep_dims` is True then the rank of output is equal to
-  the rank of `tensor`. Otherwise, if `axis` is `None` the output is a scalar,
- if `axis` is an integer, the rank of `output` is one less than the rank
- of `tensor`, if `axis` is a 2-tuple the rank of `output` is two less
- than the rank of `tensor`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `ord` or `axis` is invalid.
-
-@compatibility(numpy)
-Mostly equivalent to numpy.linalg.norm.
-Not supported: ord <= 0, 2-norm for matrices, nuclear norm.
-
-##### Other differences:
-
-  a) If axis is `None`, treats the flattened `tensor` as a vector
- regardless of rank.
- b) Explicitly supports 'euclidean' norm as the default, including for
- higher order tensors.
-@end_compatibility
-
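-An illustrative sketch (not from the original docs; values approximate and
-assume evaluation in a `Session`):
-
-```python
-v = tf.constant([3., 4.])
-tf.norm(v)         # ==> 5.0  (Euclidean / 2-norm)
-tf.norm(v, ord=1)  # ==> 7.0
-
-m = tf.constant([[1., 2.], [3., 4.]])
-tf.norm(m, ord='fro', axis=[-2, -1])  # ==> 5.477226  (sqrt(30))
-```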
-
-- - -
-
-### `tf.matrix_determinant(input, name=None)` {#matrix_determinant}
-
-Computes the determinant of one or more square matrices.
-
-The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
-form square matrices. The output is a tensor containing the determinants
-for all input submatrices `[..., :, :]`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- Shape is `[..., M, M]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. Shape is `[...]`.
-
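-For example (an illustrative sketch; the value in the comment assumes
-evaluation in a `Session`):
-
-```python
-x = tf.constant([[1., 2.], [3., 4.]])
-tf.matrix_determinant(x)  # ==> -2.0  (1*4 - 2*3)
-```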
-
-- - -
-
-### `tf.matrix_inverse(input, adjoint=None, name=None)` {#matrix_inverse}
-
-Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).
-
-The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
-form square matrices. The output is a tensor of the same shape as the input
-containing the inverse for all input submatrices `[..., :, :]`.
-
-The op uses LU decomposition with partial pivoting to compute the inverses.
-
-If a matrix is not invertible there is no guarantee what the op does. It
-may detect the condition and raise an exception or it may simply return a
-garbage result.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
- Shape is `[..., M, M]`.
-* <b>`adjoint`</b>: An optional `bool`. Defaults to `False`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
-
- @compatibility(numpy)
- Equivalent to np.linalg.inv
- @end_compatibility
-
-
-- - -
-
-### `tf.cholesky(input, name=None)` {#cholesky}
-
-Computes the Cholesky decomposition of one or more square matrices.
-
-The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
-form square matrices, with the same constraints as the single matrix Cholesky
-decomposition above. The output is a tensor of the same shape as the input
-containing the Cholesky decompositions for all input submatrices `[..., :, :]`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
- Shape is `[..., M, M]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.
-
-
-- - -
-
-### `tf.cholesky_solve(chol, rhs, name=None)` {#cholesky_solve}
-
-Solves systems of linear eqns `A X = RHS`, given Cholesky factorizations.
-
-```python
-# Solve 10 separate 2x2 linear systems:
-A = ... # shape 10 x 2 x 2
-RHS = ... # shape 10 x 2 x 1
-chol = tf.cholesky(A) # shape 10 x 2 x 2
-X = tf.cholesky_solve(chol, RHS) # shape 10 x 2 x 1
-# tf.matmul(A, X) ~ RHS
-X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]
-
-# Solve five linear systems (K = 5) for every member of the length 10 batch.
-A = ... # shape 10 x 2 x 2
-RHS = ... # shape 10 x 2 x 5
-...
-X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
-```
-
-##### Args:
-
-
-* <b>`chol`</b>: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`.
- Cholesky factorization of `A`, e.g. `chol = tf.cholesky(A)`.
- For that reason, only the lower triangular parts (including the diagonal)
- of the last two dimensions of `chol` are used. The strictly upper part is
- assumed to be zero and not accessed.
-* <b>`rhs`</b>: A `Tensor`, same type as `chol`, shape is `[..., M, K]`.
-* <b>`name`</b>: A name to give this `Op`. Defaults to `cholesky_solve`.
-
-##### Returns:
-
- Solution to `A x = rhs`, shape `[..., M, K]`.
-
-
-- - -
-
-### `tf.matrix_solve(matrix, rhs, adjoint=None, name=None)` {#matrix_solve}
-
-Solves systems of linear equations.
-
-`Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
-form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is
-a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output matrix
-satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.
-If `adjoint` is `True` then each output matrix satisfies
-`adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.
-
-##### Args:
-
-
-* <b>`matrix`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`.
- Shape is `[..., M, M]`.
-* <b>`rhs`</b>: A `Tensor`. Must have the same type as `matrix`.
- Shape is `[..., M, K]`.
-* <b>`adjoint`</b>: An optional `bool`. Defaults to `False`.
- Boolean indicating whether to solve with `matrix` or its (block-wise)
- adjoint.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.
-
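-An illustrative sketch with a diagonal system (not from the original docs;
-values in comments assume evaluation in a `Session`):
-
-```python
-matrix = tf.constant([[2., 0.], [0., 4.]])
-rhs = tf.constant([[2.], [8.]])
-x = tf.matrix_solve(matrix, rhs)  # ==> [[1.], [2.]]
-# Check: tf.matmul(matrix, x) recovers rhs.
-```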
-
-- - -
-
-### `tf.matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)` {#matrix_triangular_solve}
-
-Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.
-
-`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form
-square matrices. If `lower` is `True` then the strictly upper triangular part
-of each inner-most matrix is assumed to be zero and not accessed.
-If `lower` is False then the strictly lower triangular part of each inner-most
-matrix is assumed to be zero and not accessed.
-`rhs` is a tensor of shape `[..., M, K]`.
-
-The output is a tensor of shape `[..., M, K]`. If `adjoint` is
-`False` then the innermost matrices in `output` satisfy matrix equations
-`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.
-If `adjoint` is `True` then the innermost matrices in
-`output` satisfy matrix equations
-`adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.
-
-##### Args:
-
-
-* <b>`matrix`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`.
- Shape is `[..., M, M]`.
-* <b>`rhs`</b>: A `Tensor`. Must have the same type as `matrix`.
- Shape is `[..., M, K]`.
-* <b>`lower`</b>: An optional `bool`. Defaults to `True`.
- Boolean indicating whether the innermost matrices in `matrix` are
- lower or upper triangular.
-* <b>`adjoint`</b>: An optional `bool`. Defaults to `False`.
- Boolean indicating whether to solve with `matrix` or its (block-wise)
- adjoint.
-
- @compatibility(numpy)
-  Equivalent to scipy.linalg.solve_triangular
- @end_compatibility
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.
-
-
-- - -
-
-### `tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)` {#matrix_solve_ls}
-
-Solves one or more linear least-squares problems.
-
-`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions
-form `M`-by-`N` matrices. Rhs is a tensor of shape `[..., M, K]` whose
-inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a
-`Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `N`-by-`K`
-matrices that solve the equations
-`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares
-sense.
-
-Below we will use the following notation for each pair of matrix and
-right-hand sides in the batch:
-
-`matrix`=\\(A \in \Re^{m \times n}\\),
-`rhs`=\\(B \in \Re^{m \times k}\\),
-`output`=\\(X \in \Re^{n \times k}\\),
-`l2_regularizer`=\\(\lambda\\).
-
-If `fast` is `True`, then the solution is computed by solving the normal
-equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then
-\\(X = (A^T A + \lambda I)^{-1} A^T B\\), which solves the least-squares
-problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 +
-\lambda ||Z||_F^2\\). If \\(m \lt n\\) then `output` is computed as
-\\(X = A^T (A A^T + \lambda I)^{-1} B\\), which (for \\(\lambda = 0\\)) is
-the minimum-norm solution to the under-determined linear system, i.e.
-\\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2 \\), subject to
-\\(A Z = B\\). Notice that the fast path is only numerically stable when
-\\(A\\) is numerically full rank and has a condition number
-\\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\\) or \\(\lambda\\)
-is sufficiently large.
-
-If `fast` is `False` an algorithm based on the numerically robust complete
-orthogonal decomposition is used. This computes the minimum-norm
-least-squares solution, even when \\(A\\) is rank deficient. This path is
-typically 6-7 times slower than the fast path. If `fast` is `False` then
-`l2_regularizer` is ignored.
-
-##### Args:
-
-
-* <b>`matrix`</b>: `Tensor` of shape `[..., M, N]`.
-* <b>`rhs`</b>: `Tensor` of shape `[..., M, K]`.
-* <b>`l2_regularizer`</b>: 0-D `double` `Tensor`. Ignored if `fast=False`.
-* <b>`fast`</b>: bool. Defaults to `True`.
-* <b>`name`</b>: string, optional name of the operation.
-
-##### Returns:
-
-
-* <b>`output`</b>: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form
-  `N`-by-`K` matrices that solve the equations
- `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least
- squares sense.
-
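-An illustrative sketch of the overdetermined case with the default
-`fast=True` path (not from the original docs):
-
-```python
-# Three equations, one unknown: the least-squares solution is the mean.
-A = tf.constant([[1.], [1.], [1.]])  # shape [3, 1]
-b = tf.constant([[1.], [2.], [3.]])  # shape [3, 1]
-tf.matrix_solve_ls(A, b)  # ==> [[2.]]
-```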
-
-- - -
-
-### `tf.qr(input, full_matrices=None, name=None)` {#qr}
-
-Computes the QR decompositions of one or more matrices.
-
-Computes the QR decomposition of each inner matrix in `tensor` such that
-`tensor[..., :, :] = q[..., :, :] * r[..., :, :]`
-
-```prettyprint
-# a is a tensor.
-# q is a tensor of orthonormal matrices.
-# r is a tensor of upper triangular matrices.
-q, r = qr(a)
-q_full, r_full = qr(a, full_matrices=True)
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`.
- A tensor of shape `[..., M, N]` whose inner-most 2 dimensions
- form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`.
-* <b>`full_matrices`</b>: An optional `bool`. Defaults to `False`.
- If true, compute full-sized `q` and `r`. If false
- (the default), compute only the leading `P` columns of `q`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (q, r).
-
-* <b>`q`</b>: A `Tensor`. Has the same type as `input`. Orthonormal basis for range of `a`. If `full_matrices` is `False` then
- shape is `[..., M, P]`; if `full_matrices` is `True` then shape is
- `[..., M, M]`.
-* <b>`r`</b>: A `Tensor`. Has the same type as `input`. Triangular factor. If `full_matrices` is `False` then shape is
- `[..., P, N]`. If `full_matrices` is `True` then shape is `[..., M, N]`.
-
-
-- - -
-
-### `tf.self_adjoint_eig(tensor, name=None)` {#self_adjoint_eig}
-
-Computes the eigen decomposition of a batch of self-adjoint matrices.
-
-Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices
-in `tensor` such that
-`tensor[..., :, :] * v[..., :, i] = e[..., i] * v[..., :, i]`, for i=0...N-1.
-
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of
-  each inner matrix is referenced.
-* <b>`name`</b>: string, optional name of the operation.
-
-##### Returns:
-
-
-* <b>`e`</b>: Eigenvalues. Shape is `[..., N]`.
-* <b>`v`</b>: Eigenvectors. Shape is `[..., N, N]`. The columns of the innermost
-  matrices contain eigenvectors of the corresponding matrices in `tensor`.
-
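-An illustrative sketch (not from the original docs; this assumes the
-underlying routine returns eigenvalues in ascending order, and eigenvectors
-are determined only up to sign):
-
-```python
-x = tf.constant([[2., 1.], [1., 2.]])
-e, v = tf.self_adjoint_eig(x)
-# e ==> [1., 3.]
-# columns of v ==> [1, -1]/sqrt(2) and [1, 1]/sqrt(2), up to sign
-```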
-
-- - -
-
-### `tf.self_adjoint_eigvals(tensor, name=None)` {#self_adjoint_eigvals}
-
-Computes the eigenvalues of one or more self-adjoint matrices.
-
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` of shape `[..., N, N]`.
-* <b>`name`</b>: string, optional name of the operation.
-
-##### Returns:
-
-
-* <b>`e`</b>: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N`
- eigenvalues of `tensor[..., :, :]`.
-
-
-- - -
-
-### `tf.svd(tensor, full_matrices=False, compute_uv=True, name=None)` {#svd}
-
-Computes the singular value decompositions of one or more matrices.
-
-Computes the SVD of each inner matrix in `tensor` such that
-`tensor[..., :, :] = u[..., :, :] * diag(s[..., :]) * transpose(v[..., :, :])`.
-
-```prettyprint
-# a is a tensor.
-# s is a tensor of singular values.
-# u is a tensor of left singular vectors.
-# v is a tensor of right singular vectors.
-s, u, v = svd(a)
-s = svd(a, compute_uv=False)
-```
-
-##### Args:
-
-
-* <b>`tensor`</b>: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and
- `N`.
-* <b>`full_matrices`</b>: If true, compute full-sized `u` and `v`. If false
- (the default), compute only the leading `P` singular vectors.
- Ignored if `compute_uv` is `False`.
-* <b>`compute_uv`</b>: If `True` then left and right singular vectors will be
- computed and returned in `u` and `v`, respectively. Otherwise, only the
- singular values will be computed, which can be significantly faster.
-* <b>`name`</b>: string, optional name of the operation.
-
-##### Returns:
-
-
-* <b>`s`</b>: Singular values. Shape is `[..., P]`.
-* <b>`u`</b>: Left singular vectors. If `full_matrices` is `False` (default) then
- shape is `[..., M, P]`; if `full_matrices` is `True` then shape is
- `[..., M, M]`. Not returned if `compute_uv` is `False`.
-* <b>`v`</b>: Right singular vectors. If `full_matrices` is `False` (default) then
- shape is `[..., N, P]`. If `full_matrices` is `True` then shape is
- `[..., N, N]`. Not returned if `compute_uv` is `False`.
-
-@compatibility(numpy)
-Mostly equivalent to numpy.linalg.svd, except that the order of output
-arguments here is `s`, `u`, `v` when `compute_uv` is `True`, as opposed to
-`u`, `s`, `v` for numpy.linalg.svd.
-@end_compatibility
-
-
-- - -
-
-### `tf.tensordot(a, b, axes, name=None)` {#tensordot}
-
-Tensor contraction of a and b along specified axes.
-
-Tensordot (also known as tensor contraction) sums the product of elements
-from `a` and `b` over the indices specified by `a_axes` and `b_axes`.
-The lists `a_axes` and `b_axes` specify those pairs of axes along which to
-contract the tensors. The axis `a_axes[i]` of `a` must have the same dimension
-as axis `b_axes[i]` of `b` for all `i` in `range(0, len(a_axes))`. The lists
-`a_axes` and `b_axes` must have identical length and consist of unique
-integers that specify valid axes for each of the tensors.
-
-This operation corresponds to `numpy.tensordot(a, b, axes)`.
-
-Example 1: When `a` and `b` are matrices (order 2), the case `axes = 1`
-is equivalent to matrix multiplication.
-
-Example 2: When `a` and `b` are matrices (order 2), the case
-`axes = [[1], [0]]` is equivalent to matrix multiplication.
-
-Example 3: Suppose that \\(a_{ijk}\\) and \\(b_{lmn}\\) represent two
-tensors of order 3. Then, `contract(a, b, [0], [2])` is the order 4 tensor
-\\(c_{jklm}\\) whose entry
-corresponding to the indices \\((j,k,l,m)\\) is given by:
-
-\\( c_{jklm} = \sum_i a_{ijk} b_{lmi} \\).
-
-In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`.
-
-##### Args:
-
-
-* <b>`a`</b>: `Tensor` of type `float32` or `float64`.
-* <b>`b`</b>: `Tensor` with the same type as `a`.
-* <b>`axes`</b>: Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k].
- If axes is a scalar, sum over the last N axes of a and the first N axes
- of b in order.
- If axes is a list or `Tensor` the first and second row contain the set of
- unique integers specifying axes along which the contraction is computed,
- for `a` and `b`, respectively. The number of axes for `a` and `b` must
- be equal.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `a`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the shapes of `a`, `b`, and `axes` are incompatible.
-* <b>`IndexError`</b>: If the values in axes exceed the rank of the corresponding
- tensor.
-
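-An illustrative sketch of the shape arithmetic (not from the original docs):
-
-```python
-a = tf.ones([2, 3, 4])
-b = tf.ones([4, 5])
-tf.tensordot(a, b, axes=1)           # shape [2, 3, 5]; every entry is 4.0
-tf.tensordot(a, b, axes=[[2], [0]])  # the same contraction, written explicitly
-```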
-
-- - -
-
-### `tf.complex(real, imag, name=None)` {#complex}
-
-Converts two real numbers to a complex number.
-
-Given a tensor `real` representing the real part of a complex number, and a
-tensor `imag` representing the imaginary part of a complex number, this
-operation returns complex numbers elementwise of the form \\(a + bj\\), where
-*a* represents the `real` part and *b* represents the `imag` part.
-
-The input tensors `real` and `imag` must have the same shape.
-
-For example:
-
-```
-# tensor 'real' is [2.25, 3.25]
-# tensor `imag` is [4.75, 5.75]
-tf.complex(real, imag) ==> [2.25 + 4.75j, 3.25 + 5.75j]
-```
-
-##### Args:
-
-
-* <b>`real`</b>: A `Tensor`. Must be one of the following types: `float32`,
- `float64`.
-* <b>`imag`</b>: A `Tensor`. Must have the same type as `real`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64` or `complex128`.
-
-
-- - -
-
-### `tf.conj(x, name=None)` {#conj}
-
-Returns the complex conjugate of a complex number.
-
-Given a tensor `input` of complex numbers, this operation returns a tensor of
-complex numbers that are the complex conjugate of each element in `input`. The
-complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the
-real part and *b* is the imaginary part.
-
-The complex conjugate returned by this operation is of the form \\(a - bj\\).
-
-For example:
-
- # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
- tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
-
-If `x` is real, it is returned unchanged.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` to conjugate. Must have numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` that is the conjugate of `x` (with the same type).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `x` is not a numeric tensor.
-
-
-- - -
-
-### `tf.imag(input, name=None)` {#imag}
-
-Returns the imaginary part of a complex number.
-
-Given a tensor `input` of complex numbers, this operation returns a tensor of
-type `float32` or `float64` that is the imaginary part of each element in
-`input`. All elements in `input` must be complex numbers of the form \(a +
-bj\), where *a* is the real part and *b* is the imaginary part returned by
-this operation.
-
-For example:
-
-```
-# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
-tf.imag(input) ==> [4.75, 5.75]
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `complex64`,
- `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32` or `float64`.
-
-
-- - -
-
-### `tf.real(input, name=None)` {#real}
-
-Returns the real part of a complex number.
-
-Given a tensor `input` of complex numbers, this operation returns a tensor of
-type `float32` or `float64` that is the real part of each element in `input`.
-All elements in `input` must be complex numbers of the form \\(a + bj\\),
-where *a* is the real part returned by this operation and *b* is the
-imaginary part.
-
-For example:
-
-```
-# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
-tf.real(input) ==> [-2.25, 3.25]
-```
-
-If `input` is already real, it is returned unchanged.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must have numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `float32` or `float64`.
-
-
-- - -
-
-### `tf.fft(input, name=None)` {#fft}
-
-Compute the 1-dimensional discrete Fourier Transform over the inner-most dimension of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most
- dimension of `input` is replaced with its 1D Fourier Transform.
-
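-An illustrative round-trip sketch (not from the original docs): the DFT of a
-unit impulse is flat, and `tf.ifft` inverts `tf.fft` up to rounding:
-
-```python
-real = tf.constant([1., 0., 0., 0.])
-x = tf.complex(real, tf.zeros_like(real))  # complex64
-y = tf.fft(x)   # ==> [1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j]
-z = tf.ifft(y)  # ==> recovers x up to float rounding
-```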
-
-- - -
-
-### `tf.ifft(input, name=None)` {#ifft}
-
-Compute the inverse 1-dimensional discrete Fourier Transform over the inner-most dimension of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most
- dimension of `input` is replaced with its inverse 1D Fourier Transform.
-
-
-- - -
-
-### `tf.fft2d(input, name=None)` {#fft2d}
-
-Compute the 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 2
- dimensions of `input` are replaced with their 2D Fourier Transform.
-
- @compatibility(numpy)
-  Equivalent to np.fft.fft2
- @end_compatibility
-
-
-- - -
-
-### `tf.ifft2d(input, name=None)` {#ifft2d}
-
-Compute the inverse 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 2
- dimensions of `input` are replaced with their inverse 2D Fourier Transform.
-
- @compatibility(numpy)
-  Equivalent to np.fft.ifft2
- @end_compatibility
-
-
-- - -
-
-### `tf.fft3d(input, name=None)` {#fft3d}
-
-Compute the 3-dimensional discrete Fourier Transform over the inner-most 3 dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 3
- dimensions of `input` are replaced with their 3D Fourier Transform.
-
- @compatibility(numpy)
-  Equivalent to np.fft.fftn
- @end_compatibility
-
-
-- - -
-
-### `tf.ifft3d(input, name=None)` {#ifft3d}
-
-Compute the inverse 3-dimensional discrete Fourier Transform over the inner-most 3 dimensions of `input`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `complex64`. A complex64 tensor.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `complex64`.
- A complex64 tensor of the same shape as `input`. The inner-most 3
- dimensions of `input` are replaced with their inverse 3D Fourier Transform.
-
- @compatibility(numpy)
-  Equivalent to np.fft.ifftn
- @end_compatibility
-
-
-- - -
-
-### `tf.reduce_sum(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_sum}
-
-Computes the sum of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-For example:
-
-```python
-# 'x' is [[1, 1, 1]
-# [1, 1, 1]]
-tf.reduce_sum(x) ==> 6
-tf.reduce_sum(x, 0) ==> [2, 2, 2]
-tf.reduce_sum(x, 1) ==> [3, 3]
-tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
-tf.reduce_sum(x, [0, 1]) ==> 6
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.sum
-@end_compatibility
-
-
-- - -
-
-### `tf.reduce_prod(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_prod}
-
-Computes the product of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.prod
-@end_compatibility
-
-
-- - -
-
-### `tf.reduce_min(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_min}
-
-Computes the minimum of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.min
-@end_compatibility
-
-
-- - -
-
-### `tf.reduce_max(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_max}
-
-Computes the maximum of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.max
-@end_compatibility
-
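-For example (an illustrative sketch covering `tf.reduce_max` and
-`tf.reduce_min`, mirroring the `tf.reduce_sum` example above):
-
-```python
-# 'x' is [[1., 4.]
-#         [3., 2.]]
-tf.reduce_max(x)     # ==> 4.0
-tf.reduce_max(x, 0)  # ==> [3., 4.]
-tf.reduce_min(x, 1)  # ==> [1., 2.]
-```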
-
-- - -
-
-### `tf.reduce_mean(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_mean}
-
-Computes the mean of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-For example:
-
-```python
-# 'x' is [[1., 1.]
-# [2., 2.]]
-tf.reduce_mean(x) ==> 1.5
-tf.reduce_mean(x, 0) ==> [1.5, 1.5]
-tf.reduce_mean(x, 1) ==> [1., 2.]
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.mean
-@end_compatibility
-
-
-- - -
-
-### `tf.reduce_all(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_all}
-
-Computes the "logical and" of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-For example:
-
-```python
-# 'x' is [[True, True]
-# [False, False]]
-tf.reduce_all(x) ==> False
-tf.reduce_all(x, 0) ==> [False, False]
-tf.reduce_all(x, 1) ==> [True, False]
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The boolean tensor to reduce.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.all
-@end_compatibility
-
-
-- - -
-
-### `tf.reduce_any(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_any}
-
-Computes the "logical or" of elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-For example:
-
-```python
-# 'x' is [[True, True]
-# [False, False]]
-tf.reduce_any(x) ==> True
-tf.reduce_any(x, 0) ==> [True, True]
-tf.reduce_any(x, 1) ==> [True, False]
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The boolean tensor to reduce.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-@compatibility(numpy)
-Equivalent to np.any
-@end_compatibility
-
-
-- - -
-
-### `tf.reduce_logsumexp(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)` {#reduce_logsumexp}
-
-Computes log(sum(exp(elements across dimensions of a tensor))).
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-This function is more numerically stable than log(sum(exp(input))). It avoids
-overflows caused by taking the exp of large inputs and underflows caused by
-taking the log of small inputs.
-
-For example:
-
-```python
-# 'x' is [[0, 0, 0]
-#         [0, 0, 0]]
-tf.reduce_logsumexp(x) ==> log(6)
-tf.reduce_logsumexp(x, 0) ==> [log(2), log(2), log(2)]
-tf.reduce_logsumexp(x, 1) ==> [log(3), log(3)]
-tf.reduce_logsumexp(x, 1, keep_dims=True) ==> [[log(3)], [log(3)]]
-tf.reduce_logsumexp(x, [0, 1]) ==> log(6)
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor.
-
-
-- - -
-
-### `tf.count_nonzero(input_tensor, axis=None, keep_dims=False, dtype=tf.int64, name=None, reduction_indices=None)` {#count_nonzero}
-
-Computes number of nonzero elements across dimensions of a tensor.
-
-Reduces `input_tensor` along the dimensions given in `axis`.
-Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each
-entry in `axis`. If `keep_dims` is true, the reduced dimensions
-are retained with length 1.
-
-If `axis` has no entries, all dimensions are reduced, and a
-tensor with a single element is returned.
-
-**NOTE** Floating point comparison to zero is done by exact floating point
-equality check. Small values are **not** rounded to zero for purposes of
-the nonzero check.
-
-For example:
-
-```python
-# 'x' is [[0, 1, 0]
-# [1, 1, 0]]
-tf.count_nonzero(x) ==> 3
-tf.count_nonzero(x, 0) ==> [1, 2, 0]
-tf.count_nonzero(x, 1) ==> [1, 2]
-tf.count_nonzero(x, 1, keep_dims=True) ==> [[1], [2]]
-tf.count_nonzero(x, [0, 1]) ==> 3
-```
-
-##### Args:
-
-
-* <b>`input_tensor`</b>: The tensor to reduce. Should be of numeric type, or `bool`.
-* <b>`axis`</b>: The dimensions to reduce. If `None` (the default),
- reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retains reduced dimensions with length 1.
-* <b>`dtype`</b>: The output dtype; defaults to `tf.int64`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`reduction_indices`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- The reduced tensor (number of nonzero values).
-
-
-- - -
-
-### `tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)` {#accumulate_n}
-
-Returns the element-wise sum of a list of tensors.
-
-Optionally, pass `shape` and `tensor_dtype` for shape and type checking,
-otherwise, these are inferred.
-
-NOTE: This operation is not differentiable and cannot be used if inputs depend
-on trainable variables. Please use `tf.add_n` for such cases.
-
-For example:
-
-```python
-# tensor 'a' is [[1, 2], [3, 4]]
-# tensor `b` is [[5, 0], [0, 6]]
-tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]
-
-# Explicitly pass shape and type
-tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
- ==> [[7, 4], [6, 14]]
-```
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of `Tensor` objects, each with same shape and type.
-* <b>`shape`</b>: Shape of elements of `inputs`.
-* <b>`tensor_dtype`</b>: The type of `inputs`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of same shape and type as the elements of `inputs`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `inputs` don't all have same shape and dtype or the shape
- cannot be inferred.
-
-
-- - -
-
-### `tf.einsum(equation, *inputs)` {#einsum}
-
-A generalized contraction between tensors of arbitrary dimension.
-
-This function returns a tensor whose elements are defined by `equation`,
-which is written in a shorthand form inspired by the Einstein summation
-convention. As an example, consider multiplying two matrices
-A and B to form a matrix C. The elements of C are given by:
-
-```
- C[i,k] = sum_j A[i,j] * B[j,k]
-```
-
-The corresponding `equation` is:
-
-```
- ij,jk->ik
-```
-
-In general, the `equation` is obtained from the more familiar element-wise
-equation by
- 1. removing variable names, brackets, and commas,
- 2. replacing "*" with ",",
- 3. dropping summation signs, and
- 4. moving the output to the right, and replacing "=" with "->".
-
-Many common operations can be expressed in this way. For example:
-
-```python
-# Matrix multiplication
->>> einsum('ij,jk->ik', m0, m1) # output[i,k] = sum_j m0[i,j] * m1[j, k]
-
-# Dot product
->>> einsum('i,i->', u, v) # output = sum_i u[i]*v[i]
-
-# Outer product
->>> einsum('i,j->ij', u, v) # output[i,j] = u[i]*v[j]
-
-# Transpose
->>> einsum('ij->ji', m) # output[j,i] = m[i,j]
-
-# Batch matrix multiplication
->>> einsum('aij,ajk->aik', s, t) # out[a,i,k] = sum_j s[a,i,j] * t[a, j, k]
-```
-
-This function behaves like `numpy.einsum`, but does not support:
-* Ellipses (subscripts like `ij...,jk...->ik...`)
-* Subscripts where an axis appears more than once for a single input
- (e.g. `ijj,k->ik`).
-* Subscripts that are summed across multiple inputs (e.g., `ij,ij,jk->ik`).
-
-##### Args:
-
-
-* <b>`equation`</b>: a `str` describing the contraction, in the same format as
- `numpy.einsum`.
-* <b>`inputs`</b>: the inputs to contract (each one a `Tensor`), whose shapes should
- be consistent with `equation`.
-
-##### Returns:
-
- The contracted `Tensor`, with shape determined by `equation`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If
- - the format of `equation` is incorrect,
- - the number of inputs implied by `equation` does not match `len(inputs)`,
- - an axis appears in the output subscripts but not in any of the inputs,
- - the number of dimensions of an input differs from the number of
- indices in its subscript, or
- - the input shapes are inconsistent along a particular axis.
-
-
-- - -
-
-### `tf.cumsum(x, axis=0, exclusive=False, reverse=False, name=None)` {#cumsum}
-
-Compute the cumulative sum of the tensor `x` along `axis`.
-
-By default, this op performs an inclusive cumsum, which means that the first
-element of the input is identical to the first element of the output:
-```prettyprint
-tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]
-```
-
-By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed
-instead:
-```prettyprint
-tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]
-```
-
-By setting the `reverse` kwarg to `True`, the cumsum is performed in the
-opposite direction:
-```prettyprint
-tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]
-```
-This is more efficient than using separate `tf.reverse` ops.
-
-The `reverse` and `exclusive` kwargs can also be combined:
-```prettyprint
-tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`,
- `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
- `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`axis`</b>: A `Tensor` of type `int32` (default: 0).
-* <b>`exclusive`</b>: A `bool` (default: False). If `True`, performs an exclusive cumsum.
-* <b>`reverse`</b>: A `bool` (default: False).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.cumprod(x, axis=0, exclusive=False, reverse=False, name=None)` {#cumprod}
-
-Compute the cumulative product of the tensor `x` along `axis`.
-
-By default, this op performs an inclusive cumprod, which means that the first
-element of the input is identical to the first element of the output:
-```prettyprint
-tf.cumprod([a, b, c]) ==> [a, a * b, a * b * c]
-```
-
-By setting the `exclusive` kwarg to `True`, an exclusive cumprod is performed
-instead:
-```prettyprint
-tf.cumprod([a, b, c], exclusive=True) ==> [1, a, a * b]
-```
-
-By setting the `reverse` kwarg to `True`, the cumprod is performed in the
-opposite direction:
-```prettyprint
-tf.cumprod([a, b, c], reverse=True) ==> [a * b * c, b * c, c]
-```
-This is more efficient than using separate `tf.reverse` ops.
-
-The `reverse` and `exclusive` kwargs can also be combined:
-```prettyprint
-tf.cumprod([a, b, c], exclusive=True, reverse=True) ==> [b * c, c, 1]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`,
- `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`,
- `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`axis`</b>: A `Tensor` of type `int32` (default: 0).
-* <b>`exclusive`</b>: A `bool` (default: False). If `True`, performs an exclusive cumprod.
-* <b>`reverse`</b>: A `bool` (default: False).
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-### `tf.segment_sum(data, segment_ids, name=None)` {#segment_sum}
-
-Computes the sum along segments of a tensor.
-
-Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation)
-for an explanation of segments.
-
-Computes a tensor such that
-\\(output_i = \sum_j data_j\\) where sum is over `j` such
-that `segment_ids[j] == i`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentSum.png" alt>
-</div>
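-
-For example, with an illustrative constant `c`:
-
-```prettyprint
-c = tf.constant([[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]])
-tf.segment_sum(c, tf.constant([0, 0, 1]))
-  ==> [[5 5 5 5]
-       [5 6 7 8]]
-```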
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose size is equal to the size of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
-
-- - -
-
-### `tf.segment_prod(data, segment_ids, name=None)` {#segment_prod}
-
-Computes the product along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Computes a tensor such that
-\\(output_i = \prod_j data_j\\) where the product is over `j` such
-that `segment_ids[j] == i`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentProd.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose size is equal to the size of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
-
-- - -
-
-### `tf.segment_min(data, segment_ids, name=None)` {#segment_min}
-
-Computes the minimum along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Computes a tensor such that
-\\(output_i = \min_j(data_j)\\) where `min` is over `j` such
-that `segment_ids[j] == i`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentMin.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose size is equal to the size of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
-
-- - -
-
-### `tf.segment_max(data, segment_ids, name=None)` {#segment_max}
-
-Computes the maximum along segments of a tensor.
-
-Read [the section on Segmentation](../../api_docs/python/math_ops.md#segmentation)
-for an explanation of segments.
-
-Computes a tensor such that
-\\(output_i = \max_j(data_j)\\) where `max` is over `j` such
-that `segment_ids[j] == i`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentMax.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose size is equal to the size of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
-
-- - -
-
-### `tf.segment_mean(data, segment_ids, name=None)` {#segment_mean}
-
-Computes the mean along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Computes a tensor such that
-\\(output_i = \frac{\sum_j data_j}{N}\\) where `mean` is
-over `j` such that `segment_ids[j] == i` and `N` is the total number of
-values summed.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/SegmentMean.png" alt>
-</div>
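-
-For example, with an illustrative constant `c`:
-
-```prettyprint
-c = tf.constant([[1., 2., 3., 4.], [3., 2., 1., 0.]])
-tf.segment_mean(c, tf.constant([0, 0]))
-  ==> [[2. 2. 2. 2.]]
-```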
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose size is equal to the size of `data`'s
- first dimension. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
-
-- - -
-
-### `tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None)` {#unsorted_segment_sum}
-
-Computes the sum along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Computes a tensor such that
-`output[i] = sum_{j...} data[j...]` where the sum is over tuples `j...` such
-that `segment_ids[j...] == i`. Unlike `SegmentSum`, `segment_ids`
-need not be sorted and need not cover all values in the full
-range of valid values.
-
-If the sum is empty for a given segment ID `i`, `output[i] = 0`.
-
-`num_segments` should equal the number of distinct segment IDs.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/UnsortedSegmentSum.png" alt>
-</div>
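-
-For example, with illustrative values (note that `segment_ids` need not be
-sorted, and segment `1` is empty):
-
-```prettyprint
-# 'data' is [1, 2, 3, 4]
-# 'segment_ids' is [0, 2, 2, 0]
-tf.unsorted_segment_sum(data, segment_ids, num_segments=3)
-  ==> [5, 0, 5]
-```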
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor whose shape is a prefix of `data.shape`.
-* <b>`num_segments`</b>: A `Tensor` of type `int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for the first `segment_ids.rank`
- dimensions, which are replaced with a single dimension which has size
- `num_segments`.
-
-
-- - -
-
-### `tf.unsorted_segment_max(data, segment_ids, num_segments, name=None)` {#unsorted_segment_max}
-
-Computes the Max along segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-This operator is similar to the [unsorted segment sum operator](../../api_docs/python/math_ops.md#UnsortedSegmentSum).
-Instead of computing the sum over segments, it computes the maximum
-such that:
-
-\\(output_i = \max_j data_j\\) where max is over `j` such
-that `segment_ids[j] == i`.
-
-If the maximum is empty for a given segment ID `i`, it outputs the smallest
-possible value for the specific numeric type,
-`output[i] = numeric_limits<T>::min()`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/UnsortedSegmentSum.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`segment_ids`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor whose size is equal to the size of `data`'s
- first dimension.
-* <b>`num_segments`</b>: A `Tensor` of type `int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `num_segments`.
-
-
-- - -
-
-### `tf.sparse_segment_sum(data, indices, segment_ids, name=None)` {#sparse_segment_sum}
-
-Computes the sum along sparse segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first
-dimension, selecting a subset of dimension 0, specified by `indices`.
-
-For example:
-
-```prettyprint
-c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
-
-# Select two rows, one segment.
-tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
- ==> [[0 0 0 0]]
-
-# Select two rows, two segments.
-tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
- ==> [[ 1 2 3 4]
- [-1 -2 -3 -4]]
-
-# Select all rows, two segments.
-tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
- ==> [[0 0 0 0]
- [5 6 7 8]]
-
-# Which is equivalent to:
-tf.segment_sum(c, tf.constant([0, 0, 1]))
-```
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor. Has same rank as `segment_ids`.
-* <b>`segment_ids`</b>: A `Tensor` of type `int32`.
- A 1-D tensor. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
-
-- - -
-
-### `tf.sparse_segment_mean(data, indices, segment_ids, name=None)` {#sparse_segment_mean}
-
-Computes the mean along sparse segments of a tensor.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first
-dimension, selecting a subset of dimension 0, specified by `indices`.
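-
-For example, with an illustrative constant `c`:
-
-```prettyprint
-c = tf.constant([[1., 2.], [3., 4.], [5., 6.]])
-
-# Average rows 0 and 2 into a single segment.
-tf.sparse_segment_mean(c, tf.constant([0, 2]), tf.constant([0, 0]))
-  ==> [[3. 4.]]
-```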
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor. Has same rank as `segment_ids`.
-* <b>`segment_ids`</b>: A `Tensor` of type `int32`.
- A 1-D tensor. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
-
-- - -
-
-### `tf.sparse_segment_sqrt_n(data, indices, segment_ids, name=None)` {#sparse_segment_sqrt_n}
-
-Computes the sum along sparse segments of a tensor divided by the sqrt of N.
-
-N is the size of the segment being reduced.
-
-Read [the section on
-Segmentation](../../api_docs/python/math_ops.md#segmentation) for an explanation
-of segments.
-
-##### Args:
-
-
-* <b>`data`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A 1-D tensor. Has same rank as `segment_ids`.
-* <b>`segment_ids`</b>: A `Tensor` of type `int32`.
- A 1-D tensor. Values should be sorted and can be repeated.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `data`.
- Has same shape as data, except for dimension 0 which
- has size `k`, the number of segments.
-
-
-- - -
-
-### `tf.argmin(input, axis=None, name=None, dimension=None)` {#argmin}
-
-Returns the index with the smallest value across axes of a tensor.
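-
-For example, with illustrative values:
-
-```python
-# 'x' is [3, 1, 2]
-tf.argmin(x, axis=0) ==> 1
-```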
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`axis`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- int32, 0 <= axis < rank(input). Describes which axis
- of the input Tensor to reduce across. For vectors, use axis = 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
-
-
-- - -
-
-### `tf.argmax(input, axis=None, name=None, dimension=None)` {#argmax}
-
-Returns the index with the largest value across axes of a tensor.
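-
-For example, with illustrative values:
-
-```python
-# 'x' is [[1, 4, 3]
-#         [6, 2, 5]]
-tf.argmax(x, axis=0) ==> [1, 0, 1]
-tf.argmax(x, axis=1) ==> [1, 0]
-```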
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-* <b>`axis`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- int32, 0 <= axis < rank(input). Describes which axis
- of the input Tensor to reduce across. For vectors, use axis = 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
-
-
-- - -
-
-### `tf.setdiff1d(x, y, index_dtype=tf.int32, name=None)` {#setdiff1d}
-
-Computes the difference between two lists of numbers or strings.
-
-Given a list `x` and a list `y`, this operation returns a list `out` that
-represents all values that are in `x` but not in `y`. The returned list `out`
-is sorted in the same order that the numbers appear in `x` (duplicates are
-preserved). This operation also returns a list `idx` that represents the
-position of each `out` element in `x`. In other words:
-
-`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`
-
-For example, given this input:
-
-```prettyprint
-x = [1, 2, 3, 4, 5, 6]
-y = [1, 3, 5]
-```
-
-This operation would return:
-
-```prettyprint
-out ==> [2, 4, 6]
-idx ==> [1, 3, 5]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. 1-D. Values to keep.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`. 1-D. Values to remove.
-* <b>`index_dtype`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (out, idx).
-
-* <b>`out`</b>: A `Tensor`. Has the same type as `x`. 1-D. Values present in `x` but not in `y`.
-* <b>`idx`</b>: A `Tensor` of type `index_dtype`. 1-D. Positions of `x` values preserved in `out`.
-
-
-- - -
-
-### `tf.where(condition, x=None, y=None, name=None)` {#where}
-
-Return the elements, either from `x` or `y`, depending on the `condition`.
-
-If both `x` and `y` are None, then this operation returns the coordinates of
-true elements of `condition`. The coordinates are returned in a 2-D tensor
-where the first dimension (rows) represents the number of true elements, and
-the second dimension (columns) represents the coordinates of the true
-elements. Keep in mind, the shape of the output tensor can vary depending on
-how many true values there are in input. Indices are output in row-major
-order.
-
-If both are non-None, `x` and `y` must have the same shape.
-The `condition` tensor must be a scalar if `x` and `y` are scalar.
-If `x` and `y` are vectors or higher rank, then `condition` must be either a
-vector with size matching the first dimension of `x`, or must have the same
-shape as `x`.
-
-The `condition` tensor acts as a mask that chooses, based on the value at each
-element, whether the corresponding element / row in the output should be taken
-from `x` (if true) or `y` (if false).
-
-If `condition` is a vector and `x` and `y` are higher rank matrices, then it
-chooses which row (outer dimension) to copy from `x` and `y`. If `condition`
-has the same shape as `x` and `y`, then it chooses which element to copy from
-`x` and `y`.
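-
-For example, with illustrative values:
-
-```python
-# Coordinates of true elements, in row-major order:
-tf.where([[True, False], [False, True]]) ==> [[0, 0], [1, 1]]
-
-# Element-wise selection:
-# 'x' is [1, 2], 'y' is [3, 4]
-tf.where([True, False], x, y) ==> [1, 4]
-```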
-
-##### Args:
-
-
-* <b>`condition`</b>: A `Tensor` of type `bool`
-* <b>`x`</b>: A Tensor which may have the same shape as `condition`. If `condition` is
- rank 1, `x` may have higher rank, but its first dimension must match the
- size of `condition`.
-* <b>`y`</b>: A `tensor` with the same shape and type as `x`.
-* <b>`name`</b>: A name of the operation (optional)
-
-##### Returns:
-
- If `x` and `y` are non-None, a `Tensor` with the same type and shape as
- `x` and `y`. Otherwise, a 2-D `Tensor` of coordinates with shape
- `(num_true, dim_size(condition))`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When exactly one of `x` or `y` is non-None.
-
-
-- - -
-
-### `tf.unique(x, out_idx=None, name=None)` {#unique}
-
-Finds unique elements in a 1-D tensor.
-
-This operation returns a tensor `y` containing all of the unique elements of `x`
-sorted in the same order that they occur in `x`. This operation also returns a
-tensor `idx` the same size as `x` that contains the index of each value of `x`
-in the unique output `y`. In other words:
-
-`y[idx[i]] = x[i] for i in [0, 1,...,len(x) - 1]`
-
-For example:
-
-```prettyprint
-# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
-y, idx = unique(x)
-y ==> [1, 2, 4, 7, 8]
-idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. 1-D.
-* <b>`out_idx`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (y, idx).
-
-* <b>`y`</b>: A `Tensor`. Has the same type as `x`. 1-D.
-* <b>`idx`</b>: A `Tensor` of type `out_idx`. 1-D.
-
-
-- - -
-
-### `tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')` {#edit_distance}
-
-Computes the Levenshtein distance between sequences.
-
-This operation takes variable-length sequences (`hypothesis` and `truth`),
-each provided as a `SparseTensor`, and computes the Levenshtein distance.
-You can normalize the edit distance by length of `truth` by setting
-`normalize` to true.
-
-For example, given the following input:
-
-```python
-# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
-# (0,0) = ["a"]
-# (1,0) = ["b"]
-hypothesis = tf.SparseTensor(
- [[0, 0, 0],
- [1, 0, 0]],
- ["a", "b"]
- (2, 1, 1))
-
-# 'truth' is a tensor of shape `[2, 2]` with variable-length values:
-# (0,0) = []
-# (0,1) = ["a"]
-# (1,0) = ["b", "c"]
-# (1,1) = ["a"]
-truth = tf.SparseTensor(
- [[0, 1, 0],
- [1, 0, 0],
- [1, 0, 1],
- [1, 1, 0]],
- ["a", "b", "c", "a"],
- (2, 2, 2))
-
-normalize = True
-```
-
-This operation would return the following:
-
-```python
-# 'output' is a tensor of shape `[2, 2]` with edit distances normalized
-# by 'truth' lengths.
-output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis
- [0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis
-```
-
-##### Args:
-
-
-* <b>`hypothesis`</b>: A `SparseTensor` containing hypothesis sequences.
-* <b>`truth`</b>: A `SparseTensor` containing truth sequences.
-* <b>`normalize`</b>: A `bool`. If `True`, normalizes the Levenshtein distance by
- length of `truth.`
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A dense `Tensor` with rank `R - 1`, where R is the rank of the
- `SparseTensor` inputs `hypothesis` and `truth`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If either `hypothesis` or `truth` are not a `SparseTensor`.
-
-
-- - -
-
-### `tf.invert_permutation(x, name=None)` {#invert_permutation}
-
-Computes the inverse permutation of a tensor.
-
-This operation computes the inverse of an index permutation. It takes a 1-D
-integer tensor `x`, which represents the indices of a zero-based array, and
-swaps each value with its index position. In other words, for an output tensor
-`y` and an input tensor `x`, this operation computes the following:
-
-`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`
-
-The values must include 0. There can be no duplicate values or negative values.
-
-For example:
-
-```prettyprint
-# tensor `x` is [3, 4, 0, 2, 1]
-invert_permutation(x) ==> [2, 4, 3, 0, 1]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`. 1-D.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`. 1-D.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/nn.md b/tensorflow/g3doc/api_docs/python/nn.md
deleted file mode 100644
index cda8959391..0000000000
--- a/tensorflow/g3doc/api_docs/python/nn.md
+++ /dev/null
@@ -1,3634 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Neural Network
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-## Neural network support. See the @{$python/nn} guide.
-
-- - -
-
-### `tf.nn.relu(features, name=None)` {#relu}
-
-Computes rectified linear: `max(features, 0)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
-
-- - -
-
-### `tf.nn.relu6(features, name=None)` {#relu6}
-
-Computes Rectified Linear 6: `min(max(features, 0), 6)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,
- `int16`, or `int8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `features`.
-
-
-- - -
-
-### `tf.nn.crelu(features, name=None)` {#crelu}
-
-Computes Concatenated ReLU.
-
-Concatenates a ReLU which selects only the positive part of the activation
-with a ReLU which selects only the *negative* part of the activation.
-Note that as a result this non-linearity doubles the depth of the activations.
-Source: https://arxiv.org/abs/1603.05201
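-
-A minimal sketch, with an illustrative input:
-
-```python
-# 'x' is [1., -2.]
-tf.nn.crelu(x) ==> [1., 0., 0., 2.]  # [relu(x), relu(-x)] concatenated
-```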
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor` with type `float`, `double`, `int32`, `int64`, `uint8`,
- `int16`, or `int8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `features`.
-
-
-- - -
-
-### `tf.nn.elu(features, name=None)` {#elu}
-
-Computes exponential linear: `exp(features) - 1` if `features < 0`, `features` otherwise.
-
-See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
-](http://arxiv.org/abs/1511.07289)
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
-
-- - -
-
-### `tf.nn.softplus(features, name=None)` {#softplus}
-
-Computes softplus: `log(exp(features) + 1)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
-
-- - -
-
-### `tf.nn.softsign(features, name=None)` {#softsign}
-
-Computes softsign: `features / (abs(features) + 1)`.
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `features`.
-
-
-- - -
-
-### `tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)` {#dropout}
-
-Computes dropout.
-
-With probability `keep_prob`, outputs the input element scaled up by
-`1 / keep_prob`, otherwise outputs `0`. The scaling is so that the expected
-sum is unchanged.
-
-By default, each element is kept or dropped independently. If `noise_shape`
-is specified, it must be
-[broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]`
-will make independent decisions. For example, if `shape(x) = [k, l, m, n]`
-and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be
-kept independently and each row and column will be kept or not kept together.
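-
-A minimal sketch of the `noise_shape` behavior described above (shapes chosen
-for illustration):
-
-```python
-x = tf.ones([2, 3, 4, 5])
-# Batch and channel components are kept independently; each row and column
-# is kept or dropped together. Kept elements are scaled by 1 / keep_prob.
-y = tf.nn.dropout(x, keep_prob=0.5, noise_shape=[2, 1, 1, 5])
-```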
-
-##### Args:
-
-
-* <b>`x`</b>: A tensor.
-* <b>`keep_prob`</b>: A scalar `Tensor` with the same type as x. The probability
- that each element is kept.
-* <b>`noise_shape`</b>: A 1-D `Tensor` of type `int32`, representing the
- shape for randomly generated keep/drop flags.
-* <b>`seed`</b>: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A Tensor of the same shape of `x`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `keep_prob` is not in `(0, 1]`.
-
-
-- - -
-
-### `tf.nn.bias_add(value, bias, data_format=None, name=None)` {#bias_add}
-
-Adds `bias` to `value`.
-
-This is (mostly) a special case of `tf.add` where `bias` is restricted to 1-D.
-Broadcasting is supported, so `value` may have any number of dimensions.
-Unlike `tf.add`, the type of `bias` is allowed to differ from `value` in the
-case where both types are quantized.
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor` with type `float`, `double`, `int64`, `int32`, `uint8`,
- `int16`, `int8`, `complex64`, or `complex128`.
-* <b>`bias`</b>: A 1-D `Tensor` with size matching the last dimension of `value`.
- Must be the same type as `value` unless `value` is a quantized type,
- in which case a different quantized type may be used.
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-
-- - -
-
-### `tf.sigmoid(x, name=None)` {#sigmoid}
-
-Computes sigmoid of `x` element-wise.
-
-Specifically, `y = 1 / (1 + exp(-x))`.
-
-##### Args:
-
-
-* <b>`x`</b>: A Tensor with type `float32`, `float64`, `int32`, `complex64`, `int64`,
- or `qint32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A Tensor with the same type as `x` if `x.dtype != qint32`
- otherwise the return type is `quint8`.
-
-@compatibility(numpy)
-Equivalent to scipy.special.expit
-@end_compatibility
-
-
-- - -
-
-### `tf.tanh(x, name=None)` {#tanh}
-
-Computes hyperbolic tangent of `x` element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A Tensor or SparseTensor with type `float`, `double`, `int32`,
- `complex64`, `int64`, or `qint32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A Tensor or SparseTensor respectively with the same type as `x` if
- `x.dtype != qint32` otherwise the return type is `quint8`.
-
-
-- - -
-
-### `tf.nn.convolution(input, filter, padding, strides=None, dilation_rate=None, name=None, data_format=None)` {#convolution}
-
-Computes sums of N-D convolutions (actually cross-correlation).
-
-This also supports either output striding via the optional `strides` parameter
-or atrous convolution (also known as convolution with holes or dilated
-convolution, based on the French word "trous" meaning holes in English) via
-the optional `dilation_rate` parameter. Currently, however, output striding
-is not supported for atrous convolutions.
-
-Specifically, in the case that `data_format` does not start with "NC", given
-a rank (N+2) `input` Tensor of shape
-
- [num_batches,
- input_spatial_shape[0],
- ...,
- input_spatial_shape[N-1],
- num_input_channels],
-
-a rank (N+2) `filter` Tensor of shape
-
- [spatial_filter_shape[0],
- ...,
- spatial_filter_shape[N-1],
- num_input_channels,
- num_output_channels],
-
-an optional `dilation_rate` tensor of shape [N] (defaulting to [1]*N)
-specifying the filter upsampling/input downsampling rate, and an optional list
-of N `strides` (defaulting to [1]*N), this computes for each N-D spatial output
-position (x[0], ..., x[N-1]):
-
- output[b, x[0], ..., x[N-1], k] =
-
- sum_{z[0], ..., z[N-1], q}
-
- filter[z[0], ..., z[N-1], q, k] *
- padded_input[b,
- x[0]*strides[0] + dilation_rate[0]*z[0],
- ...,
- x[N-1]*strides[N-1] + dilation_rate[N-1]*z[N-1],
- q]
-
-where `padded_input` is obtained by zero padding the input using an effective
-spatial filter shape of `(spatial_filter_shape-1) * dilation_rate + 1` and
-output striding `strides` as described in the
-[comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution).
-
-In the case that `data_format` does start with `"NC"`, the `input` and output
-(but not the `filter`) are simply transposed as follows:
-
- convolution(input, data_format, **kwargs) =
- tf.transpose(convolution(tf.transpose(input, [0] + range(2,N+2) + [1]),
- **kwargs),
- [0, N+1] + range(1, N+1))
-
-It is required that 1 <= N <= 3.
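-
-A minimal shape sketch for N=2 (sizes chosen for illustration):
-
-```python
-x = tf.ones([1, 10, 10, 3])            # NHWC input
-w = tf.ones([3, 3, 3, 8])              # 3x3 filters, 3 -> 8 channels
-y = tf.nn.convolution(x, w, padding="SAME")
-# y has shape [1, 10, 10, 8]
-```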
-
-##### Args:
-
-
-* <b>`input`</b>: An N-D `Tensor` of type `T`, of shape
- `[batch_size] + input_spatial_shape + [in_channels]` if data_format does
- not start with "NC" (default), or
- `[batch_size, in_channels] + input_spatial_shape` if data_format starts
- with "NC".
-* <b>`filter`</b>: An N-D `Tensor` with the same type as `input` and shape
- `spatial_filter_shape + [in_channels, out_channels]`.
-* <b>`padding`</b>: A string, either `"VALID"` or `"SAME"`. The padding algorithm.
-* <b>`strides`</b>: Optional. Sequence of N ints >= 1. Specifies the output stride.
- Defaults to [1]*N. If any value of strides is > 1, then all values of
- dilation_rate must be 1.
-* <b>`dilation_rate`</b>: Optional. Sequence of N ints >= 1. Specifies the filter
- upsampling/input downsampling rate. In the literature, the same parameter
- is sometimes called `input stride` or `dilation`. The effective filter
- size used for the convolution will be `spatial_filter_shape +
- (spatial_filter_shape - 1) * (rate - 1)`, obtained by inserting
- (dilation_rate[i]-1) zeros between consecutive elements of the original
- filter in each spatial dimension i. If any value of dilation_rate is > 1,
- then all values of strides must be 1.
-* <b>`name`</b>: Optional name for the returned tensor.
-* <b>`data_format`</b>: A string or None. Specifies whether the channel dimension of
- the `input` and output is the last dimension (default, or if `data_format`
- does not start with "NC"), or the second dimension (if `data_format`
- starts with "NC"). For N=1, the valid values are "NWC" (default) and
- "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For
- N=3, the valid value is "NDHWC".
-
-##### Returns:
-
- A `Tensor` with the same type as `input` of shape
-
- `[batch_size] + output_spatial_shape + [out_channels]`
-
- if data_format is None or does not start with "NC", or
-
- `[batch_size, out_channels] + output_spatial_shape`
-
- if data_format starts with "NC",
- where `output_spatial_shape` depends on the value of `padding`.
-
- If padding == "SAME":
- output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])
-
- If padding == "VALID":
- output_spatial_shape[i] =
- ceil((input_spatial_shape[i] -
- (spatial_filter_shape[i]-1) * dilation_rate[i])
- / strides[i]).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filter` shape, if padding
- is other than `"VALID"` or `"SAME"`, or if data_format is invalid.
-
-
-- - -
-
-### `tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv2d}
-
-Computes a 2-D convolution given 4-D `input` and `filter` tensors.
-
-Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
-and a filter / kernel tensor of shape
-`[filter_height, filter_width, in_channels, out_channels]`, this op
-performs the following:
-
-1. Flattens the filter to a 2-D matrix with shape
- `[filter_height * filter_width * in_channels, output_channels]`.
-2. Extracts image patches from the input tensor to form a *virtual*
- tensor of shape `[batch, out_height, out_width,
- filter_height * filter_width * in_channels]`.
-3. For each patch, right-multiplies the filter matrix and the image patch
- vector.
-
-In detail, with the default NHWC format,
-
- output[b, i, j, k] =
- sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
- filter[di, dj, q, k]
-
-Must have `strides[0] = strides[3] = 1`. For the most common case of the same
-horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
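-
-A minimal shape sketch (sizes chosen for illustration):
-
-```python
-images = tf.ones([8, 28, 28, 1])       # NHWC input
-kernel = tf.ones([5, 5, 1, 32])        # 5x5 filters, 1 -> 32 channels
-out = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding="SAME")
-# out has shape [8, 28, 28, 32]
-```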
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
-* <b>`filter`</b>: A `Tensor`. Must have the same type as `input`.
-* <b>`strides`</b>: A list of `ints`.
- 1-D of length 4. The stride of the sliding window for each dimension
- of `input`. Must be in the same order as the dimension specified with format.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`use_cudnn_on_gpu`</b>: An optional `bool`. Defaults to `True`.
-* <b>`data_format`</b>: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`.
- Specify the data format of the input and output data. With the
- default format "NHWC", the data is stored in the order of:
- [batch, in_height, in_width, in_channels].
- Alternatively, the format could be "NCHW", the data storage order of:
- [batch, in_channels, in_height, in_width].
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
-
-- - -
-
-### `tf.nn.depthwise_conv2d(input, filter, strides, padding, rate=None, name=None)` {#depthwise_conv2d}
-
-Depthwise 2-D convolution.
-
-Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
-and a filter tensor of shape
-`[filter_height, filter_width, in_channels, channel_multiplier]`
-containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d`
-applies a different filter to each input channel (expanding from 1 channel
-to `channel_multiplier` channels for each), then concatenates the results
-together. The output has `in_channels * channel_multiplier` channels.
-
-In detail,
-
- output[b, i, j, k * channel_multiplier + q] = sum_{di, dj}
- filter[di, dj, k, q] * input[b, strides[1] * i + rate[0] * di,
- strides[2] * j + rate[1] * dj, k]
-
-Must have `strides[0] = strides[3] = 1`. For the most common case of the
-same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-If any value in `rate` is greater than 1, we perform atrous depthwise
-convolution, in which case all values in the `strides` tensor must be equal
-to 1.
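-
-A minimal shape sketch (sizes chosen for illustration):
-
-```python
-x = tf.ones([1, 8, 8, 3])
-w = tf.ones([3, 3, 3, 2])              # channel_multiplier = 2
-y = tf.nn.depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
-# y has shape [1, 8, 8, 6], i.e. in_channels * channel_multiplier channels
-```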
-
-##### Args:
-
-
-* <b>`input`</b>: 4-D with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`filter`</b>: 4-D with shape
- `[filter_height, filter_width, in_channels, channel_multiplier]`.
-* <b>`strides`</b>: 1-D of size 4. The stride of the sliding window for each
- dimension of `input`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment
- here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`rate`</b>: 1-D of size 2. The dilation rate in which we sample input values
- across the `height` and `width` dimensions in atrous convolution. If it is
- greater than 1, then all values of strides must be 1.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A 4-D `Tensor` of shape
- `[batch, out_height, out_width, in_channels * channel_multiplier].`
-
-
-- - -
-
-### `tf.nn.depthwise_conv2d_native(input, filter, strides, padding, name=None)` {#depthwise_conv2d_native}
-
-Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.
-
-Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
-and a filter / kernel tensor of shape
-`[filter_height, filter_width, in_channels, channel_multiplier]`, containing
-`in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies
-a different filter to each input channel (expanding from 1 channel to
-`channel_multiplier` channels for each), then concatenates the results
-together. Thus, the output has `in_channels * channel_multiplier` channels.
-
-for k in 0..in_channels-1
- for q in 0..channel_multiplier-1
- output[b, i, j, k * channel_multiplier + q] =
- sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
- filter[di, dj, k, q]
-
-Must have `strides[0] = strides[3] = 1`. For the most common case of the same
-horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
-* <b>`filter`</b>: A `Tensor`. Must have the same type as `input`.
-* <b>`strides`</b>: A list of `ints`.
- 1-D of length 4. The stride of the sliding window for each dimension
- of `input`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
-
-- - -
-
-### `tf.nn.separable_conv2d(input, depthwise_filter, pointwise_filter, strides, padding, rate=None, name=None)` {#separable_conv2d}
-
-2-D convolution with separable filters.
-
-Performs a depthwise convolution that acts separately on channels followed by
-a pointwise convolution that mixes channels. Note that this is separability
-between dimensions `[1, 2]` and `3`, not spatial separability between
-dimensions `1` and `2`.
-
-In detail,
-
- output[b, i, j, k] = sum_{di, dj, q, r}
- input[b, strides[1] * i + di, strides[2] * j + dj, q] *
- depthwise_filter[di, dj, q, r] *
- pointwise_filter[0, 0, q * channel_multiplier + r, k]
-
-`strides` controls the strides for the depthwise convolution only, since
-the pointwise convolution has implicit strides of `[1, 1, 1, 1]`. Must have
-`strides[0] = strides[3] = 1`. For the most common case of the same
-horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
-If any value in `rate` is greater than 1, we perform atrous depthwise
-convolution, in which case all values in the `strides` tensor must be equal
-to 1.
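-
-A minimal shape sketch (sizes chosen for illustration):
-
-```python
-x = tf.ones([1, 8, 8, 3])
-dw = tf.ones([3, 3, 3, 2])             # depthwise: channel_multiplier = 2
-pw = tf.ones([1, 1, 6, 16])            # pointwise: 3 * 2 = 6 -> 16 channels
-y = tf.nn.separable_conv2d(x, dw, pw, strides=[1, 1, 1, 1], padding="SAME")
-# y has shape [1, 8, 8, 16]
-```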
-
-##### Args:
-
-
-* <b>`input`</b>: 4-D `Tensor` with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`depthwise_filter`</b>: 4-D `Tensor` with shape
- `[filter_height, filter_width, in_channels, channel_multiplier]`.
- Contains `in_channels` convolutional filters of depth 1.
-* <b>`pointwise_filter`</b>: 4-D `Tensor` with shape
- `[1, 1, channel_multiplier * in_channels, out_channels]`. Pointwise
- filter to mix channels after `depthwise_filter` has convolved spatially.
-* <b>`strides`</b>: 1-D of size 4. The strides for the depthwise convolution for
- each dimension of `input`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment
- here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`rate`</b>: 1-D of size 2. The dilation rate in which we sample input values
- across the `height` and `width` dimensions in atrous convolution. If it is
- greater than 1, then all values of strides must be 1.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A 4-D `Tensor` of shape `[batch, out_height, out_width, out_channels]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If channel_multiplier * in_channels > out_channels,
- which means that the separable convolution is overparameterized.
-
-
-- - -
-
-### `tf.nn.atrous_conv2d(value, filters, rate, padding, name=None)` {#atrous_conv2d}
-
-Atrous convolution (a.k.a. convolution with holes or dilated convolution).
-
-Computes a 2-D atrous convolution, also known as convolution with holes or
-dilated convolution, given 4-D `value` and `filters` tensors. If the `rate`
-parameter is equal to one, it performs regular 2-D convolution. If the `rate`
-parameter is greater than one, it performs convolution with holes, sampling
-the input values every `rate` pixels in the `height` and `width` dimensions.
-This is equivalent to convolving the input with a set of upsampled filters,
-produced by inserting `rate - 1` zeros between two consecutive values of the
-filters along the `height` and `width` dimensions, hence the name atrous
-convolution or convolution with holes (the French word trous means holes in
-English).
-
-More specifically:
-
- output[b, i, j, k] = sum_{di, dj, q} filters[di, dj, q, k] *
- value[b, i + rate * di, j + rate * dj, q]
-
-Atrous convolution allows us to explicitly control how densely to compute
-feature responses in fully convolutional networks. Used in conjunction with
-bilinear interpolation, it offers an alternative to `conv2d_transpose` in
-dense prediction tasks such as semantic image segmentation, optical flow
-computation, or depth estimation. It also allows us to effectively enlarge
-the field of view of filters without increasing the number of parameters or
-the amount of computation.
-
-For a description of atrous convolution and how it can be used for dense
-feature extraction, please see: [Semantic Image Segmentation with Deep
-Convolutional Nets and Fully Connected CRFs](http://arxiv.org/abs/1412.7062).
-The same operation is investigated further in [Multi-Scale Context Aggregation
-by Dilated Convolutions](http://arxiv.org/abs/1511.07122). Previous works
-that effectively use atrous convolution in different ways are, among others,
-[OverFeat: Integrated Recognition, Localization and Detection using
-Convolutional Networks](http://arxiv.org/abs/1312.6229) and [Fast Image
-Scanning with Deep Max-Pooling Convolutional Neural Networks](http://arxiv.org/abs/1302.1700).
-Atrous convolution is also closely related to the so-called noble identities
-in multi-rate signal processing.
-
-There are many different ways to implement atrous convolution (see the refs
-above). The implementation here reduces
-
-```python
- atrous_conv2d(value, filters, rate, padding=padding)
-```
-
-to the following three operations:
-
-```python
- paddings = ...
- net = space_to_batch(value, paddings, block_size=rate)
- net = conv2d(net, filters, strides=[1, 1, 1, 1], padding="VALID")
- crops = ...
- net = batch_to_space(net, crops, block_size=rate)
-```
-
-Advanced usage. Note the following optimization: A sequence of `atrous_conv2d`
-operations with identical `rate` parameters, 'SAME' `padding`, and filters
-with odd heights/widths:
-
-```python
- net = atrous_conv2d(net, filters1, rate, padding="SAME")
- net = atrous_conv2d(net, filters2, rate, padding="SAME")
- ...
- net = atrous_conv2d(net, filtersK, rate, padding="SAME")
-```
-
-can be equivalently performed cheaper in terms of computation and memory as:
-
-```python
- pad = ... # padding so that the input dims are multiples of rate
- net = space_to_batch(net, paddings=pad, block_size=rate)
- net = conv2d(net, filters1, strides=[1, 1, 1, 1], padding="SAME")
- net = conv2d(net, filters2, strides=[1, 1, 1, 1], padding="SAME")
- ...
- net = conv2d(net, filtersK, strides=[1, 1, 1, 1], padding="SAME")
- net = batch_to_space(net, crops=pad, block_size=rate)
-```
-
-because a pair of consecutive `space_to_batch` and `batch_to_space` ops with
-the same `block_size` cancel out when their respective `paddings` and `crops`
-inputs are identical.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of type `float`. It needs to be in the default "NHWC"
- format. Its shape is `[batch, in_height, in_width, in_channels]`.
-* <b>`filters`</b>: A 4-D `Tensor` with the same type as `value` and shape
- `[filter_height, filter_width, in_channels, out_channels]`. `filters`'
- `in_channels` dimension must match that of `value`. Atrous convolution is
- equivalent to standard convolution with upsampled filters with effective
- height `filter_height + (filter_height - 1) * (rate - 1)` and effective
- width `filter_width + (filter_width - 1) * (rate - 1)`, produced by
- inserting `rate - 1` zeros along consecutive elements across the
- `filters`' spatial dimensions.
-* <b>`rate`</b>: A positive int32. The stride with which we sample input values across
- the `height` and `width` dimensions. Equivalently, the rate by which we
- upsample the filter values by inserting zeros across the `height` and
- `width` dimensions. In the literature, the same parameter is sometimes
- called `input stride` or `dilation`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filters`' shape, or if
- padding is other than `'VALID'` or `'SAME'`.
-
-
-- - -
-
-### `tf.nn.atrous_conv2d_transpose(value, filters, output_shape, rate, padding, name=None)` {#atrous_conv2d_transpose}
-
-The transpose of `atrous_conv2d`.
-
-This operation is sometimes called "deconvolution" after [Deconvolutional
-Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is
-actually the transpose (gradient) of `atrous_conv2d` rather than an actual
-deconvolution.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of type `float`. It needs to be in the default `NHWC`
- format. Its shape is `[batch, in_height, in_width, in_channels]`.
-* <b>`filters`</b>: A 4-D `Tensor` with the same type as `value` and shape
- `[filter_height, filter_width, out_channels, in_channels]`. `filters`'
- `in_channels` dimension must match that of `value`. Atrous convolution is
- equivalent to standard convolution with upsampled filters with effective
- height `filter_height + (filter_height - 1) * (rate - 1)` and effective
- width `filter_width + (filter_width - 1) * (rate - 1)`, produced by
- inserting `rate - 1` zeros along consecutive elements across the
- `filters`' spatial dimensions.
-* <b>`output_shape`</b>: A 1-D `Tensor` with 4 elements, representing the output
-  shape of the deconvolution op.
-* <b>`rate`</b>: A positive int32. The stride with which we sample input values across
- the `height` and `width` dimensions. Equivalently, the rate by which we
- upsample the filter values by inserting zeros across the `height` and
- `width` dimensions. In the literature, the same parameter is sometimes
- called `input stride` or `dilation`.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filters`' shape, or if
- padding is other than `'VALID'` or `'SAME'`, or if the `rate` is less
- than one, or if the output_shape is not a tensor with 4 elements.
-
-
-- - -
-
-### `tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', data_format='NHWC', name=None)` {#conv2d_transpose}
-
-The transpose of `conv2d`.
-
-This operation is sometimes called "deconvolution" after [Deconvolutional
-Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is
-actually the transpose (gradient) of `conv2d` rather than an actual
-deconvolution.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of type `float` and shape
- `[batch, height, width, in_channels]` for `NHWC` data format or
- `[batch, in_channels, height, width]` for `NCHW` data format.
-* <b>`filter`</b>: A 4-D `Tensor` with the same type as `value` and shape
- `[height, width, output_channels, in_channels]`. `filter`'s
- `in_channels` dimension must match that of `value`.
-* <b>`output_shape`</b>: A 1-D `Tensor` representing the output shape of the
- deconvolution op.
-* <b>`strides`</b>: A list of ints. The stride of the sliding window for each
- dimension of the input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filter`'s shape, or if
- padding is other than `'VALID'` or `'SAME'`.
-
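-For example, a sketch that upsamples a feature map by a factor of two (the
-shapes here are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [8, 16, 16, 32])
-w = tf.get_variable("w", [3, 3, 64, 32])   # [height, width, out_channels, in_channels]
-y = tf.nn.conv2d_transpose(x, w, output_shape=[8, 32, 32, 64],
-                           strides=[1, 2, 2, 1], padding="SAME")
-# Upsamples 16x16 -> 32x32 while mapping 32 -> 64 channels.
-```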
-
-- - -
-
-### `tf.nn.conv1d(value, filters, stride, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv1d}
-
-Computes a 1-D convolution given 3-D input and filter tensors.
-
-Given an input tensor of shape
- [batch, in_width, in_channels]
-if data_format is "NHWC", or
- [batch, in_channels, in_width]
-if data_format is "NCHW",
-and a filter / kernel tensor of shape
-[filter_width, in_channels, out_channels], this op reshapes
-the arguments to pass them to conv2d to perform the equivalent
-convolution operation.
-
-Internally, this op reshapes the input tensors and invokes `tf.nn.conv2d`.
-For example, if `data_format` does not start with "NC", a tensor of shape
- [batch, in_width, in_channels]
-is reshaped to
- [batch, 1, in_width, in_channels],
-and the filter is reshaped to
- [1, filter_width, in_channels, out_channels].
-The result is then reshaped back to
- [batch, out_width, out_channels]
-(where out_width is a function of the stride and padding as in conv2d) and
-returned to the caller.
-
-##### Args:
-
-
-* <b>`value`</b>: A 3D `Tensor`. Must be of type `float32` or `float64`.
-* <b>`filters`</b>: A 3D `Tensor`. Must have the same type as `value`.
-* <b>`stride`</b>: An `integer`. The number of entries by which
- the filter is moved right at each step.
-* <b>`padding`</b>: 'SAME' or 'VALID'
-* <b>`use_cudnn_on_gpu`</b>: An optional `bool`. Defaults to `True`.
-* <b>`data_format`</b>: An optional `string` from `"NHWC", "NCHW"`. Defaults
- to `"NHWC"`, the data is stored in the order of
- [batch, in_width, in_channels]. The `"NCHW"` format stores
- data as [batch, in_channels, in_width].
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `data_format` is invalid.
-
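-A short sketch (shapes illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [4, 100, 8])   # [batch, in_width, in_channels]
-w = tf.get_variable("w", [5, 8, 16])          # [filter_width, in_channels, out_channels]
-y = tf.nn.conv1d(x, w, stride=2, padding="VALID")
-# With 'VALID' padding, out_width = ceil((100 - 5 + 1) / 2) = 48,
-# so y has shape [4, 48, 16].
-```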
-
-- - -
-
-### `tf.nn.conv3d(input, filter, strides, padding, name=None)` {#conv3d}
-
-Computes a 3-D convolution given 5-D `input` and `filter` tensors.
-
-In signal processing, cross-correlation is a measure of similarity of
-two waveforms as a function of a time-lag applied to one of them. This
-is also known as a sliding dot product or sliding inner-product.
-
-Our Conv3D implements a form of cross-correlation.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Shape `[batch, in_depth, in_height, in_width, in_channels]`.
-* <b>`filter`</b>: A `Tensor`. Must have the same type as `input`.
- Shape `[filter_depth, filter_height, filter_width, in_channels,
- out_channels]`. `in_channels` must match between `input` and `filter`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The stride of the sliding window for each
- dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
-
-- - -
-
-### `tf.nn.conv3d_transpose(value, filter, output_shape, strides, padding='SAME', name=None)` {#conv3d_transpose}
-
-The transpose of `conv3d`.
-
-This operation is sometimes called "deconvolution" after [Deconvolutional
-Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf), but is
-actually the transpose (gradient) of `conv3d` rather than an actual
-deconvolution.
-
-##### Args:
-
-
-* <b>`value`</b>: A 5-D `Tensor` of type `float` and shape
- `[batch, depth, height, width, in_channels]`.
-* <b>`filter`</b>: A 5-D `Tensor` with the same type as `value` and shape
- `[depth, height, width, output_channels, in_channels]`. `filter`'s
- `in_channels` dimension must match that of `value`.
-* <b>`output_shape`</b>: A 1-D `Tensor` representing the output shape of the
- deconvolution op.
-* <b>`strides`</b>: A list of ints. The stride of the sliding window for each
- dimension of the input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`name`</b>: Optional name for the returned tensor.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If input/output depth does not match `filter`'s shape, or if
- padding is other than `'VALID'` or `'SAME'`.
-
-
-- - -
-
-### `tf.nn.conv2d_backprop_filter(input, filter_sizes, out_backprop, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv2d_backprop_filter}
-
-Computes the gradients of convolution with respect to the filter.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
- 4-D with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`filter_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the tensor shape of `filter`,
- where `filter` is a 4-D
- `[filter_height, filter_width, in_channels, out_channels]` tensor.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `input`.
- 4-D with shape `[batch, out_height, out_width, out_channels]`.
- Gradients w.r.t. the output of the convolution.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- of the convolution. Must be in the same order as the dimensions
- specified by the data format.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`use_cudnn_on_gpu`</b>: An optional `bool`. Defaults to `True`.
-* <b>`data_format`</b>: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`.
- Specify the data format of the input and output data. With the
- default format "NHWC", the data is stored in the order of:
- [batch, in_height, in_width, in_channels].
- Alternatively, the format could be "NCHW", the data storage order of:
- [batch, in_channels, in_height, in_width].
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. 4-D with shape
- `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t.
- the `filter` input of the convolution.
-
-
-- - -
-
-### `tf.nn.conv2d_backprop_input(input_sizes, filter, out_backprop, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)` {#conv2d_backprop_input}
-
-Computes the gradients of convolution with respect to the input.
-
-##### Args:
-
-
-* <b>`input_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the shape of `input`,
- where `input` is a 4-D `[batch, height, width, channels]` tensor.
-* <b>`filter`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`.
- 4-D with shape
- `[filter_height, filter_width, in_channels, out_channels]`.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `filter`.
- 4-D with shape `[batch, out_height, out_width, out_channels]`.
- Gradients w.r.t. the output of the convolution.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- of the convolution. Must be in the same order as the dimensions
- specified by the data format.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`use_cudnn_on_gpu`</b>: An optional `bool`. Defaults to `True`.
-* <b>`data_format`</b>: An optional `string` from: `"NHWC", "NCHW"`. Defaults to `"NHWC"`.
- Specify the data format of the input and output data. With the
- default format "NHWC", the data is stored in the order of:
- [batch, in_height, in_width, in_channels].
- Alternatively, the format could be "NCHW", the data storage order of:
- [batch, in_channels, in_height, in_width].
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `filter`.
- 4-D with shape `[batch, in_height, in_width, in_channels]`. Gradient
- w.r.t. the input of the convolution.
-
-
-- - -
-
-### `tf.nn.conv3d_backprop_filter_v2(input, filter_sizes, out_backprop, strides, padding, name=None)` {#conv3d_backprop_filter_v2}
-
-Computes the gradients of 3-D convolution with respect to the filter.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Shape `[batch, depth, rows, cols, in_channels]`.
-* <b>`filter_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the tensor shape of `filter`,
- where `filter` is a 5-D
- `[filter_depth, filter_height, filter_width, in_channels, out_channels]`
- tensor.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `input`.
- Backprop signal of shape `[batch, out_depth, out_rows, out_cols,
- out_channels]`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The stride of the sliding window for each
- dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
-
-- - -
-
-### `tf.nn.depthwise_conv2d_native_backprop_filter(input, filter_sizes, out_backprop, strides, padding, name=None)` {#depthwise_conv2d_native_backprop_filter}
-
-Computes the gradients of depthwise convolution with respect to the filter.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 4-D with shape `[batch, in_height, in_width, in_channels]`.
-* <b>`filter_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the tensor shape of `filter`,
- where `filter` is a 4-D
- `[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `input`.
- 4-D with shape `[batch, out_height, out_width, out_channels]`.
- Gradients w.r.t. the output of the convolution.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- of the convolution.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. 4-D with shape
- `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t.
- the `filter` input of the convolution.
-
-
-- - -
-
-### `tf.nn.depthwise_conv2d_native_backprop_input(input_sizes, filter, out_backprop, strides, padding, name=None)` {#depthwise_conv2d_native_backprop_input}
-
-Computes the gradients of depthwise convolution with respect to the input.
-
-##### Args:
-
-
-* <b>`input_sizes`</b>: A `Tensor` of type `int32`.
- An integer vector representing the shape of `input`,
- where `input` is a 4-D `[batch, height, width, channels]` tensor.
-* <b>`filter`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`.
- 4-D with shape
- `[filter_height, filter_width, in_channels, depthwise_multiplier]`.
-* <b>`out_backprop`</b>: A `Tensor`. Must have the same type as `filter`.
- 4-D with shape `[batch, out_height, out_width, out_channels]`.
- Gradients w.r.t. the output of the convolution.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- of the convolution.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `filter`.
- 4-D with shape `[batch, in_height, in_width, in_channels]`. Gradient
- w.r.t. the input of the convolution.
-
-
-- - -
-
-### `tf.nn.avg_pool(value, ksize, strides, padding, data_format='NHWC', name=None)` {#avg_pool}
-
-Performs the average pooling on the input.
-
-Each entry in `output` is the mean of the corresponding size `ksize`
-window in `value`.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` of shape `[batch, height, width, channels]` and type
- `float32`, `float64`, `qint8`, `quint8`, or `qint32`.
-* <b>`ksize`</b>: A list of ints that has length >= 4.
- The size of the window for each dimension of the input tensor.
-* <b>`strides`</b>: A list of ints that has length >= 4.
- The stride of the sliding window for each dimension of the
- input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- A `Tensor` with the same type as `value`. The average pooled output tensor.
-
-
-- - -
-
-### `tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)` {#max_pool}
-
-Performs the max pooling on the input.
-
-##### Args:
-
-
-* <b>`value`</b>: A 4-D `Tensor` with shape `[batch, height, width, channels]` and
- type `tf.float32`.
-* <b>`ksize`</b>: A list of ints that has length >= 4. The size of the window for
- each dimension of the input tensor.
-* <b>`strides`</b>: A list of ints that has length >= 4. The stride of the sliding
- window for each dimension of the input tensor.
-* <b>`padding`</b>: A string, either `'VALID'` or `'SAME'`. The padding algorithm.
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`data_format`</b>: A string. 'NHWC' and 'NCHW' are supported.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
- A `Tensor` with type `tf.float32`. The max pooled output tensor.
-
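-A typical 2x2, stride-2 pooling sketch (shapes illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [1, 28, 28, 3])
-y = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
-# Non-overlapping 2x2 windows halve the spatial dimensions: y is [1, 14, 14, 3].
-```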
-
-- - -
-
-### `tf.nn.max_pool_with_argmax(input, ksize, strides, padding, Targmax=None, name=None)` {#max_pool_with_argmax}
-
-Performs max pooling on the input and outputs both max values and indices.
-
-The indices in `argmax` are flattened, so that a maximum value at position
-`[b, y, x, c]` becomes flattened index
-`((b * height + y) * width + x) * channels + c`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `half`.
- 4-D with shape `[batch, height, width, channels]`. Input to pool over.
-* <b>`ksize`</b>: A list of `ints` that has length `>= 4`.
- The size of the window for each dimension of the input tensor.
-* <b>`strides`</b>: A list of `ints` that has length `>= 4`.
- The stride of the sliding window for each dimension of the
- input tensor.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`Targmax`</b>: An optional `tf.DType` from: `tf.int32, tf.int64`. Defaults to `tf.int64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, argmax).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `input`. The max pooled output tensor.
-* <b>`argmax`</b>: A `Tensor` of type `Targmax`. 4-D. The flattened indices of the max values chosen for each output.
-
-
-- - -
-
-### `tf.nn.avg_pool3d(input, ksize, strides, padding, name=None)` {#avg_pool3d}
-
-Performs 3D average pooling on the input.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
-* <b>`ksize`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The size of the window for each dimension of
- the input tensor. Must have `ksize[0] = ksize[4] = 1`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The stride of the sliding window for each
- dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- The average pooled output tensor.
-
-
-- - -
-
-### `tf.nn.max_pool3d(input, ksize, strides, padding, name=None)` {#max_pool3d}
-
-Performs 3D max pooling on the input.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
-* <b>`ksize`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The size of the window for each dimension of
- the input tensor. Must have `ksize[0] = ksize[4] = 1`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 5`.
- 1-D tensor of length 5. The stride of the sliding window for each
- dimension of `input`. Must have `strides[0] = strides[4] = 1`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`. The max pooled output tensor.
-
-
-- - -
-
-### `tf.nn.fractional_avg_pool(value, pooling_ratio, pseudo_random=None, overlapping=None, deterministic=None, seed=None, seed2=None, name=None)` {#fractional_avg_pool}
-
-Performs fractional average pooling on the input.
-
-Fractional average pooling is similar to fractional max pooling in the
-pooling-region generation step. The only difference is that, after the
-pooling regions are generated, a mean operation is performed instead of a max
-operation in each pooling region.
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`pooling_ratio`</b>: A list of `floats` that has length `>= 4`.
- Pooling ratio for each dimension of `value`. Currently only the row and col
- dimensions are supported, and each ratio must be >= 1.0. For example, a
- valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last
- elements must be 1.0 because we don't allow pooling on the batch and
- channels dimensions; 1.44 and 1.73 are the pooling ratios on the height and
- width dimensions, respectively.
-* <b>`pseudo_random`</b>: An optional `bool`. Defaults to `False`.
- When set to True, generates the pooling sequence in a pseudorandom
- fashion; otherwise, in a random fashion. See the paper [Benjamin Graham,
- Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for the difference
- between pseudorandom and random.
-* <b>`overlapping`</b>: An optional `bool`. Defaults to `False`.
- When set to True, the values at the boundary of adjacent pooling cells
- are used by both cells. For example:
-
- `index 0 1 2 3 4`
-
- `value 20 5 16 3 7`
-
- If the pooling sequence is [0, 2, 4], then the value 16 at index 2 will be
- used twice. The result would be [41/3, 26/3] for fractional avg pooling.
-
-* <b>`deterministic`</b>: An optional `bool`. Defaults to `False`.
- When set to True, a fixed pooling region will be used when
- iterating over a FractionalAvgPool node in the computation graph. Mainly used
- in unit test to make FractionalAvgPool deterministic.
-* <b>`seed`</b>: An optional `int`. Defaults to `0`.
- If either seed or seed2 are set to be non-zero, the random number
- generator is seeded by the given seed. Otherwise, it is seeded by a
- random seed.
-* <b>`seed2`</b>: An optional `int`. Defaults to `0`.
- A second seed to avoid seed collision.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `value`. output tensor after fractional avg pooling.
-* <b>`row_pooling_sequence`</b>: A `Tensor` of type `int64`. row pooling sequence, needed to calculate gradient.
-* <b>`col_pooling_sequence`</b>: A `Tensor` of type `int64`. column pooling sequence, needed to calculate gradient.
-
-
-- - -
-
-### `tf.nn.fractional_max_pool(value, pooling_ratio, pseudo_random=None, overlapping=None, deterministic=None, seed=None, seed2=None, name=None)` {#fractional_max_pool}
-
-Performs fractional max pooling on the input.
-
-Fractional max pooling is slightly different from regular max pooling. In
-regular max pooling, you downsize an input set by taking the maximum value of
-smaller N x N subsections of the set (often 2x2), and try to reduce the set by
-a factor of N, where N is an integer. Fractional max pooling, as you might
-expect from the word "fractional", means that the overall reduction ratio N
-does not have to be an integer.
-
-The sizes of the pooling regions are generated randomly but are fairly uniform.
-For example, let's look at the height dimension, and the constraints on the
-list of rows that will be pool boundaries.
-
-First we define the following:
-
-1. input_row_length : the number of rows from the input set
-2. output_row_length : the number of rows in the output, which will be smaller than the input
-3. alpha = input_row_length / output_row_length : our reduction ratio
-4. K = floor(alpha)
-5. row_pooling_sequence : this is the result list of pool boundary rows
-
-Then, row_pooling_sequence should satisfy:
-
-1. a[0] = 0 : the first value of the sequence is 0
-2. a[end] = input_row_length : the last value of the sequence is the input size
-3. K <= (a[i+1] - a[i]) <= K+1 : all intervals have size K or K+1
-4. length(row_pooling_sequence) = output_row_length+1
-
-For more details on fractional max pooling, see this paper:
-[Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`pooling_ratio`</b>: A list of `floats` that has length `>= 4`.
- Pooling ratio for each dimension of `value`. Currently only the row and col
- dimensions are supported, and each ratio must be >= 1.0. For example, a
- valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last
- elements must be 1.0 because we don't allow pooling on the batch and
- channels dimensions; 1.44 and 1.73 are the pooling ratios on the height and
- width dimensions, respectively.
-* <b>`pseudo_random`</b>: An optional `bool`. Defaults to `False`.
- When set to True, generates the pooling sequence in a pseudorandom
- fashion; otherwise, in a random fashion. See the paper [Benjamin Graham,
- Fractional Max-Pooling](http://arxiv.org/abs/1412.6071) for the difference
- between pseudorandom and random.
-* <b>`overlapping`</b>: An optional `bool`. Defaults to `False`.
- When set to True, the values at the boundary of adjacent pooling cells
- are used by both cells. For example:
-
- `index 0 1 2 3 4`
-
- `value 20 5 16 3 7`
-
- If the pooling sequence is [0, 2, 4], then the value 16 at index 2 will be
- used twice. The result would be [20, 16] for fractional max pooling.
-
-* <b>`deterministic`</b>: An optional `bool`. Defaults to `False`.
- When set to True, a fixed pooling region will be used when
- iterating over a FractionalMaxPool node in the computation graph. Mainly used
- in unit test to make FractionalMaxPool deterministic.
-* <b>`seed`</b>: An optional `int`. Defaults to `0`.
- If either seed or seed2 are set to be non-zero, the random number
- generator is seeded by the given seed. Otherwise, it is seeded by a
- random seed.
-* <b>`seed2`</b>: An optional `int`. Defaults to `0`.
- A second seed to avoid seed collision.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, row_pooling_sequence, col_pooling_sequence).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `value`. output tensor after fractional max pooling.
-* <b>`row_pooling_sequence`</b>: A `Tensor` of type `int64`. row pooling sequence, needed to calculate gradient.
-* <b>`col_pooling_sequence`</b>: A `Tensor` of type `int64`. column pooling sequence, needed to calculate gradient.
-
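-A minimal sketch (the pooling ratios are the example values from above; the
-input shape is illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [1, 32, 32, 3])
-y, rows, cols = tf.nn.fractional_max_pool(
-    x, pooling_ratio=[1.0, 1.44, 1.73, 1.0], pseudo_random=True)
-# Height shrinks by ~1.44x and width by ~1.73x; `rows` and `cols` record the
-# randomly drawn pooling boundaries, which the gradient needs.
-```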
-
-- - -
-
-### `tf.nn.pool(input, window_shape, pooling_type, padding, dilation_rate=None, strides=None, name=None, data_format=None)` {#pool}
-
-Performs an N-D pooling operation.
-
-In the case that `data_format` does not start with "NC", computes for
- 0 <= b < batch_size,
- 0 <= x[i] < output_spatial_shape[i],
- 0 <= c < num_channels:
-
- output[b, x[0], ..., x[N-1], c] =
- REDUCE_{z[0], ..., z[N-1]}
- input[b,
- x[0] * strides[0] - pad_before[0] + dilation_rate[0]*z[0],
- ...
- x[N-1]*strides[N-1] - pad_before[N-1] + dilation_rate[N-1]*z[N-1],
- c],
-
-where the reduction function REDUCE depends on the value of `pooling_type`,
-and pad_before is defined based on the value of `padding` as described in the
-[comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution).
-The reduction never includes out-of-bounds positions.
-
-In the case that `data_format` starts with `"NC"`, the `input` and output are
-simply transposed as follows:
-
- pool(input, data_format, **kwargs) =
- tf.transpose(pool(tf.transpose(input, [0] + range(2,N+2) + [1]),
- **kwargs),
- [0, N+1] + range(1, N+1))
-
-##### Args:
-
-
-* <b>`input`</b>: Tensor of rank N+2, of shape
- `[batch_size] + input_spatial_shape + [num_channels]` if data_format does
- not start with "NC" (default), or
- `[batch_size, num_channels] + input_spatial_shape` if data_format starts
- with "NC". Pooling happens over the spatial dimensions only.
-* <b>`window_shape`</b>: Sequence of N ints >= 1.
-* <b>`pooling_type`</b>: Specifies pooling operation, must be "AVG" or "MAX".
-* <b>`padding`</b>: The padding algorithm, must be "SAME" or "VALID".
- See the [comment here](https://www.tensorflow.org/api_docs/python/nn.html#convolution)
-* <b>`dilation_rate`</b>: Optional. Dilation rate. List of N ints >= 1.
- Defaults to [1]*N. If any value of dilation_rate is > 1, then all values
- of strides must be 1.
-* <b>`strides`</b>: Optional. Sequence of N ints >= 1. Defaults to [1]*N.
- If any value of strides is > 1, then all values of dilation_rate must be
- 1.
-* <b>`name`</b>: Optional. Name of the op.
-* <b>`data_format`</b>: A string or None. Specifies whether the channel dimension of
- the `input` and output is the last dimension (default, or if `data_format`
- does not start with "NC"), or the second dimension (if `data_format`
- starts with "NC"). For N=1, the valid values are "NWC" (default) and
- "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". For
- N=3, the valid value is "NDHWC".
-
-##### Returns:
-
- Tensor of rank N+2, of shape
- [batch_size] + output_spatial_shape + [num_channels]
-
- if data_format is None or does not start with "NC", or
-
- [batch_size, num_channels] + output_spatial_shape
-
- if data_format starts with "NC",
- where `output_spatial_shape` depends on the value of padding:
-
- If padding = "SAME":
- output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides[i])
- If padding = "VALID":
- output_spatial_shape[i] =
- ceil((input_spatial_shape[i] - (window_shape[i] - 1) * dilation_rate[i])
- / strides[i]).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if arguments are invalid.
-
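-For instance, a dilated average-pooling sketch (shapes illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [2, 32, 32, 3])
-y = tf.nn.pool(x, window_shape=[3, 3], pooling_type="AVG",
-               padding="SAME", dilation_rate=[2, 2])
-# Dilated 3x3 average pooling; with "SAME" padding and the default unit
-# strides, y keeps the input's spatial shape: [2, 32, 32, 3].
-```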
-
-- - -
-
-### `tf.nn.dilation2d(input, filter, strides, rates, padding, name=None)` {#dilation2d}
-
-Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.
-
-The `input` tensor has shape `[batch, in_height, in_width, depth]` and the
-`filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each
-input channel is processed independently of the others with its own structuring
-function. The `output` tensor has shape
-`[batch, out_height, out_width, depth]`. The spatial dimensions of the output
-tensor depend on the `padding` algorithm. We currently only support the default
-"NHWC" `data_format`.
-
-In detail, the grayscale morphological 2-D dilation is the max-sum correlation
-(for consistency with `conv2d`, we use unmirrored filters):
-
- output[b, y, x, c] =
- max_{dy, dx} input[b,
- strides[1] * y + rates[1] * dy,
- strides[2] * x + rates[2] * dx,
- c] +
- filter[dy, dx, c]
-
-Max-pooling is a special case when the filter has size equal to the pooling
-kernel size and contains all zeros.
-
-Note on duality: The dilation of `input` by the `filter` is equal to the
-negation of the erosion of `-input` by the reflected `filter`.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
- 4-D with shape `[batch, in_height, in_width, depth]`.
-* <b>`filter`</b>: A `Tensor`. Must have the same type as `input`.
- 3-D with shape `[filter_height, filter_width, depth]`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 4`.
- The stride of the sliding window for each dimension of the input
- tensor. Must be: `[1, stride_height, stride_width, 1]`.
-* <b>`rates`</b>: A list of `ints` that has length `>= 4`.
- The input stride for atrous morphological dilation. Must be:
- `[1, rate_height, rate_width, 1]`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
- 4-D with shape `[batch, out_height, out_width, depth]`.
-
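-As a sanity check of the max-pooling special case noted above, a sketch with
-an all-zero structuring function (shapes illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [1, 32, 32, 3])
-zero_filter = tf.zeros([2, 2, 3])             # all-zero structuring function
-y = tf.nn.dilation2d(x, zero_filter, strides=[1, 2, 2, 1],
-                     rates=[1, 1, 1, 1], padding="VALID")
-# max(input + 0) over each 2x2 window: identical to 2x2 max pooling.
-```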
-
-- - -
-
-### `tf.nn.erosion2d(value, kernel, strides, rates, padding, name=None)` {#erosion2d}
-
-Computes the grayscale erosion of 4-D `value` and 3-D `kernel` tensors.
-
-The `value` tensor has shape `[batch, in_height, in_width, depth]` and the
-`kernel` tensor has shape `[kernel_height, kernel_width, depth]`, i.e.,
-each input channel is processed independently of the others with its own
-structuring function. The `output` tensor has shape
-`[batch, out_height, out_width, depth]`. The spatial dimensions of the
-output tensor depend on the `padding` algorithm. We currently only support the
-default "NHWC" `data_format`.
-
-In detail, the grayscale morphological 2-D erosion is given by:
-
- output[b, y, x, c] =
- min_{dy, dx} value[b,
- strides[1] * y - rates[1] * dy,
- strides[2] * x - rates[2] * dx,
- c] -
- kernel[dy, dx, c]
-
-Duality: The erosion of `value` by the `kernel` is equal to the negation of
-the dilation of `-value` by the reflected `kernel`.
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`. 4-D with shape `[batch, in_height, in_width, depth]`.
-* <b>`kernel`</b>: A `Tensor`. Must have the same type as `value`.
- 3-D with shape `[kernel_height, kernel_width, depth]`.
-* <b>`strides`</b>: A list of `ints` that has length `>= 4`.
- 1-D of length 4. The stride of the sliding window for each dimension of
- the input tensor. Must be: `[1, stride_height, stride_width, 1]`.
-* <b>`rates`</b>: A list of `ints` that has length `>= 4`.
- 1-D of length 4. The input stride for atrous morphological dilation.
- Must be: `[1, rate_height, rate_width, 1]`.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional). If not specified "erosion2d"
- is used.
-
-##### Returns:
-
- A `Tensor`. Has the same type as `value`.
- 4-D with shape `[batch, out_height, out_width, depth]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `value` depth does not match `kernel`'s shape, or if
- padding is other than `'VALID'` or `'SAME'`.
-
-
-- - -
-
-### `tf.nn.with_space_to_batch(input, dilation_rate, padding, op, filter_shape=None, spatial_dims=None)` {#with_space_to_batch}
-
-Performs `op` on the space-to-batch representation of `input`.
-
-This has the effect of transforming sliding window operations into the
-corresponding "atrous" operation in which the input is sampled at the
-specified `dilation_rate`.
-
-In the special case that `dilation_rate` is uniformly 1, this simply returns:
-
- op(input, num_spatial_dims, padding)
-
-Otherwise, it returns:
-
-    batch_to_space_nd(
-        op(space_to_batch_nd(input, adjusted_dilation_rate, adjusted_paddings),
-           num_spatial_dims,
-           "VALID"),
-        adjusted_dilation_rate,
-        adjusted_crops)
-
-where:
-
- adjusted_dilation_rate is an int64 tensor of shape [max(spatial_dims)],
- adjusted_{paddings,crops} are int64 tensors of shape [max(spatial_dims), 2]
-
-defined as follows:
-
-We first define two int64 tensors `paddings` and `crops` of shape
-`[num_spatial_dims, 2]` based on the value of `padding` and the spatial
-dimensions of the `input`:
-
-If `padding = "VALID"`, then:
-
- paddings, crops = required_space_to_batch_paddings(
- input_shape[spatial_dims],
- dilation_rate)
-
-If `padding = "SAME"`, then:
-
- dilated_filter_shape =
- filter_shape + (filter_shape - 1) * (dilation_rate - 1)
-
- paddings, crops = required_space_to_batch_paddings(
- input_shape[spatial_dims],
- dilation_rate,
- [(dilated_filter_shape - 1) // 2,
- dilated_filter_shape - 1 - (dilated_filter_shape - 1) // 2])
-
-Because `space_to_batch_nd` and `batch_to_space_nd` assume that the spatial
-dimensions are contiguous starting at the second dimension, but the specified
-`spatial_dims` may not be, we must adjust `dilation_rate`, `paddings` and
-`crops` in order to be usable with these operations. For a given dimension,
-if the block size is 1, and both the starting and ending padding and crop
-amounts are 0, then space_to_batch_nd effectively leaves that dimension alone,
-which is what is needed for dimensions not part of `spatial_dims`.
-Furthermore, `space_to_batch_nd` and `batch_to_space_nd` handle this case
-efficiently for any number of leading and trailing dimensions.
-
-For 0 <= i < len(spatial_dims), we assign:
-
- adjusted_dilation_rate[spatial_dims[i] - 1] = dilation_rate[i]
- adjusted_paddings[spatial_dims[i] - 1, :] = paddings[i, :]
- adjusted_crops[spatial_dims[i] - 1, :] = crops[i, :]
-
-All unassigned values of `adjusted_dilation_rate` default to 1, while all
-unassigned values of `adjusted_paddings` and `adjusted_crops` default to 0.
-
-Note in the case that `dilation_rate` is not uniformly 1, specifying "VALID"
-padding is equivalent to specifying `padding = "SAME"` with a filter_shape of
-`[1]*N`.
-
-Advanced usage. Note the following optimization: A sequence of
-`with_space_to_batch` operations with identical (not uniformly 1)
-`dilation_rate` parameters and "VALID" padding
-
- net = with_space_to_batch(net, dilation_rate, "VALID", op_1)
- ...
- net = with_space_to_batch(net, dilation_rate, "VALID", op_k)
-
-can be combined into a single `with_space_to_batch` operation as follows:
-
- def combined_op(converted_input, num_spatial_dims, _):
- result = op_1(converted_input, num_spatial_dims, "VALID")
- ...
- result = op_k(result, num_spatial_dims, "VALID")
-
- net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
-
-This eliminates the overhead of `k-1` calls to `space_to_batch_nd` and
-`batch_to_space_nd`.
-
-Similarly, a sequence of `with_space_to_batch` operations with identical (not
-uniformly 1) `dilation_rate` parameters, "SAME" padding, and odd filter
-dimensions
-
- net = with_space_to_batch(net, dilation_rate, "SAME", op_1, filter_shape_1)
- ...
- net = with_space_to_batch(net, dilation_rate, "SAME", op_k, filter_shape_k)
-
-can be combined into a single `with_space_to_batch` operation as follows:
-
- def combined_op(converted_input, num_spatial_dims, _):
- result = op_1(converted_input, num_spatial_dims, "SAME")
- ...
- result = op_k(result, num_spatial_dims, "SAME")
-
- net = with_space_to_batch(net, dilation_rate, "VALID", combined_op)
-
-##### Args:
-
-
-* <b>`input`</b>: Tensor of rank > max(spatial_dims).
-* <b>`dilation_rate`</b>: int32 Tensor of *known* shape [num_spatial_dims].
-* <b>`padding`</b>: str constant equal to "VALID" or "SAME"
-* <b>`op`</b>: Function that maps (input, num_spatial_dims, padding) -> output
-* <b>`filter_shape`</b>: If padding = "SAME", specifies the shape of the convolution
- kernel/pooling window as an integer Tensor of shape [>=num_spatial_dims].
- If padding = "VALID", filter_shape is ignored and need not be specified.
-* <b>`spatial_dims`</b>: Monotonically increasing sequence of `num_spatial_dims`
- integers (which are >= 1) specifying the spatial dimensions of `input`
- and output. Defaults to: `range(1, num_spatial_dims+1)`.
-
-##### Returns:
-
- The output Tensor as described above.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `padding` is invalid or the arguments are incompatible.
-* <b>`ValueError`</b>: if `spatial_dims` are invalid.
-
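-A sketch that reproduces an atrous convolution through this wrapper (the
-names and shapes are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [1, 64, 64, 3])
-w = tf.get_variable("w", [3, 3, 3, 8])
-
-def conv_op(converted_input, num_spatial_dims, padding):
-    # `op` receives the space-to-batch transformed input.
-    return tf.nn.conv2d(converted_input, w, strides=[1, 1, 1, 1], padding=padding)
-
-y = tf.nn.with_space_to_batch(x, dilation_rate=[2, 2], padding="SAME",
-                              op=conv_op, filter_shape=[3, 3])
-# Should match tf.nn.atrous_conv2d(x, w, rate=2, padding="SAME").
-```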
-
-- - -
-
-### `tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)` {#l2_normalize}
-
-Normalizes along dimension `dim` using an L2 norm.
-
-For a 1-D tensor with `dim = 0`, computes
-
- output = x / sqrt(max(sum(x**2), epsilon))
-
-For `x` with more dimensions, independently normalizes each 1-D slice along
-dimension `dim`.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`.
-* <b>`dim`</b>: Dimension along which to normalize. A scalar or a vector of
- integers.
-* <b>`epsilon`</b>: A lower bound value for the norm. Will use `sqrt(epsilon)` as the
- divisor if `norm < sqrt(epsilon)`.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A `Tensor` with the same shape as `x`.
-
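-A small worked example:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[3.0, 4.0]])
-y = tf.nn.l2_normalize(x, dim=1)
-# Each row is divided by its L2 norm (here 5.0), so y is [[0.6, 0.8]].
-```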
-
-- - -
-
-### `tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None)` {#local_response_normalization}
-
-Local Response Normalization.
-
-The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last
-dimension), and each vector is normalized independently. Within a given vector,
-each component is divided by the weighted, squared sum of inputs within
-`depth_radius`. In detail,
-
- sqr_sum[a, b, c, d] =
- sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
- output = input / (bias + alpha * sqr_sum) ** beta
-
-For details, see [Krizhevsky et al., ImageNet classification with deep
-convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `float32`, `half`.
- 4-D.
-* <b>`depth_radius`</b>: An optional `int`. Defaults to `5`.
- 0-D. Half-width of the 1-D normalization window.
-* <b>`bias`</b>: An optional `float`. Defaults to `1`.
- An offset (usually positive to avoid dividing by 0).
-* <b>`alpha`</b>: An optional `float`. Defaults to `1`.
- A scale factor, usually positive.
-* <b>`beta`</b>: An optional `float`. Defaults to `0.5`. An exponent.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
-
-- - -
-
-### `tf.nn.sufficient_statistics(x, axes, shift=None, keep_dims=False, name=None)` {#sufficient_statistics}
-
-Calculate the sufficient statistics for the mean and variance of `x`.
-
-These sufficient statistics are computed using the one-pass algorithm on
-an input that is optionally shifted. See:
-https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Computing_shifted_data
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`.
-* <b>`axes`</b>: Array of ints. Axes along which to compute mean and variance.
-* <b>`shift`</b>: A `Tensor` containing the value by which to shift the data for
- numerical stability, or `None` if no shift is to be performed. A shift
- close to the true mean provides the most numerically stable results.
-* <b>`keep_dims`</b>: produce statistics with the same dimensionality as the input.
-* <b>`name`</b>: Name used to scope the operations that compute the sufficient stats.
-
-##### Returns:
-
- Four `Tensor` objects of the same type as `x`:
-
- * the count (number of elements to average over).
- * the (possibly shifted) sum of the elements in the array.
- * the (possibly shifted) sum of squares of the elements in the array.
- * the shift by which the mean must be corrected or None if `shift` is None.
-
-
-- - -
-
-### `tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift, name=None)` {#normalize_moments}
-
-Calculate the mean and variance based on the sufficient statistics.
-
-##### Args:
-
-
-* <b>`counts`</b>: A `Tensor` containing the total count of the data (one value).
-* <b>`mean_ss`</b>: A `Tensor` containing the mean sufficient statistics: the (possibly
- shifted) sum of the elements to average over.
-* <b>`variance_ss`</b>: A `Tensor` containing the variance sufficient statistics: the
- (possibly shifted) squared sum of the data to compute the variance over.
-* <b>`shift`</b>: A `Tensor` containing the value by which the data is shifted for
- numerical stability, or `None` if no shift was performed.
-* <b>`name`</b>: Name used to scope the operations that compute the moments.
-
-##### Returns:
-
- Two `Tensor` objects: `mean` and `variance`.
-
-
-- - -
-
-### `tf.nn.moments(x, axes, shift=None, name=None, keep_dims=False)` {#moments}
-
-Calculate the mean and variance of `x`.
-
-The mean and variance are calculated by aggregating the contents of `x`
-across `axes`. If `x` is 1-D and `axes = [0]` this is just the mean
-and variance of a vector.
-
-Note: for numerical stability, when `shift=None`, the true mean is computed
-and used as the shift.
-
-When using these moments for batch normalization (see
-`tf.nn.batch_normalization`):
-
- * for so-called "global normalization", used with convolutional filters with
- shape `[batch, height, width, depth]`, pass `axes=[0, 1, 2]`.
- * for simple batch normalization pass `axes=[0]` (batch only).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`.
-* <b>`axes`</b>: Array of ints. Axes along which to compute mean and
- variance.
-* <b>`shift`</b>: A `Tensor` containing the value by which to shift the data for
- numerical stability, or `None` in which case the true mean of the data is
- used as shift. A shift close to the true mean provides the most
- numerically stable results.
-* <b>`name`</b>: Name used to scope the operations that compute the moments.
-* <b>`keep_dims`</b>: produce moments with the same dimensionality as the input.
-
-##### Returns:
-
- Two `Tensor` objects: `mean` and `variance`.
-
-
-- - -
-
-### `tf.nn.weighted_moments(x, axes, frequency_weights, name=None, keep_dims=False)` {#weighted_moments}
-
-Returns the frequency-weighted mean and variance of `x`.
-
-##### Args:
-
-
-* <b>`x`</b>: A tensor.
-* <b>`axes`</b>: 1-d tensor of int32 values; these are the axes along which
- to compute mean and variance.
-* <b>`frequency_weights`</b>: A tensor of positive weights which can be
- broadcast with x.
-* <b>`name`</b>: Name used to scope the operation.
-* <b>`keep_dims`</b>: Produce moments with the same dimensionality as the input.
-
-##### Returns:
-
- Two tensors: `weighted_mean` and `weighted_variance`.
-
-
-- - -
-
-### `tf.nn.fused_batch_norm(x, scale, offset, mean=None, variance=None, epsilon=0.001, data_format='NHWC', is_training=True, name=None)` {#fused_batch_norm}
-
-Batch normalization.
-
-As described in http://arxiv.org/abs/1502.03167.
-
-##### Args:
-
-
-* <b>`x`</b>: Input `Tensor` of 4 dimensions.
-* <b>`scale`</b>: A `Tensor` of 1 dimension for scaling.
-* <b>`offset`</b>: A `Tensor` of 1 dimension for bias.
-* <b>`mean`</b>: A `Tensor` of 1 dimension for population mean used for inference.
-* <b>`variance`</b>: A `Tensor` of 1 dimension for population variance
- used for inference.
-* <b>`epsilon`</b>: A small float number added to the variance of x.
-* <b>`data_format`</b>: The data format for x. Either "NHWC" (default) or "NCHW".
-* <b>`is_training`</b>: A bool value to specify if the operation is used for
- training or inference.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
-
-* <b>`y`</b>: A 4D Tensor for the normalized, scaled, and offset x.
-* <b>`batch_mean`</b>: A 1D Tensor for the mean of x.
-* <b>`batch_var`</b>: A 1D Tensor for the variance of x.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If mean or variance is not None when is_training is True.
-
-
-- - -
-
-### `tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)` {#batch_normalization}
-
-Batch normalization.
-
-As described in http://arxiv.org/abs/1502.03167.
-Normalizes a tensor by `mean` and `variance`, and applies (optionally) a
-`scale` \\(\gamma\\) to it, as well as an `offset` \\(\beta\\):
-
-\\(\frac{\gamma(x-\mu)}{\sigma}+\beta\\)
-
-`mean`, `variance`, `offset` and `scale` are all expected to be of one of two
-shapes:
-
- * In all generality, they can have the same number of dimensions as the
- input `x`, with identical sizes as `x` for the dimensions that are not
- normalized over (the 'depth' dimension(s)), and dimension 1 for the
- others which are being normalized over.
- `mean` and `variance` in this case would typically be the outputs of
- `tf.nn.moments(..., keep_dims=True)` during training, or running averages
- thereof during inference.
- * In the common case where the 'depth' dimension is the last dimension in
- the input tensor `x`, they may be one dimensional tensors of the same
- size as the 'depth' dimension.
- This is the case for example for the common `[batch, depth]` layout of
- fully-connected layers, and `[batch, height, width, depth]` for
- convolutions.
- `mean` and `variance` in this case would typically be the outputs of
- `tf.nn.moments(..., keep_dims=False)` during training, or running averages
- thereof during inference.
-
-##### Args:
-
-
-* <b>`x`</b>: Input `Tensor` of arbitrary dimensionality.
-* <b>`mean`</b>: A mean `Tensor`.
-* <b>`variance`</b>: A variance `Tensor`.
-* <b>`offset`</b>: An offset `Tensor`, often denoted \\(\beta\\) in equations, or
- None. If present, will be added to the normalized tensor.
-* <b>`scale`</b>: A scale `Tensor`, often denoted \\(\gamma\\) in equations, or
- `None`. If present, the scale is applied to the normalized tensor.
-* <b>`variance_epsilon`</b>: A small float number to avoid dividing by 0.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- the normalized, scaled, offset tensor.
-
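-A sketch of the common "depth is the last dimension" case, pairing this op
-with `tf.nn.moments` (the shapes and variable names are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.placeholder(tf.float32, [None, 28, 28, 64])
-mean, variance = tf.nn.moments(x, axes=[0, 1, 2])   # per-channel statistics
-beta = tf.get_variable("beta", [64], initializer=tf.constant_initializer(0.0))
-gamma = tf.get_variable("gamma", [64], initializer=tf.constant_initializer(1.0))
-y = tf.nn.batch_normalization(x, mean, variance, beta, gamma,
-                              variance_epsilon=1e-3)
-```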
-
-- - -
-
-### `tf.nn.batch_norm_with_global_normalization(t, m, v, beta, gamma, variance_epsilon, scale_after_normalization, name=None)` {#batch_norm_with_global_normalization}
-
-Batch normalization.
-
-This op is deprecated. See `tf.nn.batch_normalization`.
-
-##### Args:
-
-
-* <b>`t`</b>: A 4D input Tensor.
-* <b>`m`</b>: A 1D mean Tensor with size matching the last dimension of t.
- This is the first output from tf.nn.moments,
- or a saved moving average thereof.
-* <b>`v`</b>: A 1D variance Tensor with size matching the last dimension of t.
- This is the second output from tf.nn.moments,
- or a saved moving average thereof.
-* <b>`beta`</b>: A 1D beta Tensor with size matching the last dimension of t.
- An offset to be added to the normalized tensor.
-* <b>`gamma`</b>: A 1D gamma Tensor with size matching the last dimension of t.
- If "scale_after_normalization" is true, this tensor will be multiplied
- with the normalized tensor.
-* <b>`variance_epsilon`</b>: A small float number to avoid dividing by 0.
-* <b>`scale_after_normalization`</b>: A bool indicating whether the resulting tensor
- needs to be multiplied with gamma.
-* <b>`name`</b>: A name for this operation (optional).
-
-##### Returns:
-
- A batch-normalized `t`.
-
-
-- - -
-
-### `tf.nn.l2_loss(t, name=None)` {#l2_loss}
-
-L2 Loss.
-
-Computes half the L2 norm of a tensor without the `sqrt`:
-
- output = sum(t ** 2) / 2
-
-##### Args:
-
-
-* <b>`t`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Typically 2-D, but may have any dimensions.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `t`. 0-D.
-
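-A typical use is as a weight-decay term (the variable and coefficient are
-illustrative):
-
-```python
-import tensorflow as tf
-
-w = tf.get_variable("w", [256, 128])
-weight_decay = 1e-4 * tf.nn.l2_loss(w)   # 1e-4 * sum(w ** 2) / 2
-```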
-
-- - -
-
-### `tf.nn.log_poisson_loss(targets, log_input, compute_full_loss=False, name=None)` {#log_poisson_loss}
-
-Computes log Poisson loss given `log_input`.
-
-Gives the log-likelihood loss between the prediction and the target under the
-assumption that the target has a Poisson distribution.
-Caveat: By default, this is not the exact loss, but the loss minus a
- constant term [log(z!)]. That has no effect for optimization, but
- does not play well with relative loss comparisons. To compute an
- approximation of the log factorial term, specify
- compute_full_loss=True to enable Stirling's Approximation.
-
-For brevity, let `c = log(x) = log_input`, `z = targets`. The log Poisson
-loss is
-
- -log(exp(-x) * (x^z) / z!)
- = -log(exp(-x) * (x^z)) + log(z!)
- ~ -log(exp(-x)) - log(x^z) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
- [ Note the second term is the Stirling's Approximation for log(z!).
- It is invariant to x and does not affect optimization, though
- important for correct relative loss comparisons. It is only
- computed when compute_full_loss == True. ]
- = x - z * log(x) [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
- = exp(c) - z * c [+ z * log(z) - z + 0.5 * log(2 * pi * z)]
-
-##### Args:
-
-
-* <b>`targets`</b>: A `Tensor` of the same type and shape as `log_input`.
-* <b>`log_input`</b>: A `Tensor` of type `float32` or `float64`.
-* <b>`compute_full_loss`</b>: whether to compute the full loss. If false, a constant
- term is dropped in favor of more efficient optimization.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same shape as `log_input` with the componentwise
- log Poisson losses.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `log_input` and `targets` do not have the same shape.
-
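-A minimal sketch (values illustrative):
-
-```python
-import tensorflow as tf
-
-targets = tf.constant([2.0, 0.0])     # z
-log_input = tf.constant([0.5, -1.0])  # c = log(x)
-loss = tf.nn.log_poisson_loss(targets, log_input)
-# Componentwise exp(c) - z * c, i.e. the Poisson negative log-likelihood
-# up to the dropped log(z!) term.
-```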
-
-- - -
-
-### `tf.nn.sigmoid_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, name=None)` {#sigmoid_cross_entropy_with_logits}
-
-Computes sigmoid cross entropy given `logits`.
-
-Measures the probability error in discrete classification tasks in which each
-class is independent and not mutually exclusive. For instance, one could
-perform multilabel classification where a picture can contain both an elephant
-and a dog at the same time.
-
-For brevity, let `x = logits`, `z = labels`. The logistic loss is
-
- z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
- = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
- = z * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
- = z * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
- = (1 - z) * x + log(1 + exp(-x))
- = x - x * z + log(1 + exp(-x))
-
-For x < 0, to avoid overflow in exp(-x), we reformulate the above
-
- x - x * z + log(1 + exp(-x))
- = log(exp(x)) - x * z + log(1 + exp(-x))
- = - x * z + log(1 + exp(x))
-
-Hence, to ensure stability and avoid overflow, the implementation uses this
-equivalent formulation
-
- max(x, 0) - x * z + log(1 + exp(-abs(x)))
-
-`logits` and `labels` must have the same type and shape.
-
-##### Args:
-
- _sentinel: Used to prevent positional parameters. Internal, do not use.
-
-* <b>`labels`</b>: A `Tensor` of the same type and shape as `logits`.
-* <b>`logits`</b>: A `Tensor` of type `float32` or `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same shape as `logits` with the componentwise
- logistic losses.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `logits` and `labels` do not have the same shape.
-
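-A sketch that also checks the loss against the stable closed form above
-(values illustrative):
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([[-1.0, 2.0]])
-labels = tf.constant([[0.0, 1.0]])
-loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
-# Same value as max(x, 0) - x * z + log(1 + exp(-abs(x))):
-manual = (tf.maximum(logits, 0.0) - logits * labels
-          + tf.log(1.0 + tf.exp(-tf.abs(logits))))
-```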
-
-- - -
-
-### `tf.nn.softmax(logits, dim=-1, name=None)` {#softmax}
-
-Computes softmax activations.
-
-For each batch `i` and class `j` we have
-
- softmax = exp(logits) / reduce_sum(exp(logits), dim)
-
-##### Args:
-
-
-* <b>`logits`</b>: A non-empty `Tensor`. Must be one of the following types: `half`,
- `float32`, `float64`.
-* <b>`dim`</b>: The dimension along which the softmax is performed. The default
- is -1, which indicates the last dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: if `logits` is empty or `dim` is beyond the last
- dimension of `logits`.
-
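-A small worked example:
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([[1.0, 2.0, 3.0]])
-p = tf.nn.softmax(logits)   # approx. [[0.090, 0.245, 0.665]]
-```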
-
-- - -
-
-### `tf.nn.log_softmax(logits, dim=-1, name=None)` {#log_softmax}
-
-Computes log softmax activations.
-
-For each batch `i` and class `j` we have
-
- logsoftmax = logits - log(reduce_sum(exp(logits), dim))
-
-##### Args:
-
-
-* <b>`logits`</b>: A non-empty `Tensor`. Must be one of the following types: `half`,
- `float32`, `float64`.
-* <b>`dim`</b>: The dimension along which the softmax is performed. The default
- is -1, which indicates the last dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `logits`. Same shape as `logits`.
-
-##### Raises:
-
-
-* <b>`InvalidArgumentError`</b>: if `logits` is empty or `dim` is beyond the last
- dimension of `logits`.
-
-
-- - -
-
-### `tf.nn.softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, dim=-1, name=None)` {#softmax_cross_entropy_with_logits}
-
-Computes softmax cross entropy between `logits` and `labels`.
-
-Measures the probability error in discrete classification tasks in which the
-classes are mutually exclusive (each entry is in exactly one class). For
-example, each CIFAR-10 image is labeled with one and only one label: an image
-can be a dog or a truck, but not both.
-
-**NOTE:** While the classes are mutually exclusive, their probabilities
-need not be. All that is required is that each row of `labels` is
-a valid probability distribution. If they are not, the computation of the
-gradient will be incorrect.
-
-If using exclusive `labels` (wherein one and only
-one class is true at a time), see `sparse_softmax_cross_entropy_with_logits`.
-
-**WARNING:** This op expects unscaled logits, since it performs a `softmax`
-on `logits` internally for efficiency. Do not call this op with the
-output of `softmax`, as it will produce incorrect results.
-
-`logits` and `labels` must have the same shape `[batch_size, num_classes]`
-and the same dtype (either `float16`, `float32`, or `float64`).
-
-**Note that to avoid confusion, it is required to pass only named arguments to
-this function.**
-
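-A minimal usage sketch with one-hot rows (note the required keyword arguments):
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([[2.0, 1.0, 0.1]])
-labels = tf.constant([[1.0, 0.0, 0.0]])  # each row a valid distribution
-loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
-# shape [1]; per row this equals -sum(labels * log_softmax(logits))
-```
-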
-##### Args:
-
- _sentinel: Used to prevent positional parameters. Internal, do not use.
-
-* <b>`labels`</b>: Each row `labels[i]` must be a valid probability distribution.
-* <b>`logits`</b>: Unscaled log probabilities.
-* <b>`dim`</b>: The class dimension. Defaulted to -1 which is the last dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the
- softmax cross entropy loss.
-
-
-- - -
-
-### `tf.nn.sparse_softmax_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None, name=None)` {#sparse_softmax_cross_entropy_with_logits}
-
-Computes sparse softmax cross entropy between `logits` and `labels`.
-
-Measures the probability error in discrete classification tasks in which the
-classes are mutually exclusive (each entry is in exactly one class). For
-example, each CIFAR-10 image is labeled with one and only one label: an image
-can be a dog or a truck, but not both.
-
-**NOTE:** For this operation, the probability of a given label is considered
-exclusive. That is, soft classes are not allowed, and the `labels` vector
-must provide a single specific index for the true class for each row of
-`logits` (each minibatch entry). For soft softmax classification with
-a probability distribution for each entry, see
-`softmax_cross_entropy_with_logits`.
-
-**WARNING:** This op expects unscaled logits, since it performs a softmax
-on `logits` internally for efficiency. Do not call this op with the
-output of `softmax`, as it will produce incorrect results.
-
-A common use case is to have logits of shape `[batch_size, num_classes]` and
-labels of shape `[batch_size]`. But higher dimensions are supported.
-
-**Note that to avoid confusion, it is required to pass only named arguments to
-this function.**
-
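-A minimal sketch using one integer class index per row (note the required
-keyword arguments):
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([[2.0, 1.0, 0.1],
-                      [0.5, 2.5, 0.3]])
-labels = tf.constant([0, 1])  # indices in [0, num_classes)
-loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
-                                                      logits=logits)
-# shape [2]
-```
-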
-##### Args:
-
- _sentinel: Used to prevent positional parameters. Internal, do not use.
-
-* <b>`labels`</b>: `Tensor` of shape `[d_0, d_1, ..., d_{r-1}]` (where `r` is rank of
- `labels` and result) and dtype `int32` or `int64`. Each entry in `labels`
- must be an index in `[0, num_classes)`. Other values will raise an
- exception when this op is run on CPU, and return `NaN` for corresponding
- loss and gradient rows on GPU.
-* <b>`logits`</b>: Unscaled log probabilities of shape
- `[d_0, d_1, ..., d_{r-1}, num_classes]` and dtype `float32` or `float64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same shape as `labels` and of the same type as `logits`
- with the softmax cross entropy loss.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If logits are scalars (need to have rank >= 1) or if the rank
-  of the labels is not equal to the rank of the logits minus one.
-
-
-- - -
-
-### `tf.nn.weighted_cross_entropy_with_logits(targets, logits, pos_weight, name=None)` {#weighted_cross_entropy_with_logits}
-
-Computes a weighted cross entropy.
-
-This is like `sigmoid_cross_entropy_with_logits()` except that `pos_weight`
-allows one to trade off recall and precision by up- or down-weighting the
-cost of a positive error relative to a negative error.
-
-The usual cross-entropy cost is defined as:
-
- targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
-
-The argument `pos_weight` is used as a multiplier for the positive targets:
-
- targets * -log(sigmoid(logits)) * pos_weight +
- (1 - targets) * -log(1 - sigmoid(logits))
-
-For brevity, let `x = logits`, `z = targets`, `q = pos_weight`.
-The loss is:
-
- qz * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
- = qz * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))
- = qz * log(1 + exp(-x)) + (1 - z) * (-log(exp(-x)) + log(1 + exp(-x)))
- = qz * log(1 + exp(-x)) + (1 - z) * (x + log(1 + exp(-x)))
- = (1 - z) * x + (qz + 1 - z) * log(1 + exp(-x))
- = (1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x))
-
-Setting `l = (1 + (q - 1) * z)`, to ensure stability and avoid overflow,
-the implementation uses
-
- (1 - z) * x + l * (log(1 + exp(-abs(x))) + max(-x, 0))
-
-`logits` and `targets` must have the same type and shape.
-
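-A minimal sketch; with `pos_weight > 1`, errors on positive targets cost more
-than errors on negative targets:
-
-```python
-import tensorflow as tf
-
-logits = tf.constant([1.0, -1.0])
-targets = tf.constant([1.0, 0.0])
-loss = tf.nn.weighted_cross_entropy_with_logits(
-    targets=targets, logits=logits, pos_weight=2.0)
-```
-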
-##### Args:
-
-
-* <b>`targets`</b>: A `Tensor` of the same type and shape as `logits`.
-* <b>`logits`</b>: A `Tensor` of type `float32` or `float64`.
-* <b>`pos_weight`</b>: A coefficient to use on the positive examples.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of the same shape as `logits` with the componentwise
- weighted logistic losses.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `logits` and `targets` do not have the same shape.
-
-
-- - -
-
-### `tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None, validate_indices=True, max_norm=None)` {#embedding_lookup}
-
-Looks up `ids` in a list of embedding tensors.
-
-This function is used to perform parallel lookups on the list of
-tensors in `params`. It is a generalization of
-[`tf.gather()`](../../api_docs/python/array_ops.md#gather), where `params` is
-interpreted as a partitioning of a large embedding tensor. `params` may be
-a `PartitionedVariable` as returned by using `tf.get_variable()` with a
-partitioner.
-
-If `len(params) > 1`, each element `id` of `ids` is partitioned between
-the elements of `params` according to the `partition_strategy`.
-In all strategies, if the id space does not evenly divide the number of
-partitions, each of the first `(max_id + 1) % len(params)` partitions will
-be assigned one more id.
-
-If `partition_strategy` is `"mod"`, we assign each id to partition
-`p = id % len(params)`. For instance,
-13 ids are split across 5 partitions as:
-`[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`
-
-If `partition_strategy` is `"div"`, we assign ids to partitions in a
-contiguous manner. In this case, 13 ids are split across 5 partitions as:
-`[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`
-
-The results of the lookup are concatenated into a dense
-tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
-
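-A minimal sketch of a lookup in a single, unpartitioned embedding tensor
-(hypothetical values):
-
-```python
-import tensorflow as tf
-
-params = tf.constant([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # 3 ids, dim 2
-ids = tf.constant([2, 0, 2])
-emb = tf.nn.embedding_lookup(params, ids)  # shape [3, 2]: rows 2, 0, 2
-```
-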
-##### Args:
-
-
-* <b>`params`</b>: A single tensor representing the complete embedding tensor,
- or a list of P tensors all of same shape except for the first dimension,
- representing sharded embedding tensors. Alternatively, a
- `PartitionedVariable`, created by partitioning along dimension 0. Each
- element must be appropriately sized for the given `partition_strategy`.
-* <b>`ids`</b>: A `Tensor` with type `int32` or `int64` containing the ids to be looked
- up in `params`.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
- is `"mod"`.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`validate_indices`</b>: Whether or not to validate gather indices.
-* <b>`max_norm`</b>: If not None, embedding values are l2-normalized to the value of
- max_norm.
-
-##### Returns:
-
- A `Tensor` with the same type as the tensors in `params`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `params` is empty.
-
-
-- - -
-
-### `tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights, partition_strategy='mod', name=None, combiner=None, max_norm=None)` {#embedding_lookup_sparse}
-
-Computes embeddings for the given ids and weights.
-
-This op assumes that there is at least one id for each row in the dense tensor
-represented by sp_ids (i.e. there are no rows with empty features), and that
-all the indices of sp_ids are in canonical row-major order.
-
-It also assumes that all id values lie in the range [0, p0), where p0
-is the sum of the size of params along dimension 0.
-
-##### Args:
-
-
-* <b>`params`</b>: A single tensor representing the complete embedding tensor,
- or a list of P tensors all of same shape except for the first dimension,
- representing sharded embedding tensors. Alternatively, a
- `PartitionedVariable`, created by partitioning along dimension 0. Each
- element must be appropriately sized for the given `partition_strategy`.
-* <b>`sp_ids`</b>: N x M SparseTensor of int64 ids (typically from FeatureValueToId),
- where N is typically batch size and M is arbitrary.
-* <b>`sp_weights`</b>: either a SparseTensor of float / double weights, or None to
- indicate all weights should be taken to be 1. If specified, sp_weights
- must have exactly the same shape and indices as sp_ids.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
- is `"mod"`. See `tf.nn.embedding_lookup` for more details.
-* <b>`name`</b>: Optional name for the op.
-* <b>`combiner`</b>: A string specifying the reduction op. Currently "mean", "sqrtn"
- and "sum" are supported.
- "sum" computes the weighted sum of the embedding results for each row.
- "mean" is the weighted sum divided by the total weight.
- "sqrtn" is the weighted sum divided by the square root of the sum of the
- squares of the weights.
-* <b>`max_norm`</b>: If not None, each embedding is normalized to have l2 norm equal
- to max_norm before combining.
-
-##### Returns:
-
- A dense tensor representing the combined embeddings for the
- sparse ids. For each row in the dense tensor represented by sp_ids, the op
- looks up the embeddings for all ids in that row, multiplies them by the
- corresponding weight, and combines these embeddings as specified.
-
- In other words, if
-
- shape(combined params) = [p0, p1, ..., pm]
-
- and
-
- shape(sp_ids) = shape(sp_weights) = [d0, d1, ..., dn]
-
- then
-
- shape(output) = [d0, d1, ..., dn-1, p1, ..., pm].
-
- For instance, if params is a 10x20 matrix, and sp_ids / sp_weights are
-
- [0, 0]: id 1, weight 2.0
- [0, 1]: id 3, weight 0.5
- [1, 0]: id 0, weight 1.0
- [2, 3]: id 1, weight 3.0
-
- with `combiner`="mean", then the output will be a 3x20 matrix where
-
- output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
- output[1, :] = params[0, :] * 1.0
- output[2, :] = params[1, :] * 3.0
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If sp_ids is not a SparseTensor, or if sp_weights is neither
- None nor SparseTensor.
-* <b>`ValueError`</b>: If combiner is not one of {"mean", "sqrtn", "sum"}.
-
-
-- - -
-
-### `tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)` {#dynamic_rnn}
-
-Creates a recurrent neural network specified by RNNCell `cell`.
-
-This function is functionally identical to the function `rnn` above, but
-performs fully dynamic unrolling of `inputs`.
-
-Unlike `rnn`, the input `inputs` is not a Python list of `Tensors`, one for
-each frame. Instead, `inputs` may be a single `Tensor` where
-the maximum time is either the first or second dimension (see the parameter
-`time_major`). Alternatively, it may be a (possibly nested) tuple of
-Tensors, each of them having matching batch and time dimensions.
-The corresponding output is either a single `Tensor` having the same number
-of time steps and batch size, or a (possibly nested) tuple of such tensors,
-matching the nested structure of `cell.output_size`.
-
-The parameter `sequence_length` is optional and is used to copy-through state
-and zero-out outputs when past a batch element's sequence length. So it's more
-for correctness than performance, unlike in rnn().
-
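-A minimal batch-major usage sketch (all sizes hypothetical):
-
-```python
-import tensorflow as tf
-
-batch_size, max_time, input_depth, num_units = 4, 10, 8, 16
-inputs = tf.placeholder(tf.float32, [batch_size, max_time, input_depth])
-cell = tf.contrib.rnn.LSTMCell(num_units)
-outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
-# outputs: [batch_size, max_time, num_units]; state: final LSTMStateTuple
-```
-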
-##### Args:
-
-
-* <b>`cell`</b>: An instance of RNNCell.
-* <b>`inputs`</b>: The RNN inputs.
-
- If `time_major == False` (default), this must be a `Tensor` of shape:
- `[batch_size, max_time, ...]`, or a nested tuple of such
- elements.
-
- If `time_major == True`, this must be a `Tensor` of shape:
- `[max_time, batch_size, ...]`, or a nested tuple of such
- elements.
-
- This may also be a (possibly nested) tuple of Tensors satisfying
- this property. The first two dimensions must match across all the inputs,
- but otherwise the ranks and other shape components may differ.
- In this case, input to `cell` at each time-step will replicate the
- structure of these tuples, except for the time dimension (from which the
- time is taken).
-
- The input to `cell` at each time step will be a `Tensor` or (possibly
- nested) tuple of Tensors each with dimensions `[batch_size, ...]`.
-
-* <b>`sequence_length`</b>: (optional) An int32/int64 vector sized `[batch_size]`.
-* <b>`initial_state`</b>: (optional) An initial state for the RNN.
- If `cell.state_size` is an integer, this must be
- a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
- If `cell.state_size` is a tuple, this should be a tuple of
- tensors having shapes `[batch_size, s] for s in cell.state_size`.
-* <b>`dtype`</b>: (optional) The data type for the initial state and expected output.
- Required if initial_state is not provided or RNN state has a heterogeneous
- dtype.
-* <b>`parallel_iterations`</b>: (Default: 32). The number of iterations to run in
- parallel. Those operations which do not have any temporal dependency
- and can be run in parallel, will be. This parameter trades off
- time for space. Values >> 1 use more memory but take less time,
- while smaller values use less memory but computations take longer.
-* <b>`swap_memory`</b>: Transparently swap the tensors produced in forward inference
- but needed for back prop from GPU to CPU. This allows training RNNs
- which would typically not fit on a single GPU, with very minimal (or no)
- performance penalty.
-* <b>`time_major`</b>: The shape format of the `inputs` and `outputs` Tensors.
- If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
- If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
- Using `time_major = True` is a bit more efficient because it avoids
- transposes at the beginning and end of the RNN calculation. However,
- most TensorFlow data is batch-major, so by default this function
- accepts input and emits output in batch-major form.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
-
-##### Returns:
-
- A pair (outputs, state) where:
-
-
-* <b>`outputs`</b>: The RNN output `Tensor`.
-
- If time_major == False (default), this will be a `Tensor` shaped:
- `[batch_size, max_time, cell.output_size]`.
-
- If time_major == True, this will be a `Tensor` shaped:
- `[max_time, batch_size, cell.output_size]`.
-
- Note, if `cell.output_size` is a (possibly nested) tuple of integers
- or `TensorShape` objects, then `outputs` will be a tuple having the
- same structure as `cell.output_size`, containing Tensors having shapes
- corresponding to the shape data in `cell.output_size`.
-
-
-* <b>`state`</b>: The final state. If `cell.state_size` is an int, this
- will be shaped `[batch_size, cell.state_size]`. If it is a
- `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
- If it is a (possibly nested) tuple of ints or `TensorShape`, this will
- be a tuple having the corresponding shapes.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell.
-* <b>`ValueError`</b>: If inputs is None or an empty list.
-
-
-- - -
-
-### `tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, inputs, sequence_length=None, initial_state_fw=None, initial_state_bw=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)` {#bidirectional_dynamic_rnn}
-
-Creates a dynamic version of bidirectional recurrent neural network.
-
-Similar to the unidirectional case above (rnn) but takes input and builds
-independent forward and backward RNNs. The input_size of forward and
-backward cell must match. The initial state for both directions is zero by
-default (but can be set optionally) and no intermediate states are ever
-returned -- the network is fully unrolled for the given (passed in)
-length(s) of the sequence(s) or completely unrolled if length(s) is not
-given.
-
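-A minimal sketch (hypothetical sizes) that concatenates the forward and
-backward outputs on the last axis:
-
-```python
-import tensorflow as tf
-
-inputs = tf.placeholder(tf.float32, [4, 10, 8])  # [batch, time, depth]
-cell_fw = tf.contrib.rnn.LSTMCell(16)
-cell_bw = tf.contrib.rnn.LSTMCell(16)
-(out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
-    cell_fw, cell_bw, inputs, dtype=tf.float32)
-outputs = tf.concat([out_fw, out_bw], 2)  # [4, 10, 32]
-```
-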
-##### Args:
-
-
-* <b>`cell_fw`</b>: An instance of RNNCell, to be used for forward direction.
-* <b>`cell_bw`</b>: An instance of RNNCell, to be used for backward direction.
-* <b>`inputs`</b>: The RNN inputs.
- If time_major == False (default), this must be a tensor of shape:
- `[batch_size, max_time, input_size]`.
- If time_major == True, this must be a tensor of shape:
- `[max_time, batch_size, input_size]`.
-* <b>`sequence_length`</b>: An int32/int64 vector, size `[batch_size]`,
- containing the actual lengths for each of the sequences.
-* <b>`initial_state_fw`</b>: (optional) An initial state for the forward RNN.
- This must be a tensor of appropriate type and shape
- `[batch_size, cell_fw.state_size]`.
- If `cell_fw.state_size` is a tuple, this should be a tuple of
- tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
-* <b>`initial_state_bw`</b>: (optional) Same as for `initial_state_fw`, but using
- the corresponding properties of `cell_bw`.
-* <b>`dtype`</b>: (optional) The data type for the initial states and expected output.
- Required if initial_states are not provided or RNN states have a
- heterogeneous dtype.
-* <b>`parallel_iterations`</b>: (Default: 32). The number of iterations to run in
- parallel. Those operations which do not have any temporal dependency
- and can be run in parallel, will be. This parameter trades off
- time for space. Values >> 1 use more memory but take less time,
- while smaller values use less memory but computations take longer.
-* <b>`swap_memory`</b>: Transparently swap the tensors produced in forward inference
- but needed for back prop from GPU to CPU. This allows training RNNs
- which would typically not fit on a single GPU, with very minimal (or no)
- performance penalty.
-* <b>`time_major`</b>: The shape format of the `inputs` and `outputs` Tensors.
- If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
- If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
- Using `time_major = True` is a bit more efficient because it avoids
- transposes at the beginning and end of the RNN calculation. However,
- most TensorFlow data is batch-major, so by default this function
- accepts input and emits output in batch-major form.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to
- "bidirectional_rnn"
-
-##### Returns:
-
- A tuple (outputs, output_states) where:
-
-* <b>`outputs`</b>: A tuple (output_fw, output_bw) containing the forward and
- the backward rnn output `Tensor`.
- If time_major == False (default),
- output_fw will be a `Tensor` shaped:
- `[batch_size, max_time, cell_fw.output_size]`
- and output_bw will be a `Tensor` shaped:
- `[batch_size, max_time, cell_bw.output_size]`.
- If time_major == True,
- output_fw will be a `Tensor` shaped:
- `[max_time, batch_size, cell_fw.output_size]`
- and output_bw will be a `Tensor` shaped:
- `[max_time, batch_size, cell_bw.output_size]`.
- It returns a tuple instead of a single concatenated `Tensor`, unlike
- in the `bidirectional_rnn`. If the concatenated one is preferred,
- the forward and backward outputs can be concatenated as
- `tf.concat(outputs, 2)`.
-* <b>`output_states`</b>: A tuple (output_state_fw, output_state_bw) containing
- the forward and the backward final states of bidirectional rnn.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
-
-
-- - -
-
-### `tf.nn.raw_rnn(cell, loop_fn, parallel_iterations=None, swap_memory=False, scope=None)` {#raw_rnn}
-
-Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.
-
-**NOTE: This method is still in testing, and the API may change.**
-
-This function is a more primitive version of `dynamic_rnn` that provides
-more direct access to the inputs each iteration. It also provides more
-control over when to start and finish reading the sequence, and
-what to emit for the output.
-
-For example, it can be used to implement the dynamic decoder of a seq2seq
-model.
-
-Instead of working with `Tensor` objects, most operations work with
-`TensorArray` objects directly.
-
-The operation of `raw_rnn`, in pseudo-code, is basically the following:
-
-```python
-time = tf.constant(0, dtype=tf.int32)
-(finished, next_input, initial_state, _, loop_state) = loop_fn(
- time=time, cell_output=None, cell_state=None, loop_state=None)
-emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype)
-state = initial_state
-while not all(finished):
- (output, cell_state) = cell(next_input, state)
- (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
- time=time + 1, cell_output=output, cell_state=cell_state,
- loop_state=loop_state)
- # Emit zeros and copy forward state for minibatch entries that are finished.
- state = tf.where(finished, state, next_state)
- emit = tf.where(finished, tf.zeros_like(emit), emit)
- emit_ta = emit_ta.write(time, emit)
- # If any new minibatch entries are marked as finished, mark these.
- finished = tf.logical_or(finished, next_finished)
- time += 1
-return (emit_ta, state, loop_state)
-```
-
-with the additional properties that output and state may be (possibly nested)
-tuples, as determined by `cell.output_size` and `cell.state_size`, and
-as a result the final `state` and `emit_ta` may themselves be tuples.
-
-A simple implementation of `dynamic_rnn` via `raw_rnn` looks like this:
-
-```python
-inputs = tf.placeholder(shape=(max_time, batch_size, input_depth),
- dtype=tf.float32)
-sequence_length = tf.placeholder(shape=(batch_size,), dtype=tf.int32)
-inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time)
-inputs_ta = inputs_ta.unstack(inputs)
-
-cell = tf.contrib.rnn.LSTMCell(num_units)
-
-def loop_fn(time, cell_output, cell_state, loop_state):
- emit_output = cell_output # == None for time == 0
- if cell_output is None: # time == 0
- next_cell_state = cell.zero_state(batch_size, tf.float32)
- else:
- next_cell_state = cell_state
- elements_finished = (time >= sequence_length)
- finished = tf.reduce_all(elements_finished)
- next_input = tf.cond(
- finished,
- lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32),
- lambda: inputs_ta.read(time))
- next_loop_state = None
- return (elements_finished, next_input, next_cell_state,
- emit_output, next_loop_state)
-
-outputs_ta, final_state, _ = raw_rnn(cell, loop_fn)
-outputs = outputs_ta.stack()
-```
-
-##### Args:
-
-
-* <b>`cell`</b>: An instance of RNNCell.
-* <b>`loop_fn`</b>: A callable that takes inputs
- `(time, cell_output, cell_state, loop_state)`
- and returns the tuple
- `(finished, next_input, next_cell_state, emit_output, next_loop_state)`.
- Here `time` is an int32 scalar `Tensor`, `cell_output` is a
- `Tensor` or (possibly nested) tuple of tensors as determined by
- `cell.output_size`, and `cell_state` is a `Tensor`
- or (possibly nested) tuple of tensors, as determined by the `loop_fn`
- on its first call (and should match `cell.state_size`).
- The outputs are: `finished`, a boolean `Tensor` of
- shape `[batch_size]`, `next_input`: the next input to feed to `cell`,
- `next_cell_state`: the next state to feed to `cell`,
- and `emit_output`: the output to store for this iteration.
-
- Note that `emit_output` should be a `Tensor` or (possibly nested)
- tuple of tensors with shapes and structure matching `cell.output_size`
- and `cell_output` above. The parameter `cell_state` and output
- `next_cell_state` may be either a single or (possibly nested) tuple
- of tensors. The parameter `loop_state` and
- output `next_loop_state` may be either a single or (possibly nested) tuple
- of `Tensor` and `TensorArray` objects. This last parameter
- may be ignored by `loop_fn` and the return value may be `None`. If it
- is not `None`, then the `loop_state` will be propagated through the RNN
- loop, for use purely by `loop_fn` to keep track of its own state.
- The `next_loop_state` parameter returned may be `None`.
-
- The first call to `loop_fn` will be `time = 0`, `cell_output = None`,
- `cell_state = None`, and `loop_state = None`. For this call:
- The `next_cell_state` value should be the value with which to initialize
- the cell's state. It may be a final state from a previous RNN or it
- may be the output of `cell.zero_state()`. It should be a
- (possibly nested) tuple structure of tensors.
- If `cell.state_size` is an integer, this must be
- a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
- If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of
- appropriate type and shape `[batch_size] + cell.state_size`.
- If `cell.state_size` is a (possibly nested) tuple of ints or
- `TensorShape`, this will be a tuple having the corresponding shapes.
- The `emit_output` value may be either `None` or a (possibly nested)
- tuple structure of tensors, e.g.,
- `(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`.
- If this first `emit_output` return value is `None`,
- then the `emit_ta` result of `raw_rnn` will have the same structure and
- dtypes as `cell.output_size`. Otherwise `emit_ta` will have the same
- structure, shapes (prepended with a `batch_size` dimension), and dtypes
- as `emit_output`. The actual values returned for `emit_output` at this
- initializing call are ignored. Note, this emit structure must be
- consistent across all time steps.
-
-
-* <b>`parallel_iterations`</b>: (Default: 32). The number of iterations to run in
- parallel. Those operations which do not have any temporal dependency
- and can be run in parallel, will be. This parameter trades off
- time for space. Values >> 1 use more memory but take less time,
- while smaller values use less memory but computations take longer.
-* <b>`swap_memory`</b>: Transparently swap the tensors produced in forward inference
- but needed for back prop from GPU to CPU. This allows training RNNs
- which would typically not fit on a single GPU, with very minimal (or no)
- performance penalty.
-* <b>`scope`</b>: VariableScope for the created subgraph; defaults to "rnn".
-
-##### Returns:
-
- A tuple `(emit_ta, final_state, final_loop_state)` where:
-
- `emit_ta`: The RNN output `TensorArray`.
- If `loop_fn` returns a (possibly nested) set of Tensors for
- `emit_output` during initialization, (inputs `time = 0`,
- `cell_output = None`, and `loop_state = None`), then `emit_ta` will
- have the same structure, dtypes, and shapes as `emit_output` instead.
- If `loop_fn` returns `emit_output = None` during this call,
- the structure of `cell.output_size` is used:
- If `cell.output_size` is a (possibly nested) tuple of integers
- or `TensorShape` objects, then `emit_ta` will be a tuple having the
- same structure as `cell.output_size`, containing TensorArrays whose
- elements' shapes correspond to the shape data in `cell.output_size`.
-
- `final_state`: The final cell state. If `cell.state_size` is an int, this
- will be shaped `[batch_size, cell.state_size]`. If it is a
- `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
- If it is a (possibly nested) tuple of ints or `TensorShape`, this will
- be a tuple having the corresponding shapes.
-
- `final_loop_state`: The final loop state as returned by `loop_fn`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cell` is not an instance of RNNCell, or `loop_fn` is not
- a `callable`.
-
-
-- - -
-
-### `tf.nn.ctc_loss(labels, inputs, sequence_length, preprocess_collapse_repeated=False, ctc_merge_repeated=True, time_major=True)` {#ctc_loss}
-
-Computes the CTC (Connectionist Temporal Classification) Loss.
-
-This op implements the CTC loss as presented in the article:
-
-A. Graves, S. Fernandez, F. Gomez, J. Schmidhuber.
-Connectionist Temporal Classification: Labelling Unsegmented Sequence Data
-with Recurrent Neural Networks. ICML 2006, Pittsburgh, USA, pp. 369-376.
-
-http://www.cs.toronto.edu/~graves/icml_2006.pdf
-
-Input requirements:
-
-```
-sequence_length(b) <= time for all b
-
-max(labels.indices(labels.indices[:, 1] == b, 2))
- <= sequence_length(b) for all b.
-```
-
-Notes:
-
-This class performs the softmax operation for you, so inputs should
-be e.g. linear projections of outputs by an LSTM.
-
-The `inputs` Tensor's innermost dimension size, `num_classes`, represents
-`num_labels + 1` classes, where num_labels is the number of true labels, and
-the largest value `(num_classes - 1)` is reserved for the blank label.
-
-For example, for a vocabulary containing 3 labels `[a, b, c]`,
-`num_classes = 4` and the labels indexing is `{a: 0, b: 1, c: 2, blank: 3}`.
-
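-A minimal sketch for that 3-label vocabulary (all sizes hypothetical):
-
-```python
-import tensorflow as tf
-
-# One sequence whose true labeling is "a b" -> ids [0, 1]; id 3 is the blank.
-labels = tf.SparseTensor(indices=[[0, 0], [0, 1]],
-                         values=tf.constant([0, 1], dtype=tf.int32),
-                         dense_shape=[1, 2])
-inputs = tf.random_normal([5, 1, 4])  # [max_time, batch_size, num_classes]
-sequence_length = tf.constant([5], dtype=tf.int32)
-loss = tf.nn.ctc_loss(labels, inputs, sequence_length)  # shape [1]
-```
-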
-Regarding the arguments `preprocess_collapse_repeated` and
-`ctc_merge_repeated`:
-
-If `preprocess_collapse_repeated` is True, then a preprocessing step runs
-before loss calculation, wherein repeated labels passed to the loss
-are merged into single labels. This is useful if the training labels come
-from, e.g., forced alignments and therefore have unnecessary repetitions.
-
-If `ctc_merge_repeated` is set False, then deep within the CTC calculation,
-repeated non-blank labels will not be merged and are interpreted
-as individual labels. This is a simplified (non-standard) version of CTC.
-
-Here is a table of the (roughly) expected first order behavior:
-
-* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`
-
- Classical CTC behavior: Outputs true repeated classes with blanks in
- between, and can also output repeated classes with no blanks in
- between that need to be collapsed by the decoder.
-
-* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`
-
- Never learns to output repeated classes, as they are collapsed
- in the input labels before training.
-
-* `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`
-
- Outputs repeated classes with blanks in between, but generally does not
- require the decoder to collapse/merge repeated classes.
-
-* `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`
-
- Untested. Very likely will not learn to output repeated classes.
-
-##### Args:
-
-
-* <b>`labels`</b>: An `int32` `SparseTensor`.
- `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores
- the id for (batch b, time t).
- `labels.values[i]` must take on values in `[0, num_labels)`.
- See `core/ops/ctc_ops.cc` for more details.
-* <b>`inputs`</b>: 3-D `float` `Tensor`.
- If time_major == False, this will be a `Tensor` shaped:
- `[batch_size x max_time x num_classes]`.
- If time_major == True (default), this will be a `Tensor` shaped:
- `[max_time x batch_size x num_classes]`.
- The logits.
-* <b>`sequence_length`</b>: 1-D `int32` vector, size `[batch_size]`.
- The sequence lengths.
-* <b>`preprocess_collapse_repeated`</b>: Boolean. Default: False.
- If True, repeated labels are collapsed prior to the CTC calculation.
-* <b>`ctc_merge_repeated`</b>: Boolean. Default: True.
-* <b>`time_major`</b>: The shape format of the `inputs` Tensors.
- If True, these `Tensors` must be shaped `[max_time, batch_size, num_classes]`.
- If False, these `Tensors` must be shaped `[batch_size, max_time, num_classes]`.
-  Using `time_major = True` (default) is a bit more efficient because it avoids
-  transposes at the beginning of the ctc_loss calculation. However, most
-  TensorFlow data is batch-major, so this function also accepts inputs in
-  batch-major form.
-
-##### Returns:
-
- A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if labels is not a `SparseTensor`.
-
-
-- - -
-
-### `tf.nn.ctc_greedy_decoder(inputs, sequence_length, merge_repeated=True)` {#ctc_greedy_decoder}
-
-Performs greedy decoding on the logits given in input (best path).
-
-Note: Regardless of the value of merge_repeated, if the maximum index of a
-given time and batch corresponds to the blank index `(num_classes - 1)`, no
-new element is emitted.
-
-If `merge_repeated` is `True`, merge repeated classes in output.
-This means that if consecutive logits' maximum indices are the same,
-only the first of these is emitted. The sequence `A B B * B * B` (where '*'
-is the blank label) becomes
-
- * `A B B B` if `merge_repeated=True`.
- * `A B B B B` if `merge_repeated=False`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: 3-D `float` `Tensor` sized
- `[max_time x batch_size x num_classes]`. The logits.
-* <b>`sequence_length`</b>: 1-D `int32` vector containing sequence lengths,
- having size `[batch_size]`.
-* <b>`merge_repeated`</b>: Boolean. Default: True.
-
-##### Returns:
-
-  A tuple `(decoded, log_probability)` where
-
-* <b>`decoded`</b>: A single-element list. `decoded[0]`
-  is a `SparseTensor` containing the decoded outputs s.t.:
- `decoded.indices`: Indices matrix `(total_decoded_outputs x 2)`.
- The rows store: `[batch, time]`.
- `decoded.values`: Values vector, size `(total_decoded_outputs)`.
- The vector stores the decoded classes.
- `decoded.shape`: Shape vector, size `(2)`.
- The shape values are: `[batch_size, max_decoded_length]`
-* <b>`log_probability`</b>: A `float` matrix `(batch_size x 1)` containing sequence
- log-probabilities.
-
-
-- - -
-
-### `tf.nn.ctc_beam_search_decoder(inputs, sequence_length, beam_width=100, top_paths=1, merge_repeated=True)` {#ctc_beam_search_decoder}
-
-Performs beam search decoding on the logits given in input.
-
-**NOTE:** The `ctc_greedy_decoder` is a special case of the
-`ctc_beam_search_decoder` with `top_paths=1` and `beam_width=1` (but
-that decoder is faster for this special case).
-
-If `merge_repeated` is `True`, merge repeated classes in the output beams.
-This means that if consecutive entries in a beam are the same,
-only the first of these is emitted. That is, when the top path
-is `A B B B B`, the return value is:
-
- * `A B` if `merge_repeated = True`.
- * `A B B B B` if `merge_repeated = False`.
-
-##### Args:
-
-
-* <b>`inputs`</b>: 3-D `float` `Tensor`, size
- `[max_time x batch_size x num_classes]`. The logits.
-* <b>`sequence_length`</b>: 1-D `int32` vector containing sequence lengths,
- having size `[batch_size]`.
-* <b>`beam_width`</b>: An int scalar >= 0 (beam search beam width).
-* <b>`top_paths`</b>: An int scalar >= 0, <= beam_width (controls output size).
-* <b>`merge_repeated`</b>: Boolean. Default: True.
-
-##### Returns:
-
-  A tuple `(decoded, log_probability)` where
-
-* <b>`decoded`</b>: A list of length top_paths, where `decoded[j]`
- is a `SparseTensor` containing the decoded outputs:
- `decoded[j].indices`: Indices matrix `(total_decoded_outputs[j] x 2)`
- The rows store: [batch, time].
- `decoded[j].values`: Values vector, size `(total_decoded_outputs[j])`.
- The vector stores the decoded classes for beam j.
- `decoded[j].shape`: Shape vector, size `(2)`.
- The shape values are: `[batch_size, max_decoded_length[j]]`.
-* <b>`log_probability`</b>: A `float` matrix `(batch_size x top_paths)` containing
- sequence log-probabilities.
-
-
-- - -
-
-### `tf.nn.top_k(input, k=1, sorted=True, name=None)` {#top_k}
-
-Finds values and indices of the `k` largest entries for the last dimension.
-
-If the input is a vector (rank-1), finds the `k` largest entries in the vector
-and outputs their values and indices as vectors. Thus `values[j]` is the
-`j`-th largest entry in `input`, and its index is `indices[j]`.
-
-For matrices (resp. higher rank input), computes the top `k` entries in each
-row (resp. vector along the last dimension). Thus,
-
- values.shape = indices.shape = input.shape[:-1] + [k]
-
-If two elements are equal, the lower-index element appears first.
-
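-For example, a minimal sketch:
-
-```python
-import tensorflow as tf
-
-values, indices = tf.nn.top_k(tf.constant([1.0, 3.0, 2.0]), k=2)
-# values -> [3.0, 2.0], indices -> [1, 2]
-```
-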
-##### Args:
-
-
-* <b>`input`</b>: 1-D or higher `Tensor` with last dimension at least `k`.
-* <b>`k`</b>: 0-D `int32` `Tensor`. Number of top elements to look for along the last
- dimension (along each row for matrices).
-* <b>`sorted`</b>: If true the resulting `k` elements will be sorted by the values in
- descending order.
-* <b>`name`</b>: Optional name for the operation.
-
-##### Returns:
-
-
-* <b>`values`</b>: The `k` largest elements along each last dimensional slice.
-* <b>`indices`</b>: The indices of `values` within the last dimension of `input`.
-
-
-- - -
-
-### `tf.nn.in_top_k(predictions, targets, k, name=None)` {#in_top_k}
-
-Says whether the targets are in the top `K` predictions.
-
-This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the
-prediction for the target class is among the top `k` predictions among
-all predictions for example `i`. Note that the behavior of `InTopK` differs
-from the `TopK` op in its handling of ties; if multiple classes have the
-same prediction value and straddle the top-`k` boundary, all of those
-classes are considered to be in the top `k`.
-
-More formally, let
-
- \\(predictions_i\\) be the predictions for all classes for example `i`,
- \\(targets_i\\) be the target class for example `i`,
- \\(out_i\\) be the output for example `i`,
-
-$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$
-
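-For example, a minimal sketch:
-
-```python
-import tensorflow as tf
-
-predictions = tf.constant([[0.1, 0.8, 0.1],
-                           [0.3, 0.3, 0.4]])
-targets = tf.constant([1, 0])
-hits = tf.nn.in_top_k(predictions, targets, k=1)  # [True, False]
-```
-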
-##### Args:
-
-
-* <b>`predictions`</b>: A `Tensor` of type `float32`.
- A `batch_size` x `classes` tensor.
-* <b>`targets`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A `batch_size` vector of class ids.
-* <b>`k`</b>: An `int`. Number of top elements to look at for computing precision.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`. Computed Precision at `k` as a `bool Tensor`.
-
-
-- - -
-
-### `tf.nn.nce_loss(weights, biases, labels, inputs, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=False, partition_strategy='mod', name='nce_loss')` {#nce_loss}
-
-Computes and returns the noise-contrastive estimation training loss.
-
-See [Noise-contrastive estimation: A new estimation principle for
-unnormalized statistical
-models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf).
-Also see our [Candidate Sampling Algorithms
-Reference](../../extras/candidate_sampling.pdf)
-
-Note: By default this uses a log-uniform (Zipfian) distribution for sampling,
-so your labels must be sorted in order of decreasing frequency to achieve
-good results. For more details, see
-[log_uniform_candidate_sampler](#log_uniform_candidate_sampler).
-
-Note: In the case where `num_true` > 1, we assign to each target class
-the target probability 1 / `num_true` so that the target probabilities
-sum to 1 per-example.
-
-Note: It would be useful to allow a variable number of target classes per
-example. We hope to provide this functionality in a future release.
-For now, if you have a variable number of target classes, you can pad them
-out to a constant number by either repeating them or by padding
-with an otherwise unused class.
-
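-A minimal training-time sketch (all sizes hypothetical); at inference time,
-full logits can be computed as `tf.matmul(inputs, tf.transpose(weights)) + biases`:
-
-```python
-import tensorflow as tf
-
-num_classes, dim, batch_size = 10000, 128, 32
-weights = tf.get_variable("nce_weights", [num_classes, dim])
-biases = tf.get_variable("nce_biases", [num_classes])
-labels = tf.placeholder(tf.int64, [batch_size, 1])  # num_true = 1
-inputs = tf.placeholder(tf.float32, [batch_size, dim])
-loss = tf.reduce_mean(tf.nn.nce_loss(
-    weights=weights, biases=biases, labels=labels, inputs=inputs,
-    num_sampled=64, num_classes=num_classes))
-```
-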
-##### Args:
-
-
-* <b>`weights`</b>: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`
- objects whose concatenation along dimension 0 has shape
- [num_classes, dim]. The (possibly-partitioned) class embeddings.
-* <b>`biases`</b>: A `Tensor` of shape `[num_classes]`. The class biases.
-* <b>`labels`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`inputs`</b>: A `Tensor` of shape `[batch_size, dim]`. The forward
- activations of the input network.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`num_classes`</b>: An `int`. The number of possible classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`sampled_values`</b>: a tuple of (`sampled_candidates`, `true_expected_count`,
- `sampled_expected_count`) returned by a `*_candidate_sampler` function.
- (if None, we default to `log_uniform_candidate_sampler`)
-* <b>`remove_accidental_hits`</b>: A `bool`. Whether to remove "accidental hits"
- where a sampled class equals one of the target classes. If set to
- `True`, this is a "Sampled Logistic" loss instead of NCE, and we are
- learning to generate log-odds instead of log probabilities. See
-  our [Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf).
- Default is False.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported.
- Default is `"mod"`. See `tf.nn.embedding_lookup` for more details.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `batch_size` 1-D tensor of per-example NCE losses.
-
-
-- - -
-
-### `tf.nn.sampled_softmax_loss(weights, biases, labels, inputs, num_sampled, num_classes, num_true=1, sampled_values=None, remove_accidental_hits=True, partition_strategy='mod', name='sampled_softmax_loss')` {#sampled_softmax_loss}
-
-Computes and returns the sampled softmax training loss.
-
-This is a faster way to train a softmax classifier over a huge number of
-classes.
-
-This operation is for training only. It is generally an underestimate of
-the full softmax loss.
-
-At inference time, you can compute full softmax probabilities with the
-expression `tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases)`.
-
-See our [Candidate Sampling Algorithms Reference](../../extras/candidate_sampling.pdf).
-
-Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007)
-([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math.
-
-##### Args:
-
-
-* <b>`weights`</b>: A `Tensor` of shape `[num_classes, dim]`, or a list of `Tensor`
- objects whose concatenation along dimension 0 has shape
- [num_classes, dim]. The (possibly-sharded) class embeddings.
-* <b>`biases`</b>: A `Tensor` of shape `[num_classes]`. The class biases.
-* <b>`labels`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes. Note that this format differs from
- the `labels` argument of `nn.softmax_cross_entropy_with_logits`.
-* <b>`inputs`</b>: A `Tensor` of shape `[batch_size, dim]`. The forward
- activations of the input network.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`num_classes`</b>: An `int`. The number of possible classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`sampled_values`</b>: a tuple of (`sampled_candidates`, `true_expected_count`,
- `sampled_expected_count`) returned by a `*_candidate_sampler` function.
- (if None, we default to `log_uniform_candidate_sampler`)
-* <b>`remove_accidental_hits`</b>: A `bool`. whether to remove "accidental hits"
- where a sampled class equals one of the target classes. Default is
- True.
-* <b>`partition_strategy`</b>: A string specifying the partitioning strategy, relevant
- if `len(weights) > 1`. Currently `"div"` and `"mod"` are supported.
- Default is `"mod"`. See `tf.nn.embedding_lookup` for more details.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `batch_size` 1-D tensor of per-example sampled softmax losses.
-
-
-- - -
-
-### `tf.nn.uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#uniform_candidate_sampler}
-
-Samples a set of classes using a uniform base distribution.
-
-This operation randomly samples a tensor of sampled classes
-(`sampled_candidates`) from the range of integers `[0, range_max)`.
-
-The elements of `sampled_candidates` are drawn without replacement
-(if `unique=True`) or with replacement (if `unique=False`) from
-the base distribution.
-
-The base distribution for this operation is the uniform distribution
-over the range of integers `[0, range_max)`.
-
-In addition, this operation returns tensors `true_expected_count`
-and `sampled_expected_count` representing the number of times each
-of the target classes (`true_classes`) and the sampled
-classes (`sampled_candidates`) is expected to occur in an average
-tensor of sampled classes. These values correspond to `Q(y|x)`
-defined in [this
-document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-If `unique=True`, then these are post-rejection probabilities and we
-compute them approximately.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`unique`</b>: A `bool`. Determines whether all sampled classes in a batch are
- unique.
-* <b>`range_max`</b>: An `int`. The number of possible classes.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled classes.
-* <b>`true_expected_count`</b>: A tensor of type `float`. Same shape as
- `true_classes`. The expected counts under the sampling distribution
- of each of `true_classes`.
-* <b>`sampled_expected_count`</b>: A tensor of type `float`. Same shape as
- `sampled_candidates`. The expected counts under the sampling distribution
- of each of `sampled_candidates`.
-
-
-- - -
-
-### `tf.nn.log_uniform_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#log_uniform_candidate_sampler}
-
-Samples a set of classes using a log-uniform (Zipfian) base distribution.
-
-This operation randomly samples a tensor of sampled classes
-(`sampled_candidates`) from the range of integers `[0, range_max)`.
-
-The elements of `sampled_candidates` are drawn without replacement
-(if `unique=True`) or with replacement (if `unique=False`) from
-the base distribution.
-
-The base distribution for this operation is an approximately log-uniform
-or Zipfian distribution:
-
-`P(class) = (log(class + 2) - log(class + 1)) / log(range_max + 1)`
-
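-Because the sum telescopes, these probabilities form a valid distribution over
-`[0, range_max)`; a quick NumPy check (illustrative only):
-
-```python
-import numpy as np
-
-range_max = 5
-p = [(np.log(c + 2) - np.log(c + 1)) / np.log(range_max + 1)
-     for c in range(range_max)]
-print(np.isclose(sum(p), 1.0))  # True
-```
-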
-This sampler is useful when the target classes approximately follow such
-a distribution - for example, if the classes represent words in a lexicon
-sorted in decreasing order of frequency. If your classes are not ordered by
-decreasing frequency, do not use this op.
-
-In addition, this operation returns tensors `true_expected_count`
-and `sampled_expected_count` representing the number of times each
-of the target classes (`true_classes`) and the sampled
-classes (`sampled_candidates`) is expected to occur in an average
-tensor of sampled classes. These values correspond to `Q(y|x)`
-defined in [this
-document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-If `unique=True`, then these are post-rejection probabilities and we
-compute them approximately.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`unique`</b>: A `bool`. Determines whether all sampled classes in a batch are
- unique.
-* <b>`range_max`</b>: An `int`. The number of possible classes.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled classes.
-* <b>`true_expected_count`</b>: A tensor of type `float`. Same shape as
- `true_classes`. The expected counts under the sampling distribution
- of each of `true_classes`.
-* <b>`sampled_expected_count`</b>: A tensor of type `float`. Same shape as
- `sampled_candidates`. The expected counts under the sampling distribution
- of each of `sampled_candidates`.
-
-
-- - -
-
-### `tf.nn.learned_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, seed=None, name=None)` {#learned_unigram_candidate_sampler}
-
-Samples a set of classes from a distribution learned during training.
-
-This operation randomly samples a tensor of sampled classes
-(`sampled_candidates`) from the range of integers `[0, range_max)`.
-
-The elements of `sampled_candidates` are drawn without replacement
-(if `unique=True`) or with replacement (if `unique=False`) from
-the base distribution.
-
-The base distribution for this operation is constructed on the fly
-during training. It is a unigram distribution over the target
-classes seen so far during training. Every integer in `[0, range_max)`
-begins with a weight of 1, and is incremented by 1 each time it is
-seen as a target class. The base distribution is not saved to checkpoints,
-so it is reset when the model is reloaded.
-
-In addition, this operation returns tensors `true_expected_count`
-and `sampled_expected_count` representing the number of times each
-of the target classes (`true_classes`) and the sampled
-classes (`sampled_candidates`) is expected to occur in an average
-tensor of sampled classes. These values correspond to `Q(y|x)`
-defined in [this
-document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-If `unique=True`, then these are post-rejection probabilities and we
-compute them approximately.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`unique`</b>: A `bool`. Determines whether all sampled classes in a batch are
- unique.
-* <b>`range_max`</b>: An `int`. The number of possible classes.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled classes.
-* <b>`true_expected_count`</b>: A tensor of type `float`. Same shape as
- `true_classes`. The expected counts under the sampling distribution
- of each of `true_classes`.
-* <b>`sampled_expected_count`</b>: A tensor of type `float`. Same shape as
- `sampled_candidates`. The expected counts under the sampling distribution
- of each of `sampled_candidates`.
-
-
-- - -
-
-### `tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true, num_sampled, unique, range_max, vocab_file='', distortion=1.0, num_reserved_ids=0, num_shards=1, shard=0, unigrams=(), seed=None, name=None)` {#fixed_unigram_candidate_sampler}
-
-Samples a set of classes using the provided (fixed) base distribution.
-
-This operation randomly samples a tensor of sampled classes
-(`sampled_candidates`) from the range of integers `[0, range_max)`.
-
-The elements of `sampled_candidates` are drawn without replacement
-(if `unique=True`) or with replacement (if `unique=False`) from
-the base distribution.
-
-The base distribution is read from a file or passed in as an
-in-memory array. There is also an option to skew the distribution by
-applying a distortion power to the weights.
-
-In addition, this operation returns tensors `true_expected_count`
-and `sampled_expected_count` representing the number of times each
-of the target classes (`true_classes`) and the sampled
-classes (`sampled_candidates`) is expected to occur in an average
-tensor of sampled classes. These values correspond to `Q(y|x)`
-defined in [this
-document](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-If `unique=True`, then these are post-rejection probabilities and we
-compute them approximately.
-
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`num_sampled`</b>: An `int`. The number of classes to randomly sample per batch.
-* <b>`unique`</b>: A `bool`. Determines whether all sampled classes in a batch are
- unique.
-* <b>`range_max`</b>: An `int`. The number of possible classes.
-* <b>`vocab_file`</b>: Each valid line in this file (which should have a CSV-like
- format) corresponds to a valid word ID. IDs are in sequential order,
- starting from num_reserved_ids. The last entry in each line is expected
- to be a value corresponding to the count or relative probability. Exactly
- one of `vocab_file` and `unigrams` needs to be passed to this operation.
-* <b>`distortion`</b>: The distortion is used to skew the unigram probability
- distribution. Each weight is first raised to the distortion's power
- before adding to the internal unigram distribution. As a result,
- `distortion = 1.0` gives regular unigram sampling (as defined by the vocab
- file), and `distortion = 0.0` gives a uniform distribution.
-* <b>`num_reserved_ids`</b>: Optionally some reserved IDs can be added in the range
-  `[0, num_reserved_ids)` by the users. One use case is that a special
- unknown word token is used as ID 0. These IDs will have a sampling
- probability of 0.
-* <b>`num_shards`</b>: A sampler can be used to sample from a subset of the original
- range in order to speed up the whole computation through parallelism. This
- parameter (together with `shard`) indicates the number of partitions that
- are being used in the overall computation.
-* <b>`shard`</b>: A sampler can be used to sample from a subset of the original range
- in order to speed up the whole computation through parallelism. This
- parameter (together with `num_shards`) indicates the particular partition
- number of the operation, when partitioning is being used.
-* <b>`unigrams`</b>: A list of unigram counts or probabilities, one per ID in
- sequential order. Exactly one of `vocab_file` and `unigrams` should be
- passed to this operation.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled classes.
-* <b>`true_expected_count`</b>: A tensor of type `float`. Same shape as
- `true_classes`. The expected counts under the sampling distribution
- of each of `true_classes`.
-* <b>`sampled_expected_count`</b>: A tensor of type `float`. Same shape as
- `sampled_candidates`. The expected counts under the sampling distribution
- of each of `sampled_candidates`.
-
-
-- - -
-
-### `tf.nn.compute_accidental_hits(true_classes, sampled_candidates, num_true, seed=None, name=None)` {#compute_accidental_hits}
-
-Compute the position ids in `sampled_candidates` matching `true_classes`.
-
-In Candidate Sampling, this operation facilitates virtually removing
-sampled classes which happen to match target classes. This is done
-in Sampled Softmax and Sampled Logistic.
-
-See our [Candidate Sampling Algorithms
-Reference](http://www.tensorflow.org/extras/candidate_sampling.pdf).
-
-We presuppose that the `sampled_candidates` are unique.
-
-We call it an 'accidental hit' when one of the target classes
-matches one of the sampled classes. This operation reports
-accidental hits as triples `(index, id, weight)`, where `index`
-represents the row number in `true_classes`, `id` represents the
-position in `sampled_candidates`, and weight is `-FLOAT_MAX`.
-
-The result of this op should be passed through a `sparse_to_dense`
-operation, then added to the logits of the sampled classes. This
-removes the contradictory effect of accidentally sampling the true
-target classes as noise classes for the same example.
-
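-A minimal sketch of that pattern (shapes and values hypothetical):
-
-```python
-import tensorflow as tf
-
-true_classes = tf.constant([[2], [5]], dtype=tf.int64)  # [batch_size, num_true]
-sampled = tf.constant([5, 7, 2], dtype=tf.int64)        # [num_sampled]
-sampled_logits = tf.zeros([2, 3])                       # [batch_size, num_sampled]
-
-indices, ids, weights = tf.nn.compute_accidental_hits(
-    true_classes, sampled, num_true=1)
-sparse_indices = tf.stack([indices, tf.cast(ids, tf.int32)], axis=1)
-mask = tf.sparse_to_dense(sparse_indices, tf.shape(sampled_logits), weights,
-                          default_value=0.0, validate_indices=False)
-masked_logits = sampled_logits + mask  # accidental hits pushed toward -FLOAT_MAX
-```
-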
-##### Args:
-
-
-* <b>`true_classes`</b>: A `Tensor` of type `int64` and shape `[batch_size,
- num_true]`. The target classes.
-* <b>`sampled_candidates`</b>: A tensor of type `int64` and shape `[num_sampled]`.
- The sampled_candidates output of CandidateSampler.
-* <b>`num_true`</b>: An `int`. The number of target classes per training example.
-* <b>`seed`</b>: An `int`. An operation-specific seed. Default is 0.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`indices`</b>: A `Tensor` of type `int32` and shape `[num_accidental_hits]`.
- Values indicate rows in `true_classes`.
-* <b>`ids`</b>: A `Tensor` of type `int64` and shape `[num_accidental_hits]`.
- Values indicate positions in `sampled_candidates`.
-* <b>`weights`</b>: A `Tensor` of type `float` and shape `[num_accidental_hits]`.
- Each value is `-FLOAT_MAX`.
-
-
-- - -
-
-### `tf.nn.quantized_conv2d(input, filter, min_input, max_input, min_filter, max_filter, strides, padding, out_type=None, name=None)` {#quantized_conv2d}
-
-Computes a 2D convolution given quantized 4D input and filter tensors.
-
-The inputs are quantized tensors in which the lowest representable value maps
-to the real number given by the associated minimum, and the highest to the
-associated maximum. The quantized output must be interpreted the same way,
-using the returned minimum and maximum values.
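-
-A hedged sketch of the call pattern; it assumes `tf.quantize_v2` to produce
-quantized tensors together with the min/max they represent, and the shapes
-are illustrative:
-
-```python
-x = tf.random_uniform([1, 8, 8, 3])
-w = tf.random_uniform([3, 3, 3, 16])
-qx, min_x, max_x = tf.quantize_v2(x, 0.0, 1.0, tf.quint8)
-qw, min_w, max_w = tf.quantize_v2(w, 0.0, 1.0, tf.quint8)
-out, min_out, max_out = tf.nn.quantized_conv2d(
-    qx, qw, min_x, max_x, min_w, max_w,
-    strides=[1, 1, 1, 1], padding="SAME")
-```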
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
-* <b>`filter`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
-  The filter's input_depth dimension must match the input's depth dimension.
-* <b>`min_input`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized input value represents.
-* <b>`max_input`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized input value represents.
-* <b>`min_filter`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized filter value represents.
-* <b>`max_filter`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized filter value represents.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- tensor.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`out_type`</b>: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`. Defaults to `tf.qint32`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, min_output, max_output).
-
-* <b>`output`</b>: A `Tensor` of type `out_type`.
-* <b>`min_output`</b>: A `Tensor` of type `float32`. The float value that the lowest quantized output value represents.
-* <b>`max_output`</b>: A `Tensor` of type `float32`. The float value that the highest quantized output value represents.
-
-
-- - -
-
-### `tf.nn.quantized_relu_x(features, max_value, min_features, max_features, out_type=None, name=None)` {#quantized_relu_x}
-
-Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)`
-
-##### Args:
-
-
-* <b>`features`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
-* <b>`max_value`</b>: A `Tensor` of type `float32`.
-* <b>`min_features`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized value represents.
-* <b>`max_features`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized value represents.
-* <b>`out_type`</b>: An optional `tf.DType` from: `tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32`. Defaults to `tf.quint8`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (activations, min_activations, max_activations).
-
-* <b>`activations`</b>: A `Tensor` of type `out_type`. Has the same output shape as "features".
-* <b>`min_activations`</b>: A `Tensor` of type `float32`. The float value that the lowest quantized value represents.
-* <b>`max_activations`</b>: A `Tensor` of type `float32`. The float value that the highest quantized value represents.
-
-
-- - -
-
-### `tf.nn.quantized_max_pool(input, min_input, max_input, ksize, strides, padding, name=None)` {#quantized_max_pool}
-
-Produces the max pool of the input tensor for quantized types.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
- The 4D (batch x rows x cols x depth) Tensor to MaxReduce over.
-* <b>`min_input`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized input value represents.
-* <b>`max_input`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized input value represents.
-* <b>`ksize`</b>: A list of `ints`.
- The size of the window for each dimension of the input tensor.
- The length must be 4 to match the number of dimensions of the input.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- tensor. The length must be 4 to match the number of dimensions of the input.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, min_output, max_output).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `input`.
-* <b>`min_output`</b>: A `Tensor` of type `float32`. The float value that the lowest quantized output value represents.
-* <b>`max_output`</b>: A `Tensor` of type `float32`. The float value that the highest quantized output value represents.
-
-
-- - -
-
-### `tf.nn.quantized_avg_pool(input, min_input, max_input, ksize, strides, padding, name=None)` {#quantized_avg_pool}
-
-Produces the average pool of the input tensor for quantized types.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `qint8`, `quint8`, `qint16`, `quint16`, `qint32`.
- 4-D with shape `[batch, height, width, channels]`.
-* <b>`min_input`</b>: A `Tensor` of type `float32`.
- The float value that the lowest quantized input value represents.
-* <b>`max_input`</b>: A `Tensor` of type `float32`.
- The float value that the highest quantized input value represents.
-* <b>`ksize`</b>: A list of `ints`.
- The size of the window for each dimension of the input tensor.
- The length must be 4 to match the number of dimensions of the input.
-* <b>`strides`</b>: A list of `ints`.
- The stride of the sliding window for each dimension of the input
- tensor. The length must be 4 to match the number of dimensions of the input.
-* <b>`padding`</b>: A `string` from: `"SAME", "VALID"`.
- The type of padding algorithm to use.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A tuple of `Tensor` objects (output, min_output, max_output).
-
-* <b>`output`</b>: A `Tensor`. Has the same type as `input`.
-* <b>`min_output`</b>: A `Tensor` of type `float32`. The float value that the lowest quantized output value represents.
-* <b>`max_output`</b>: A `Tensor` of type `float32`. The float value that the highest quantized output value represents.
-
-
-
-## Other Functions and Classes
-- - -
-
-### `tf.nn.zero_fraction(value, name=None)` {#zero_fraction}
-
-Returns the fraction of zeros in `value`.
-
-If `value` is empty, the result is `nan`.
-
-This is useful in summaries to measure and report sparsity. For example,
-
-```python
-    z = tf.nn.relu(...)
- summ = tf.contrib.deprecated.scalar_summary('sparsity',
- tf.nn.zero_fraction(z))
-```
-
-##### Args:
-
-
-* <b>`value`</b>: A tensor of numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The fraction of zeros in `value`, with type `float32`.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/python_io.md b/tensorflow/g3doc/api_docs/python/python_io.md
deleted file mode 100644
index c41fe3ada0..0000000000
--- a/tensorflow/g3doc/api_docs/python/python_io.md
+++ /dev/null
@@ -1,117 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Data IO (Python functions)
-[TOC]
-
-Python functions for directly manipulating TFRecord-formatted files.
-
-See the @{$python/python_io} guide.
-
-- - -
-
-### `class tf.python_io.TFRecordWriter` {#TFRecordWriter}
-
-A class to write records to a TFRecords file.
-
-This class implements `__enter__` and `__exit__`, and can be used
-in `with` blocks like a normal file.
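-
-For example, a minimal sketch (the path and records are hypothetical):
-
-```python
-with tf.python_io.TFRecordWriter("/tmp/data.tfrecords") as writer:
-  for record in [b"first", b"second"]:
-    writer.write(record)
-```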
-- - -
-
-#### `tf.python_io.TFRecordWriter.__enter__()` {#TFRecordWriter.__enter__}
-
-Enter a `with` block.
-
-
-- - -
-
-#### `tf.python_io.TFRecordWriter.__exit__(unused_type, unused_value, unused_traceback)` {#TFRecordWriter.__exit__}
-
-Exit a `with` block, closing the file.
-
-
-- - -
-
-#### `tf.python_io.TFRecordWriter.__init__(path, options=None)` {#TFRecordWriter.__init__}
-
-Opens file `path` and creates a `TFRecordWriter` writing to it.
-
-##### Args:
-
-
-* <b>`path`</b>: The path to the TFRecords file.
-* <b>`options`</b>: (optional) A TFRecordOptions object.
-
-##### Raises:
-
-
-* <b>`IOError`</b>: If `path` cannot be opened for writing.
-
-
-- - -
-
-#### `tf.python_io.TFRecordWriter.close()` {#TFRecordWriter.close}
-
-Close the file.
-
-
-- - -
-
-#### `tf.python_io.TFRecordWriter.write(record)` {#TFRecordWriter.write}
-
-Write a string record to the file.
-
-##### Args:
-
-
-* <b>`record`</b>: str
-
-
-
-- - -
-
-### `tf.python_io.tf_record_iterator(path, options=None)` {#tf_record_iterator}
-
-An iterator that reads records from a TFRecords file.
-
-##### Args:
-
-
-* <b>`path`</b>: The path to the TFRecords file.
-* <b>`options`</b>: (optional) A TFRecordOptions object.
-
-##### Yields:
-
- Strings.
-
-##### Raises:
-
-
-* <b>`IOError`</b>: If `path` cannot be opened for reading.
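-
-For example, a minimal sketch reading back the records written above (the
-path is hypothetical; pass `options` only if the file was written with
-compression):
-
-```python
-for record in tf.python_io.tf_record_iterator("/tmp/data.tfrecords"):
-  print(len(record))  # each record is a raw byte string
-```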
-
-
-- - -
-
-### `class tf.python_io.TFRecordCompressionType` {#TFRecordCompressionType}
-
-The type of compression for the record.
-
-- - -
-
-### `class tf.python_io.TFRecordOptions` {#TFRecordOptions}
-
-Options used for manipulating TFRecord files.
-- - -
-
-#### `tf.python_io.TFRecordOptions.__init__(compression_type)` {#TFRecordOptions.__init__}
-
-
-
-
-- - -
-
-#### `tf.python_io.TFRecordOptions.get_compression_type_string(cls, options)` {#TFRecordOptions.get_compression_type_string}
-
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/script_ops.md b/tensorflow/g3doc/api_docs/python/script_ops.md
deleted file mode 100644
index 13e9feb865..0000000000
--- a/tensorflow/g3doc/api_docs/python/script_ops.md
+++ /dev/null
@@ -1,65 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Wraps python functions
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Script Language Operators. See the @{python/script_ops} guide.
-
-- - -
-
-### `tf.py_func(func, inp, Tout, stateful=True, name=None)` {#py_func}
-
-Wraps a python function and uses it as a TensorFlow op.
-
-Given a python function `func`, which takes numpy arrays as its
-inputs and returns numpy arrays as its outputs, wrap this function as an
-operation in a TensorFlow graph. The following snippet constructs a simple
-TensorFlow graph that invokes the `np.sinh()` NumPy function as an operation
-in the graph:
-
-```python
-def my_func(x):
- # x will be a numpy array with the contents of the placeholder below
- return np.sinh(x)
-inp = tf.placeholder(tf.float32)
-y = tf.py_func(my_func, [inp], tf.float32)
-```
-
-**N.B.** The `tf.py_func()` operation has the following known limitations:
-
-* The body of the function (i.e. `func`) will not be serialized in a
- `GraphDef`. Therefore, you should not use this function if you need to
- serialize your model and restore it in a different environment.
-
-* The operation must run in the same address space as the Python program
- that calls `tf.py_func()`. If you are using distributed TensorFlow, you
- must run a `tf.train.Server` in the same process as the program that calls
- `tf.py_func()` and you must pin the created operation to a device in that
- server (e.g. using `with tf.device():`).
-
-##### Args:
-
-
-* <b>`func`</b>: A Python function, which accepts a list of NumPy `ndarray` objects
- having element types that match the corresponding `tf.Tensor` objects
- in `inp`, and returns a list of `ndarray` objects (or a single `ndarray`)
- having element types that match the corresponding values in `Tout`.
-* <b>`inp`</b>: A list of `Tensor` objects.
-* <b>`Tout`</b>: A list or tuple of tensorflow data types or a single tensorflow data
- type if there is only one, indicating what `func` returns.
-* <b>`stateful`</b>: (Boolean.) If True, the function should be considered stateful.
- If a function is stateless, when given the same input it will return the
- same output and have no observable side effects. Optimizations such as
- common subexpression elimination are only performed on stateless
- operations.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A list of `Tensor` or a single `Tensor` which `func` computes.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/session_ops.md b/tensorflow/g3doc/api_docs/python/session_ops.md
deleted file mode 100644
index 9794923c79..0000000000
--- a/tensorflow/g3doc/api_docs/python/session_ops.md
+++ /dev/null
@@ -1,116 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Tensor Handle Operations
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Tensor Handle Operations. See the @{python/session_ops} guide.
-
-- - -
-
-### `tf.get_session_handle(data, name=None)` {#get_session_handle}
-
-Return the handle of `data`.
-
-This is EXPERIMENTAL and subject to change.
-
-Keep `data` "in-place" in the runtime and create a handle that can be
-used to retrieve `data` in a subsequent run().
-
-Combined with `get_session_tensor`, we can keep a tensor produced in
-one run call in place, and use it as the input in a future run call.
-
-##### Args:
-
-
-* <b>`data`</b>: A tensor to be stored in the session.
-* <b>`name`</b>: Optional name prefix for the return tensor.
-
-##### Returns:
-
- A scalar string tensor representing a unique handle for `data`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if `data` is not a Tensor.
-
-
-##### Example:
-
-```python
-c = tf.multiply(a, b)
-h = tf.get_session_handle(c)
-h = sess.run(h)
-
-p, a = tf.get_session_tensor(h.handle, tf.float32)
-b = tf.multiply(a, 10)
-c = sess.run(b, feed_dict={p: h.handle})
-```
-
-
-- - -
-
-### `tf.get_session_tensor(handle, dtype, name=None)` {#get_session_tensor}
-
-Get the tensor of type `dtype` by feeding a tensor handle.
-
-This is EXPERIMENTAL and subject to change.
-
-Get the value of the tensor from a tensor handle. The tensor
-is produced in a previous run() and stored in the state of the
-session.
-
-##### Args:
-
-
-* <b>`handle`</b>: The string representation of a persistent tensor handle.
-* <b>`dtype`</b>: The type of the output tensor.
-* <b>`name`</b>: Optional name prefix for the return tensor.
-
-##### Returns:
-
- A pair of tensors. The first is a placeholder for feeding a
- tensor handle and the second is the tensor in the session state
- keyed by the tensor handle.
-
-
-##### Example:
-
-```python
-c = tf.multiply(a, b)
-h = tf.get_session_handle(c)
-h = sess.run(h)
-
-p, a = tf.get_session_tensor(h.handle, tf.float32)
-b = tf.multiply(a, 10)
-c = sess.run(b, feed_dict={p: h.handle})
-```
-
-
-- - -
-
-### `tf.delete_session_tensor(handle, name=None)` {#delete_session_tensor}
-
-Delete the tensor for the given tensor handle.
-
-This is EXPERIMENTAL and subject to change.
-
-Delete the tensor of a given tensor handle. The tensor is produced
-in a previous run() and stored in the state of the session.
-
-##### Args:
-
-
-* <b>`handle`</b>: The string representation of a persistent tensor handle.
-* <b>`name`</b>: Optional name prefix for the return tensor.
-
-##### Returns:
-
- A pair of graph elements. The first is a placeholder for feeding a
- tensor handle and the second is a deletion operation.
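-
-A minimal sketch continuing the handle example above (it assumes `h` and
-`sess` exist as in `get_session_handle`):
-
-```python
-p, delete_op = tf.delete_session_tensor(h.handle)
-sess.run(delete_op, feed_dict={p: h.handle})
-```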
-
-
diff --git a/tensorflow/g3doc/api_docs/python/sparse_ops.md b/tensorflow/g3doc/api_docs/python/sparse_ops.md
deleted file mode 100644
index b933d2251b..0000000000
--- a/tensorflow/g3doc/api_docs/python/sparse_ops.md
+++ /dev/null
@@ -1,1439 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Sparse Tensors
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Sparse Tensor Representation. See the @{python/sparse_ops} guide.
-
-- - -
-
-### `class tf.SparseTensor` {#SparseTensor}
-
-Represents a sparse tensor.
-
-TensorFlow represents a sparse tensor as three separate dense tensors:
-`indices`, `values`, and `dense_shape`. In Python, the three tensors are
-collected into a `SparseTensor` class for ease of use. If you have separate
-`indices`, `values`, and `dense_shape` tensors, wrap them in a `SparseTensor`
-object before passing to the ops below.
-
-Concretely, the sparse tensor `SparseTensor(indices, values, dense_shape)`
-comprises the following components, where `N` and `ndims` are the number
-of values and number of dimensions in the `SparseTensor`, respectively:
-
-* `indices`: A 2-D int64 tensor of dense_shape `[N, ndims]`, which specifies
- the indices of the elements in the sparse tensor that contain nonzero
- values (elements are zero-indexed). For example, `indices=[[1,3], [2,4]]`
- specifies that the elements with indexes of [1,3] and [2,4] have
- nonzero values.
-
-* `values`: A 1-D tensor of any type and dense_shape `[N]`, which supplies the
- values for each element in `indices`. For example, given
- `indices=[[1,3], [2,4]]`, the parameter `values=[18, 3.6]` specifies
- that element [1,3] of the sparse tensor has a value of 18, and element
- [2,4] of the tensor has a value of 3.6.
-
-* `dense_shape`: A 1-D int64 tensor of dense_shape `[ndims]`, which specifies
- the dense_shape of the sparse tensor. Takes a list indicating the number of
- elements in each dimension. For example, `dense_shape=[3,6]` specifies a
- two-dimensional 3x6 tensor, `dense_shape=[2,3,4]` specifies a
- three-dimensional 2x3x4 tensor, and `dense_shape=[9]` specifies a
- one-dimensional tensor with 9 elements.
-
-The corresponding dense tensor satisfies:
-
-```python
-dense.shape = dense_shape
-dense[tuple(indices[i])] = values[i]
-```
-
-By convention, `indices` should be sorted in row-major order (or equivalently
-lexicographic order on the tuples `indices[i]`). This is not enforced when
-`SparseTensor` objects are constructed, but most ops assume correct ordering.
-If the ordering of sparse tensor `st` is wrong, a fixed version can be
-obtained by calling `tf.sparse_reorder(st)`.
-
-Example: The sparse tensor
-
-```python
-SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
-```
-
-represents the dense tensor
-
-```python
-[[1, 0, 0, 0]
- [0, 0, 2, 0]
- [0, 0, 0, 0]]
-```
-- - -
-
-#### `tf.SparseTensor.__div__(sp_x, y)` {#SparseTensor.__div__}
-
-Component-wise divides a SparseTensor by a dense Tensor.
-
-*Limitation*: this Op only broadcasts the dense side to the sparse side, but not
-the other direction.
-
-##### Args:
-
-
-* <b>`sp_indices`</b>: A `Tensor` of type `int64`.
- 2-D. `N x R` matrix with the indices of non-empty values in a
- SparseTensor, possibly not in canonical ordering.
-* <b>`sp_values`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- 1-D. `N` non-empty values corresponding to `sp_indices`.
-* <b>`sp_shape`</b>: A `Tensor` of type `int64`.
- 1-D. Shape of the input SparseTensor.
-* <b>`dense`</b>: A `Tensor`. Must have the same type as `sp_values`.
- `R`-D. The dense Tensor operand.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `sp_values`.
- 1-D. The `N` values that are operated on.
-
-
-- - -
-
-#### `tf.SparseTensor.__init__(indices, values, dense_shape)` {#SparseTensor.__init__}
-
-Creates a `SparseTensor`.
-
-##### Args:
-
-
-* <b>`indices`</b>: A 2-D int64 tensor of shape `[N, ndims]`.
-* <b>`values`</b>: A 1-D tensor of any type and shape `[N]`.
-* <b>`dense_shape`</b>: A 1-D int64 tensor of shape `[ndims]`.
-
-##### Returns:
-
- A `SparseTensor`.
-
-
-- - -
-
-#### `tf.SparseTensor.__mul__(sp_x, y)` {#SparseTensor.__mul__}
-
-Component-wise multiplies a SparseTensor by a dense Tensor.
-
-The output locations corresponding to the implicitly zero elements in the
-sparse tensor will be zero (i.e., will not take up storage space), regardless
-of the contents of the dense tensor (even where it is +/-Inf, for which
-Inf * 0 would otherwise be NaN).
-
-*Limitation*: this Op only broadcasts the dense side to the sparse side, but not
-the other direction.
-
-##### Args:
-
-
-* <b>`sp_indices`</b>: A `Tensor` of type `int64`.
- 2-D. `N x R` matrix with the indices of non-empty values in a
- SparseTensor, possibly not in canonical ordering.
-* <b>`sp_values`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- 1-D. `N` non-empty values corresponding to `sp_indices`.
-* <b>`sp_shape`</b>: A `Tensor` of type `int64`.
- 1-D. Shape of the input SparseTensor.
-* <b>`dense`</b>: A `Tensor`. Must have the same type as `sp_values`.
- `R`-D. The dense Tensor operand.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `sp_values`.
- 1-D. The `N` values that are operated on.
-
-
-- - -
-
-#### `tf.SparseTensor.__str__()` {#SparseTensor.__str__}
-
-
-
-
-- - -
-
-#### `tf.SparseTensor.__truediv__(sp_x, y)` {#SparseTensor.__truediv__}
-
-Internal helper function for 'sp_t / dense_t'.
-
-
-- - -
-
-#### `tf.SparseTensor.dense_shape` {#SparseTensor.dense_shape}
-
-A 1-D Tensor of int64 representing the shape of the dense tensor.
-
-
-- - -
-
-#### `tf.SparseTensor.dtype` {#SparseTensor.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.SparseTensor.eval(feed_dict=None, session=None)` {#SparseTensor.eval}
-
-Evaluates this sparse tensor in a `Session`.
-
-Calling this method will execute all preceding operations that
-produce the inputs needed for the operation that produces this
-tensor.
-
-*N.B.* Before invoking `SparseTensor.eval()`, its graph must have been
-launched in a session, and either a default session must be
-available, or `session` must be specified explicitly.
-
-##### Args:
-
-
-* <b>`feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- See [`Session.run()`](../../api_docs/python/client.md#Session.run) for a
- description of the valid feed values.
-* <b>`session`</b>: (Optional.) The `Session` to be used to evaluate this sparse
- tensor. If none, the default session will be used.
-
-##### Returns:
-
- A `SparseTensorValue` object.
-
-
-- - -
-
-#### `tf.SparseTensor.from_value(cls, sparse_tensor_value)` {#SparseTensor.from_value}
-
-
-
-
-- - -
-
-#### `tf.SparseTensor.get_shape()` {#SparseTensor.get_shape}
-
-Get the `TensorShape` representing the shape of the dense tensor.
-
-##### Returns:
-
- A `TensorShape` object.
-
-
-- - -
-
-#### `tf.SparseTensor.graph` {#SparseTensor.graph}
-
-The `Graph` that contains the index, value, and dense_shape tensors.
-
-
-- - -
-
-#### `tf.SparseTensor.indices` {#SparseTensor.indices}
-
-The indices of non-zero values in the represented dense tensor.
-
-##### Returns:
-
- A 2-D Tensor of int64 with dense_shape `[N, ndims]`, where `N` is the
- number of non-zero values in the tensor, and `ndims` is the rank.
-
-
-- - -
-
-#### `tf.SparseTensor.op` {#SparseTensor.op}
-
-The `Operation` that produces `values` as an output.
-
-
-- - -
-
-#### `tf.SparseTensor.values` {#SparseTensor.values}
-
-The non-zero values in the represented dense tensor.
-
-##### Returns:
-
- A 1-D Tensor of any data type.
-
-
-
-- - -
-
-### `class tf.SparseTensorValue` {#SparseTensorValue}
-
-SparseTensorValue(indices, values, dense_shape)
-- - -
-
-#### `tf.SparseTensorValue.__getnewargs__()` {#SparseTensorValue.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.SparseTensorValue.__getstate__()` {#SparseTensorValue.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.SparseTensorValue.__new__(_cls, indices, values, dense_shape)` {#SparseTensorValue.__new__}
-
-Create new instance of SparseTensorValue(indices, values, dense_shape)
-
-
-- - -
-
-#### `tf.SparseTensorValue.__repr__()` {#SparseTensorValue.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.SparseTensorValue.dense_shape` {#SparseTensorValue.dense_shape}
-
-Alias for field number 2
-
-
-- - -
-
-#### `tf.SparseTensorValue.indices` {#SparseTensorValue.indices}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.SparseTensorValue.values` {#SparseTensorValue.values}
-
-Alias for field number 1
-
-
-
-- - -
-
-### `tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0, validate_indices=True, name=None)` {#sparse_to_dense}
-
-Converts a sparse representation into a dense tensor.
-
-Builds an array `dense` with shape `output_shape` such that
-
-```python
-# If sparse_indices is scalar
-dense[i] = (i == sparse_indices ? sparse_values : default_value)
-
-# If sparse_indices is a vector, then for each i
-dense[sparse_indices[i]] = sparse_values[i]
-
-# If sparse_indices is an n by d matrix, then for each i in [0, n)
-dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]
-```
-
-All other values in `dense` are set to `default_value`. If `sparse_values`
-is a scalar, all sparse indices are set to this single value.
-
-Indices should be sorted in lexicographic order, and indices must not
-contain any repeats. If `validate_indices` is True, these properties
-are checked during execution.
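-
-For example, a minimal sketch:
-
-```python
-dense = tf.sparse_to_dense(sparse_indices=[[0, 0], [1, 2], [2, 3]],
-                           output_shape=[3, 4],
-                           sparse_values=[5, 6, 7],
-                           default_value=0)
-# => [[5 0 0 0]
-#     [0 0 6 0]
-#     [0 0 0 7]]
-```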
-
-##### Args:
-
-
-* <b>`sparse_indices`</b>: A 0-D, 1-D, or 2-D `Tensor` of type `int32` or `int64`.
- `sparse_indices[i]` contains the complete index where `sparse_values[i]`
- will be placed.
-* <b>`output_shape`</b>: A 1-D `Tensor` of the same type as `sparse_indices`. Shape
- of the dense output tensor.
-* <b>`sparse_values`</b>: A 0-D or 1-D `Tensor`. Values corresponding to each row of
- `sparse_indices`, or a scalar value to be used for all sparse indices.
-* <b>`default_value`</b>: A 0-D `Tensor` of the same type as `sparse_values`. Value
- to set for indices not specified in `sparse_indices`. Defaults to zero.
-* <b>`validate_indices`</b>: A boolean value. If True, indices are checked to make
- sure they are sorted in lexicographic order and that there are no repeats.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Dense `Tensor` of shape `output_shape`. Has the same type as
- `sparse_values`.
-
-
-- - -
-
-### `tf.sparse_tensor_to_dense(sp_input, default_value=0, validate_indices=True, name=None)` {#sparse_tensor_to_dense}
-
-Converts a `SparseTensor` into a dense tensor.
-
-This op is a convenience wrapper around `sparse_to_dense` for `SparseTensor`s.
-
-For example, if `sp_input` has shape `[3, 5]` and non-empty string values:
-
- [0, 1]: a
- [0, 3]: b
- [2, 0]: c
-
-and `default_value` is `x`, then the output will be a dense `[3, 5]`
-string tensor with values:
-
- [[x a x b x]
- [x x x x x]
- [c x x x x]]
-
-Indices must not contain repeats; this is checked only if
-`validate_indices` is True.
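-
-The example above, as a minimal sketch:
-
-```python
-sp = tf.SparseTensor(indices=[[0, 1], [0, 3], [2, 0]],
-                     values=["a", "b", "c"], dense_shape=[3, 5])
-dense = tf.sparse_tensor_to_dense(sp, default_value="x")
-```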
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`default_value`</b>: Scalar value to set for indices not specified in
- `sp_input`. Defaults to zero.
-* <b>`validate_indices`</b>: A boolean value. If `True`, indices are checked to make
- sure they are sorted in lexicographic order and that there are no repeats.
-* <b>`name`</b>: A name prefix for the returned tensors (optional).
-
-##### Returns:
-
- A dense tensor with shape `sp_input.dense_shape` and values specified by
- the non-empty values in `sp_input`. Indices not in `sp_input` are assigned
- `default_value`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
-
-- - -
-
-### `tf.sparse_to_indicator(sp_input, vocab_size, name=None)` {#sparse_to_indicator}
-
-Converts a `SparseTensor` of ids into a dense bool indicator tensor.
-
-The last dimension of `sp_input.indices` is discarded and replaced with
-the values of `sp_input`. If `sp_input.dense_shape = [D0, D1, ..., Dn, K]`,
-then `output.shape = [D0, D1, ..., Dn, vocab_size]`, where
-
- output[d_0, d_1, ..., d_n, sp_input[d_0, d_1, ..., d_n, k]] = True
-
-and False elsewhere in `output`.
-
-For example, if `sp_input.dense_shape = [2, 3, 4]` with non-empty values:
-
- [0, 0, 0]: 0
- [0, 1, 0]: 10
- [1, 0, 3]: 103
- [1, 1, 2]: 150
- [1, 1, 3]: 149
- [1, 1, 4]: 150
- [1, 2, 1]: 121
-
-and `vocab_size = 200`, then the output will be a `[2, 3, 200]` dense bool
-tensor with False everywhere except at positions
-
- (0, 0, 0), (0, 1, 10), (1, 0, 103), (1, 1, 149), (1, 1, 150),
- (1, 2, 121).
-
-Note that repeats are allowed in the input SparseTensor.
-This op is useful for converting `SparseTensor`s into dense formats for
-compatibility with ops that expect dense tensors.
-
-The input `SparseTensor` must be in row-major order.
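-
-A minimal sketch with a small vocabulary:
-
-```python
-sp = tf.SparseTensor(indices=[[0, 0], [1, 1]], values=[3, 7],
-                     dense_shape=[2, 2])  # last dimension is discarded
-indicator = tf.sparse_to_indicator(sp, vocab_size=10)
-# bool tensor of shape [2, 10]; True only at [0, 3] and [1, 7].
-```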
-
-##### Args:
-
-
-* <b>`sp_input`</b>: A `SparseTensor` with `values` property of type `int32` or
- `int64`.
-* <b>`vocab_size`</b>: A scalar int64 Tensor (or Python int) containing the new size
- of the last dimension, `all(0 <= sp_input.values < vocab_size)`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A dense bool indicator tensor representing the indices with specified value.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
-
-- - -
-
-### `tf.sparse_merge(sp_ids, sp_values, vocab_size, name=None, already_sorted=False)` {#sparse_merge}
-
-Combines a batch of feature ids and values into a single `SparseTensor`.
-
-The most common use case for this function occurs when feature ids and
-their corresponding values are stored in `Example` protos on disk.
-`parse_example` will return a batch of ids and a batch of values, and this
-function joins them into a single logical `SparseTensor` for use in
-functions such as `sparse_tensor_dense_matmul`, `sparse_to_dense`, etc.
-
-The `SparseTensor` returned by this function has the following properties:
-
- - `indices` is equivalent to `sp_ids.indices` with the last
- dimension discarded and replaced with `sp_ids.values`.
- - `values` is simply `sp_values.values`.
- - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then
- `output.shape = [D0, D1, ..., Dn, vocab_size]`.
-
-For example, consider the following feature vectors:
-
-```python
- vector1 = [-3, 0, 0, 0, 0, 0]
- vector2 = [ 0, 1, 0, 4, 1, 0]
- vector3 = [ 5, 0, 0, 9, 0, 0]
-```
-
-These might be stored sparsely in the following Example protos by storing
-only the feature ids (column number if the vectors are treated as a matrix)
-of the non-zero elements and the corresponding values:
-
-```python
- examples = [Example(features={
- "ids": Feature(int64_list=Int64List(value=[0])),
- "values": Feature(float_list=FloatList(value=[-3]))}),
- Example(features={
- "ids": Feature(int64_list=Int64List(value=[1, 4, 3])),
- "values": Feature(float_list=FloatList(value=[1, 1, 4]))}),
- Example(features={
- "ids": Feature(int64_list=Int64List(value=[0, 3])),
- "values": Feature(float_list=FloatList(value=[5, 9]))})]
-```
-
-The result of calling parse_example on these examples will produce a
-dictionary with entries for "ids" and "values". Passing those two objects
-to this function along with vocab_size=6, will produce a `SparseTensor` that
-sparsely represents all three instances. Namely, the `indices` property will
-contain the coordinates of the non-zero entries in the feature matrix (the
-first dimension is the row number in the matrix, i.e., the index within the
-batch, and the second dimension is the column number, i.e., the feature id);
-`values` will contain the actual values. `shape` will be the shape of the
-original matrix, i.e., (3, 6). For our example above, the output will be
-equal to:
-
-```python
- SparseTensor(indices=[[0, 0], [1, 1], [1, 3], [1, 4], [2, 0], [2, 3]],
- values=[-3, 1, 4, 1, 5, 9],
- dense_shape=[3, 6])
-```
-
-This method generalizes to higher dimensions by providing a list for both
-`sp_ids` and `vocab_size`.
-In this case the resulting `SparseTensor` has the following properties:
- - `indices` is equivalent to `sp_ids[0].indices` with the last
- dimension discarded and concatenated with
- `sp_ids[0].values, sp_ids[1].values, ...`.
- - `values` is simply `sp_values.values`.
- - If `sp_ids.dense_shape = [D0, D1, ..., Dn, K]`, then
- `output.shape = [D0, D1, ..., Dn] + vocab_size`.
-
-##### Args:
-
-
-* <b>`sp_ids`</b>: A single `SparseTensor` with `values` property of type `int32`
-  or `int64`, or a Python list of such `SparseTensor`s.
-* <b>`sp_values`</b>: A `SparseTensor` of any type.
-* <b>`vocab_size`</b>: A scalar `int64` Tensor (or Python int) containing the new size
- of the last dimension, `all(0 <= sp_ids.values < vocab_size)`.
- Or a list thereof with `all(0 <= sp_ids[i].values < vocab_size[i])` for
- all `i`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-* <b>`already_sorted`</b>: A boolean specifying whether the per-batch values in
-  `sp_values` are already sorted. If so, sorting is skipped; defaults to
-  False (optional).
-
-##### Returns:
-
- A `SparseTensor` compactly representing a batch of feature ids and values,
- useful for passing to functions that expect such a `SparseTensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_values` is not a `SparseTensor`. Or if `sp_ids` is
-  neither a `SparseTensor` nor a list thereof. Or if `sp_ids` is a
-  `SparseTensor` and `vocab_size` is not a `Tensor` or a Python int. Or if
-  `sp_ids` is a list and `vocab_size` is not a list thereof.
-* <b>`ValueError`</b>: If `sp_ids` and `vocab_size` are lists of different lengths.
-
-
-- - -
-
-### `tf.sparse_concat(axis, sp_inputs, name=None, expand_nonconcat_dim=False, concat_dim=None)` {#sparse_concat}
-
-Concatenates a list of `SparseTensor` along the specified dimension.
-
-Concatenation is with respect to the dense versions of each sparse input.
-It is assumed that each input is a `SparseTensor` whose elements are ordered
-along increasing dimension number.
-
-If `expand_nonconcat_dim` is False, all inputs' shapes must match, except for
-the concat dimension. If `expand_nonconcat_dim` is True, the shapes may also
-vary along the non-concat dimensions.
-
-The `indices`, `values`, and `shapes` lists must have the same length.
-
-If expand_nonconcat_dim is False, then the output shape is identical to the
-inputs', except along the concat dimension, where it is the sum of the inputs'
-sizes along that dimension.
-
-If `expand_nonconcat_dim` is True, the output shape along the non-concat
-dimensions is expanded to the largest among all inputs, and along the concat
-dimension it is the sum of the inputs' sizes.
-
-The output elements will be resorted to preserve the sort order along
-increasing dimension number.
-
-This op runs in `O(M log M)` time, where `M` is the total number of non-empty
-values across all inputs. This is due to the need for an internal sort in
-order to concatenate efficiently across an arbitrary dimension.
-
-For example, if `axis = 1` and the inputs are
-
- sp_inputs[0]: shape = [2, 3]
- [0, 2]: "a"
- [1, 0]: "b"
- [1, 1]: "c"
-
- sp_inputs[1]: shape = [2, 4]
- [0, 1]: "d"
- [0, 2]: "e"
-
-then the output will be
-
- shape = [2, 7]
- [0, 2]: "a"
- [0, 4]: "d"
- [0, 5]: "e"
- [1, 0]: "b"
- [1, 1]: "c"
-
-Graphically this is equivalent to doing
-
- [ a] concat [ d e ] = [ a d e ]
- [b c ] [ ] [b c ]
-
-For another example, if `axis = 1` and the inputs are
-
- sp_inputs[0]: shape = [3, 3]
- [0, 2]: "a"
- [1, 0]: "b"
- [2, 1]: "c"
-
- sp_inputs[1]: shape = [2, 4]
- [0, 1]: "d"
- [0, 2]: "e"
-
-if expand_nonconcat_dim = False, this will result in an error. But if
-expand_nonconcat_dim = True, this will result in:
-
- shape = [3, 7]
- [0, 2]: "a"
- [0, 4]: "d"
- [0, 5]: "e"
- [1, 0]: "b"
- [2, 1]: "c"
-
-Graphically this is equivalent to doing
-
- [ a] concat [ d e ] = [ a d e ]
- [b ] [ ] [b ]
- [ c ] [ c ]
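-
-The first example above, as a minimal sketch:
-
-```python
-a = tf.SparseTensor(indices=[[0, 2], [1, 0], [1, 1]],
-                    values=["a", "b", "c"], dense_shape=[2, 3])
-b = tf.SparseTensor(indices=[[0, 1], [0, 2]],
-                    values=["d", "e"], dense_shape=[2, 4])
-merged = tf.sparse_concat(axis=1, sp_inputs=[a, b])  # dense_shape [2, 7]
-```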
-
-
-##### Args:
-
-
-* <b>`axis`</b>: Dimension to concatenate along. Must be in range [-rank, rank),
- where rank is the number of dimensions in each input `SparseTensor`.
-* <b>`sp_inputs`</b>: List of `SparseTensor` to concatenate.
-* <b>`name`</b>: A name prefix for the returned tensors (optional).
-* <b>`expand_nonconcat_dim`</b>: Whether to allow the expansion in the non-concat
- dimensions. Defaulted to False.
-* <b>`concat_dim`</b>: The old (deprecated) name for axis.
-
-##### Returns:
-
- A `SparseTensor` with the concatenated output.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_inputs` is not a list of `SparseTensor`.
-
-
-- - -
-
-### `tf.sparse_reorder(sp_input, name=None)` {#sparse_reorder}
-
-Reorders a `SparseTensor` into the canonical, row-major ordering.
-
-Note that by convention, all sparse ops preserve the canonical ordering
-along increasing dimension number. The only time ordering can be violated
-is during manual manipulation of the indices and values to add entries.
-
-Reordering does not affect the shape of the `SparseTensor`.
-
-For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:
-
- [0, 3]: b
- [0, 1]: a
- [3, 1]: d
- [2, 0]: c
-
-then the output will be a `SparseTensor` of shape `[4, 5]` and
-`indices` / `values`:
-
- [0, 1]: a
- [0, 3]: b
- [2, 0]: c
- [3, 1]: d
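-
-The same example as a minimal sketch:
-
-```python
-st = tf.SparseTensor(indices=[[0, 3], [0, 1], [3, 1], [2, 0]],
-                     values=["b", "a", "d", "c"], dense_shape=[4, 5])
-ordered = tf.sparse_reorder(st)  # indices sorted row-major: a, b, c, d
-```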
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A `SparseTensor` with the same shape and non-empty values, but in
- canonical ordering.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
-
-- - -
-
-### `tf.sparse_reshape(sp_input, shape, name=None)` {#sparse_reshape}
-
-Reshapes a `SparseTensor` to represent values in a new dense shape.
-
-This operation has the same semantics as `reshape` on the represented dense
-tensor. The indices of non-empty values in `sp_input` are recomputed based
-on the new dense shape, and a new `SparseTensor` is returned containing the
-new indices and new shape. The order of non-empty values in `sp_input` is
-unchanged.
-
-If one component of `shape` is the special value -1, the size of that
-dimension is computed so that the total dense size remains constant. At
-most one component of `shape` can be -1. The number of dense elements
-implied by `shape` must be the same as the number of dense elements
-originally represented by `sp_input`.
-
-For example, if `sp_input` has shape `[2, 3, 6]` and `indices` / `values`:
-
- [0, 0, 0]: a
- [0, 0, 1]: b
- [0, 1, 0]: c
- [1, 0, 0]: d
- [1, 2, 3]: e
-
-and `shape` is `[9, -1]`, then the output will be a `SparseTensor` of
-shape `[9, 4]` and `indices` / `values`:
-
- [0, 0]: a
- [0, 1]: b
- [1, 2]: c
- [4, 2]: d
- [8, 1]: e
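-
-A minimal sketch of the reshape described above:
-
-```python
-st = tf.SparseTensor(indices=[[0, 0, 0], [1, 2, 3]], values=["a", "e"],
-                     dense_shape=[2, 3, 6])
-reshaped = tf.sparse_reshape(st, shape=[9, -1])  # dense_shape [9, 4]
-```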
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`shape`</b>: A 1-D (vector) int64 `Tensor` specifying the new dense shape of the
- represented `SparseTensor`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A `SparseTensor` with the same non-empty values but with indices calculated
- by the new dense shape.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
-
-- - -
-
-### `tf.sparse_split(keyword_required=KeywordRequired(), sp_input=None, num_split=None, axis=None, name=None, split_dim=None)` {#sparse_split}
-
-Split a `SparseTensor` into `num_split` tensors along `axis`.
-
-If `sp_input.dense_shape[axis]` is not an integer multiple of `num_split`,
-each of the first `shape[axis] % num_split` slices gets one extra element.
-For example, if `axis = 1` and `num_split = 2` and the
-input is:
-
- input_tensor = shape = [2, 7]
- [ a d e ]
- [b c ]
-
-Graphically the output tensors are:
-
- output_tensor[0] =
- [ a ]
- [b c ]
-
- output_tensor[1] =
- [ d e ]
- [ ]
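-
-The same example as a minimal sketch (note the keyword arguments):
-
-```python
-st = tf.SparseTensor(indices=[[0, 1], [0, 4], [0, 5], [1, 0], [1, 1]],
-                     values=["a", "d", "e", "b", "c"], dense_shape=[2, 7])
-left, right = tf.sparse_split(sp_input=st, num_split=2, axis=1)
-# left.dense_shape == [2, 4]; right.dense_shape == [2, 3]
-```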
-
-##### Args:
-
-
-* <b>`keyword_required`</b>: Python 2 stand-in for `*` (temporary, for the
-  argument reorder).
-* <b>`sp_input`</b>: The `SparseTensor` to split.
-* <b>`num_split`</b>: A Python integer. The number of ways to split.
-* <b>`axis`</b>: A 0-D `int32` `Tensor`. The dimension along which to split.
-* <b>`name`</b>: A name for the operation (optional).
-* <b>`split_dim`</b>: Deprecated old name for axis.
-
-##### Returns:
-
- `num_split` `SparseTensor` objects resulting from splitting `value`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-* <b>`ValueError`</b>: If the deprecated `split_dim` and `axis` are both non None.
-
-
-- - -
-
-### `tf.sparse_retain(sp_input, to_retain)` {#sparse_retain}
-
-Retains specified non-empty values within a `SparseTensor`.
-
-For example, if `sp_input` has shape `[4, 5]` and 4 non-empty string values:
-
- [0, 1]: a
- [0, 3]: b
- [2, 0]: c
- [3, 1]: d
-
-and `to_retain = [True, False, False, True]`, then the output will
-be a `SparseTensor` of shape `[4, 5]` with 2 non-empty values:
-
- [0, 1]: a
- [3, 1]: d
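-
-The same example as a minimal sketch:
-
-```python
-st = tf.SparseTensor(indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
-                     values=["a", "b", "c", "d"], dense_shape=[4, 5])
-kept = tf.sparse_retain(st, to_retain=[True, False, False, True])
-```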
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor` with `N` non-empty elements.
-* <b>`to_retain`</b>: A bool vector of length `N` with `M` true values.
-
-##### Returns:
-
- A `SparseTensor` with the same shape as the input and `M` non-empty
- elements corresponding to the true positions in `to_retain`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
-
-- - -
-
-### `tf.sparse_reset_shape(sp_input, new_shape=None)` {#sparse_reset_shape}
-
-Resets the shape of a `SparseTensor` with indices and values unchanged.
-
-If `new_shape` is None, returns a copy of `sp_input` with its shape reset
-to the tight bounding box of `sp_input`.
-
-If `new_shape` is provided, then it must be larger than or equal to the shape
-of `sp_input` in every dimension. When this condition is met, the returned
-SparseTensor has its shape reset to `new_shape` and its indices and values
-unchanged from those of `sp_input`.
-
-For example:
-
- Consider a `sp_input` with shape [2, 3, 5]:
-
- [0, 0, 1]: a
- [0, 1, 0]: b
- [0, 2, 2]: c
- [1, 0, 3]: d
-
- - It is an error to set `new_shape` as [3, 7] since this represents a
- rank-2 tensor while `sp_input` is rank-3. This is either a ValueError
- during graph construction (if both shapes are known) or an OpError during
- run time.
-
- - Setting `new_shape` as [2, 3, 6] will be fine as this shape is larger or
- equal in every dimension compared to the original shape [2, 3, 5].
-
-  - On the other hand, setting `new_shape` as [2, 3, 4] is also an error: the
-    third dimension is smaller than in the original shape [2, 3, 5] (and an
-    `InvalidArgumentError` will be raised).
-
- - If `new_shape` is None, the returned SparseTensor will have a shape
- [2, 3, 4], which is the tight bounding box of `sp_input`.
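-
-The cases above, as a minimal sketch:
-
-```python
-st = tf.SparseTensor(indices=[[0, 0, 1], [0, 1, 0], [0, 2, 2], [1, 0, 3]],
-                     values=["a", "b", "c", "d"], dense_shape=[2, 3, 5])
-tight = tf.sparse_reset_shape(st)                  # dense_shape [2, 3, 4]
-grown = tf.sparse_reset_shape(st, new_shape=[2, 3, 6])
-```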
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`new_shape`</b>: None or a vector representing the new shape for the returned
- `SparseTensor`.
-
-##### Returns:
-
-  A `SparseTensor` with indices and values unchanged from `sp_input`. Its
-  shape is `new_shape` if that is set. Otherwise it is the tight bounding
-  box of `sp_input`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-* <b>`ValueError`</b>: If `new_shape` represents a tensor with a different rank from
- that of `sp_input` (if shapes are known when graph is constructed).
-* <b>`OpError`</b>:
- - If `new_shape` has dimension sizes that are too small.
- - If shapes are not known during graph construction time, and during run
- time it is found out that the ranks do not match.
-
-
-- - -
-
-### `tf.sparse_fill_empty_rows(sp_input, default_value, name=None)` {#sparse_fill_empty_rows}
-
-Fills empty rows in the input 2-D `SparseTensor` with a default value.
-
-This op adds entries with the specified `default_value` at index
-`[row, 0]` for any row in the input that does not already have a value.
-
-For example, suppose `sp_input` has shape `[5, 6]` and non-empty values:
-
- [0, 1]: a
- [0, 3]: b
- [2, 0]: c
- [3, 1]: d
-
-Rows 1 and 4 are empty, so the output will be of shape `[5, 6]` with values:
-
- [0, 1]: a
- [0, 3]: b
- [1, 0]: default_value
- [2, 0]: c
- [3, 1]: d
- [4, 0]: default_value
-
-Note that the input may have empty columns at the end, with no effect on
-this op.
-
-The output `SparseTensor` will be in row-major order and will have the
-same shape as the input.
-
-This op also returns an indicator vector such that
-
- empty_row_indicator[i] = True iff row i was an empty row.
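-
-The example above, as a minimal sketch:
-
-```python
-st = tf.SparseTensor(indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
-                     values=["a", "b", "c", "d"], dense_shape=[5, 6])
-filled, empty_rows = tf.sparse_fill_empty_rows(st, default_value="z")
-# empty_rows evaluates to [False, True, False, False, True]
-```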
-
-##### Args:
-
-
-* <b>`sp_input`</b>: A `SparseTensor` with shape `[N, M]`.
-* <b>`default_value`</b>: The value to fill for empty rows, with the same type as
- `sp_input.`
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
-
-* <b>`sp_ordered_output`</b>: A `SparseTensor` with shape `[N, M]`, and with all empty
- rows filled in with `default_value`.
-* <b>`empty_row_indicator`</b>: A bool vector of length `N` indicating whether each
- input row was empty.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
-
-- - -
-
-### `tf.sparse_transpose(sp_input, perm=None, name=None)` {#sparse_transpose}
-
-Transposes a `SparseTensor`.
-
-The returned tensor's dimension i will correspond to the input dimension
-`perm[i]`. If `perm` is not given, it is set to (n-1, ..., 0), where n is
-the rank of the input tensor. Hence, by default, this operation performs a
-regular matrix transpose on 2-D input Tensors.
-
-For example, if `sp_input` has shape `[4, 5]` and `indices` / `values`:
-
- [0, 3]: b
- [0, 1]: a
- [3, 1]: d
- [2, 0]: c
-
-then the output will be a `SparseTensor` of shape `[5, 4]` and
-`indices` / `values`:
-
- [0, 2]: c
- [1, 0]: a
- [1, 3]: d
- [3, 0]: b
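-
-The same example as a minimal sketch:
-
-```python
-st = tf.SparseTensor(indices=[[0, 3], [0, 1], [3, 1], [2, 0]],
-                     values=["b", "a", "d", "c"], dense_shape=[4, 5])
-transposed = tf.sparse_transpose(st)  # dense_shape [5, 4]
-```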
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The input `SparseTensor`.
-* <b>`perm`</b>: A permutation of the dimensions of `sp_input`.
-* <b>`name`</b>: A name prefix for the returned tensors (optional)
-
-##### Returns:
-
- A transposed `SparseTensor`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sp_input` is not a `SparseTensor`.
-
-
-- - -
-
-### `tf.sparse_reduce_sum(sp_input, axis=None, keep_dims=False, reduction_axes=None)` {#sparse_reduce_sum}
-
-Computes the sum of elements across dimensions of a SparseTensor.
-
-This Op takes a SparseTensor and is the sparse counterpart to
-`tf.reduce_sum()`. In particular, this Op also returns a dense `Tensor`
-instead of a sparse one.
-
-Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless
-`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in
-`reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained
-with length 1.
-
-If `reduction_axes` has no entries, all dimensions are reduced, and a tensor
-with a single element is returned. Additionally, the axes can be negative,
-similar to the indexing rules in Python.
-
-For example:
-
-```python
-# 'x' represents [[1, ?, 1]
-# [?, 1, ?]]
-# where ? is implicitly-zero.
-tf.sparse_reduce_sum(x) ==> 3
-tf.sparse_reduce_sum(x, 0) ==> [1, 1, 1]
-tf.sparse_reduce_sum(x, 1) ==> [2, 1] # Can also use -1 as the axis.
-tf.sparse_reduce_sum(x, 1, keep_dims=True) ==> [[2], [1]]
-tf.sparse_reduce_sum(x, [0, 1]) ==> 3
-```
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The SparseTensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce; list or scalar. If `None` (the
- default), reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retain reduced dimensions with length 1.
-* <b>`reduction_axes`</b>: Deprecated name of axis.
-
-##### Returns:
-
- The reduced Tensor.
-
-
-- - -
-
-### `tf.sparse_reduce_sum_sparse(sp_input, axis=None, keep_dims=False, reduction_axes=None)` {#sparse_reduce_sum_sparse}
-
-Computes the sum of elements across dimensions of a SparseTensor.
-
-This Op takes a SparseTensor and is the sparse counterpart to
-`tf.reduce_sum()`. In contrast to SparseReduceSum, this Op returns a
-SparseTensor.
-
-Reduces `sp_input` along the dimensions given in `reduction_axes`. Unless
-`keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in
-`reduction_axes`. If `keep_dims` is true, the reduced dimensions are retained
-with length 1.
-
-If `reduction_axes` has no entries, all dimensions are reduced, and a tensor
-with a single element is returned. Additionally, the axes can be negative,
-which are interpreted according to the indexing rules in Python.
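-
-For example, a minimal sketch using the same `x` as in `sparse_reduce_sum`:
-
-```python
-x = tf.SparseTensor(indices=[[0, 0], [0, 2], [1, 1]],
-                    values=[1, 1, 1], dense_shape=[2, 3])
-row_sums = tf.sparse_reduce_sum_sparse(x, axis=1)  # SparseTensor with [2, 1]
-```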
-
-##### Args:
-
-
-* <b>`sp_input`</b>: The SparseTensor to reduce. Should have numeric type.
-* <b>`axis`</b>: The dimensions to reduce; list or scalar. If `None` (the
- default), reduces all dimensions.
-* <b>`keep_dims`</b>: If true, retain reduced dimensions with length 1.
-* <b>`reduction_axes`</b>: Deprecated name of axis
-
-##### Returns:
-
- The reduced SparseTensor.
-
-
-- - -
-
-### `tf.sparse_add(a, b, thresh=0)` {#sparse_add}
-
-Adds two tensors, at least one of each is a `SparseTensor`.
-
-If one `SparseTensor` and one `Tensor` are passed in, returns a `Tensor`. If
-both arguments are `SparseTensor`s, this returns a `SparseTensor`. The order
-of arguments does not matter. Use vanilla `tf.add()` for adding two dense
-`Tensor`s.
-
-The indices of any input `SparseTensor` are assumed ordered in standard
-lexicographic order. If this is not the case, before this step run
-`SparseReorder` to restore index ordering.
-
-If both arguments are sparse, we perform "clipping" as follows. By default,
-if two values sum to zero at some index, the output `SparseTensor` would still
-include that particular location in its index, storing a zero in the
-corresponding value slot. To override this, callers can specify `thresh`,
-indicating that if the sum has a magnitude strictly smaller than `thresh`, its
-corresponding value and index would then not be included. In particular,
-`thresh == 0.0` (default) means everything is kept, and actual thresholding
-happens only for a positive `thresh`.
-
-For example, suppose the logical sum of two sparse operands is (densified):
-
- [ 2]
- [.1 0]
- [ 6 -.2]
-
-Then,
-
- * `thresh == 0` (the default): all 5 index/value pairs will be returned.
- * `thresh == 0.11`: only .1 and 0 will vanish, and the remaining three
- index/value pairs will be returned.
- * `thresh == 0.21`: .1, 0, and -.2 will vanish.
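-
-A minimal sketch reproducing the densified sum above:
-
-```python
-a = tf.SparseTensor(indices=[[0, 1], [1, 0], [1, 1], [2, 0]],
-                    values=[2.0, 0.2, 1.0, 6.0], dense_shape=[3, 2])
-b = tf.SparseTensor(indices=[[1, 0], [1, 1], [2, 1]],
-                    values=[-0.1, -1.0, -0.2], dense_shape=[3, 2])
-summed = tf.sparse_add(a, b, thresh=0.21)  # keeps only the 2 and the 6
-```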
-
-##### Args:
-
-
-* <b>`a`</b>: The first operand; `SparseTensor` or `Tensor`.
-* <b>`b`</b>: The second operand; `SparseTensor` or `Tensor`. At least one operand
- must be sparse.
-* <b>`thresh`</b>: A 0-D `Tensor`. The magnitude threshold that determines if an
- output value/index pair takes space. Its dtype should match that of the
- values if they are real; if the latter are complex64/complex128, then the
- dtype should be float32/float64, correspondingly.
-
-##### Returns:
-
- A `SparseTensor` or a `Tensor`, representing the sum.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If both `a` and `b` are `Tensor`s. Use `tf.add()` instead.
-
-
-- - -
-
-### `tf.sparse_softmax(sp_input, name=None)` {#sparse_softmax}
-
-Applies softmax to a batched N-D `SparseTensor`.
-
-The inputs represent an N-D SparseTensor with logical shape `[..., B, C]`
-(where `N >= 2`), and with indices sorted in the canonical lexicographic
-order.
-
-This op is equivalent to applying the normal `tf.nn.softmax()` to each
-innermost logical submatrix with shape `[B, C]`, but with the catch that *the
-implicitly zero elements do not participate*. Specifically, the algorithm is
-equivalent to:
-
- (1) Applies `tf.nn.softmax()` to a densified view of each innermost
- submatrix with shape `[B, C]`, along the size-C dimension;
- (2) Masks out the original implicitly-zero locations;
- (3) Renormalizes the remaining elements.
-
-Hence, the `SparseTensor` result has exactly the same non-zero indices and
-shape.
-
-Example:
-
-```python
-# First batch:
-# [? e.]
-# [1. ? ]
-# Second batch:
-# [e ? ]
-# [e e ]
-shape = [2, 2, 2] # 3-D SparseTensor
-values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])
-indices = np.vstack(np.where(values)).astype(np.int64).T
-
-result = tf.sparse_softmax(tf.SparseTensor(indices, values, shape))
-# ...returning a 3-D SparseTensor, equivalent to:
-# [? 1.] [1 ?]
-# [1. ? ] and [.5 .5]
-# where ? means implicitly zero.
-```
-
-##### Args:
-
-
-* <b>`sp_input`</b>: N-D `SparseTensor`, where `N >= 2`.
-* <b>`name`</b>: optional name of the operation.
-
-##### Returns:
-
-
-* <b>`output`</b>: N-D `SparseTensor` representing the results.
-
-
-- - -
-
-### `tf.sparse_tensor_dense_matmul(sp_a, b, adjoint_a=False, adjoint_b=False, name=None)` {#sparse_tensor_dense_matmul}
-
-Multiply SparseTensor (of rank 2) "A" by dense matrix "B".
-
-No validity checking is performed on the indices of A. However, the following
-input format is recommended for optimal behavior:
-
-if adjoint_a == false:
- A should be sorted in lexicographically increasing order. Use
- sparse_reorder if you're not sure.
-if adjoint_a == true:
- A should be sorted in order of increasing dimension 1 (i.e., "column major"
- order instead of "row major" order).
-
-Deciding when to use sparse_tensor_dense_matmul vs. matmul(sp_a=True):
-
-There are a number of questions to ask in the decision process, including:
-
-* Will the SparseTensor A fit in memory if densified?
-* Is the column count of the product large (>> 1)?
-* Is the density of A larger than approximately 15%?
-
-If the answer to several of these questions is yes, consider
-converting the `SparseTensor` to a dense one and using `tf.matmul` with
-`sp_a=True`.
-
-This operation tends to perform well when A is very sparse, when the column
-size of the product is small (e.g. matrix-vector multiplication), and when
-`sp_a.dense_shape` takes on large values.
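-
-A minimal usage sketch:
-
-```python
-sp_a = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1.0, 2.0],
-                       dense_shape=[2, 3])
-b = tf.ones([3, 4], dtype=tf.float32)
-product = tf.sparse_tensor_dense_matmul(sp_a, b)  # dense Tensor, shape [2, 4]
-```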
-
-Below is a rough speed comparison between sparse_tensor_dense_matmul,
-labelled 'sparse', and matmul(sp_a=True), labelled 'dense'. For purposes of
-the comparison, the time spent converting from a SparseTensor to a dense
-Tensor is not included, so it is overly conservative with respect to
-the time ratio.
-
-Benchmark system:
-CPU: Intel Ivybridge with HyperThreading (6 cores) dL1:32KB dL2:256KB dL3:12MB
-GPU: NVidia Tesla k40c
-
-Compiled with:
-`-c opt --config=cuda --copt=-mavx`
-
-```
-tensorflow/python/sparse_tensor_dense_matmul_op_test --benchmarks
-A sparse [m, k] with % nonzero values between 1% and 80%
-B dense [k, n]
-
-% nnz n gpu m k dt(dense) dt(sparse) dt(sparse)/dt(dense)
-0.01 1 True 100 100 0.000221166 0.00010154 0.459112
-0.01 1 True 100 1000 0.00033858 0.000109275 0.322745
-0.01 1 True 1000 100 0.000310557 9.85661e-05 0.317385
-0.01 1 True 1000 1000 0.0008721 0.000100875 0.115669
-0.01 1 False 100 100 0.000208085 0.000107603 0.51711
-0.01 1 False 100 1000 0.000327112 9.51118e-05 0.290762
-0.01 1 False 1000 100 0.000308222 0.00010345 0.335635
-0.01 1 False 1000 1000 0.000865721 0.000101397 0.117124
-0.01 10 True 100 100 0.000218522 0.000105537 0.482958
-0.01 10 True 100 1000 0.000340882 0.000111641 0.327506
-0.01 10 True 1000 100 0.000315472 0.000117376 0.372064
-0.01 10 True 1000 1000 0.000905493 0.000123263 0.136128
-0.01 10 False 100 100 0.000221529 9.82571e-05 0.44354
-0.01 10 False 100 1000 0.000330552 0.000112615 0.340687
-0.01 10 False 1000 100 0.000341277 0.000114097 0.334324
-0.01 10 False 1000 1000 0.000819944 0.000120982 0.147549
-0.01 25 True 100 100 0.000207806 0.000105977 0.509981
-0.01 25 True 100 1000 0.000322879 0.00012921 0.400181
-0.01 25 True 1000 100 0.00038262 0.00014158 0.370035
-0.01 25 True 1000 1000 0.000865438 0.000202083 0.233504
-0.01 25 False 100 100 0.000209401 0.000104696 0.499979
-0.01 25 False 100 1000 0.000321161 0.000130737 0.407076
-0.01 25 False 1000 100 0.000377012 0.000136801 0.362856
-0.01 25 False 1000 1000 0.000861125 0.00020272 0.235413
-0.2 1 True 100 100 0.000206952 9.69219e-05 0.46833
-0.2 1 True 100 1000 0.000348674 0.000147475 0.422959
-0.2 1 True 1000 100 0.000336908 0.00010122 0.300439
-0.2 1 True 1000 1000 0.001022 0.000203274 0.198898
-0.2 1 False 100 100 0.000207532 9.5412e-05 0.459746
-0.2 1 False 100 1000 0.000356127 0.000146824 0.41228
-0.2 1 False 1000 100 0.000322664 0.000100918 0.312764
-0.2 1 False 1000 1000 0.000998987 0.000203442 0.203648
-0.2 10 True 100 100 0.000211692 0.000109903 0.519165
-0.2 10 True 100 1000 0.000372819 0.000164321 0.440753
-0.2 10 True 1000 100 0.000338651 0.000144806 0.427596
-0.2 10 True 1000 1000 0.00108312 0.000758876 0.70064
-0.2 10 False 100 100 0.000215727 0.000110502 0.512231
-0.2 10 False 100 1000 0.000375419 0.0001613 0.429653
-0.2 10 False 1000 100 0.000336999 0.000145628 0.432132
-0.2 10 False 1000 1000 0.00110502 0.000762043 0.689618
-0.2 25 True 100 100 0.000218705 0.000129913 0.594009
-0.2 25 True 100 1000 0.000394794 0.00029428 0.745402
-0.2 25 True 1000 100 0.000404483 0.0002693 0.665788
-0.2 25 True 1000 1000 0.0012002 0.00194494 1.62052
-0.2 25 False 100 100 0.000221494 0.0001306 0.589632
-0.2 25 False 100 1000 0.000396436 0.000297204 0.74969
-0.2 25 False 1000 100 0.000409346 0.000270068 0.659754
-0.2 25 False 1000 1000 0.00121051 0.00193737 1.60046
-0.5 1 True 100 100 0.000214981 9.82111e-05 0.456836
-0.5 1 True 100 1000 0.000415328 0.000223073 0.537101
-0.5 1 True 1000 100 0.000358324 0.00011269 0.314492
-0.5 1 True 1000 1000 0.00137612 0.000437401 0.317851
-0.5 1 False 100 100 0.000224196 0.000101423 0.452386
-0.5 1 False 100 1000 0.000400987 0.000223286 0.556841
-0.5 1 False 1000 100 0.000368825 0.00011224 0.304318
-0.5 1 False 1000 1000 0.00136036 0.000429369 0.31563
-0.5 10 True 100 100 0.000222125 0.000112308 0.505608
-0.5 10 True 100 1000 0.000461088 0.00032357 0.701753
-0.5 10 True 1000 100 0.000394624 0.000225497 0.571422
-0.5 10 True 1000 1000 0.00158027 0.00190898 1.20801
-0.5 10 False 100 100 0.000232083 0.000114978 0.495418
-0.5 10 False 100 1000 0.000454574 0.000324632 0.714146
-0.5 10 False 1000 100 0.000379097 0.000227768 0.600817
-0.5 10 False 1000 1000 0.00160292 0.00190168 1.18638
-0.5 25 True 100 100 0.00023429 0.000151703 0.647501
-0.5 25 True 100 1000 0.000497462 0.000598873 1.20386
-0.5 25 True 1000 100 0.000460778 0.000557038 1.20891
-0.5 25 True 1000 1000 0.00170036 0.00467336 2.74845
-0.5 25 False 100 100 0.000228981 0.000155334 0.678371
-0.5 25 False 100 1000 0.000496139 0.000620789 1.25124
-0.5 25 False 1000 100 0.00045473 0.000551528 1.21287
-0.5 25 False 1000 1000 0.00171793 0.00467152 2.71927
-0.8 1 True 100 100 0.000222037 0.000105301 0.47425
-0.8 1 True 100 1000 0.000410804 0.000329327 0.801664
-0.8 1 True 1000 100 0.000349735 0.000131225 0.375212
-0.8 1 True 1000 1000 0.00139219 0.000677065 0.48633
-0.8 1 False 100 100 0.000214079 0.000107486 0.502085
-0.8 1 False 100 1000 0.000413746 0.000323244 0.781261
-0.8 1 False 1000 100 0.000348983 0.000131983 0.378193
-0.8 1 False 1000 1000 0.00136296 0.000685325 0.50282
-0.8 10 True 100 100 0.000229159 0.00011825 0.516017
-0.8 10 True 100 1000 0.000498845 0.000532618 1.0677
-0.8 10 True 1000 100 0.000383126 0.00029935 0.781336
-0.8 10 True 1000 1000 0.00162866 0.00307312 1.88689
-0.8 10 False 100 100 0.000230783 0.000124958 0.541452
-0.8 10 False 100 1000 0.000493393 0.000550654 1.11606
-0.8 10 False 1000 100 0.000377167 0.000298581 0.791642
-0.8 10 False 1000 1000 0.00165795 0.00305103 1.84024
-0.8 25 True 100 100 0.000233496 0.000175241 0.75051
-0.8 25 True 100 1000 0.00055654 0.00102658 1.84458
-0.8 25 True 1000 100 0.000463814 0.000783267 1.68875
-0.8 25 True 1000 1000 0.00186905 0.00755344 4.04132
-0.8 25 False 100 100 0.000240243 0.000175047 0.728625
-0.8 25 False 100 1000 0.000578102 0.00104499 1.80763
-0.8 25 False 1000 100 0.000485113 0.000776849 1.60138
-0.8 25 False 1000 1000 0.00211448 0.00752736 3.55992
-```
-
-##### Args:
-
-
-* <b>`sp_a`</b>: SparseTensor A, of rank 2.
-* <b>`b`</b>: A dense matrix with the same dtype as `sp_a`.
-* <b>`adjoint_a`</b>: Use the adjoint of A in the matrix multiply. If A is complex,
- this is transpose(conj(A)). Otherwise it's transpose(A).
-* <b>`adjoint_b`</b>: Use the adjoint of B in the matrix multiply. If B is complex,
- this is transpose(conj(B)). Otherwise it's transpose(B).
-* <b>`name`</b>: A name prefix for the returned tensors (optional).
-
-##### Returns:
-
- A dense matrix (pseudo-code in dense np.matrix notation):
- A = A.H if adjoint_a else A
- B = B.H if adjoint_b else B
- return A*B
-
-
-- - -
-
-### `tf.sparse_maximum(sp_a, sp_b, name=None)` {#sparse_maximum}
-
-Returns the element-wise max of two SparseTensors.
-
-Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
-Example:
-
-```python
-sp_zero = tf.SparseTensor([[0]], [0], [7])
-sp_one = tf.SparseTensor([[1]], [1], [7])
-res = tf.sparse_maximum(sp_zero, sp_one).eval()
-# "res" should be equal to SparseTensor([[0], [1]], [0, 1], [7]).
-```
-
-##### Args:
-
-
-* <b>`sp_a`</b>: a `SparseTensor` operand whose dtype is real, and indices
- lexicographically ordered.
-* <b>`sp_b`</b>: the other `SparseTensor` operand with the same requirements (and the
- same shape).
-* <b>`name`</b>: optional name of the operation.
-
-##### Returns:
-
-
-* <b>`output`</b>: the output SparseTensor.
-
-
-- - -
-
-### `tf.sparse_minimum(sp_a, sp_b, name=None)` {#sparse_minimum}
-
-Returns the element-wise min of two SparseTensors.
-
-Assumes the two SparseTensors have the same shape, i.e., no broadcasting.
-Example:
-
-```python
-sp_zero = tf.SparseTensor([[0]], [0], [7])
-sp_one = tf.SparseTensor([[1]], [1], [7])
-res = tf.sparse_minimum(sp_zero, sp_one).eval()
-# "res" should be equal to SparseTensor([[0], [1]], [0, 0], [7]).
-```
-
-##### Args:
-
-
-* <b>`sp_a`</b>: a `SparseTensor` operand whose dtype is real, and indices
- lexicographically ordered.
-* <b>`sp_b`</b>: the other `SparseTensor` operand with the same requirements (and the
- same shape).
-* <b>`name`</b>: optional name of the operation.
-
-##### Returns:
-
-
-* <b>`output`</b>: the output SparseTensor.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/state_ops.md b/tensorflow/g3doc/api_docs/python/state_ops.md
deleted file mode 100644
index 5477beda8a..0000000000
--- a/tensorflow/g3doc/api_docs/python/state_ops.md
+++ /dev/null
@@ -1,3657 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Variables
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Variables. See the @{python/state_ops} guide.
-
-- - -
-
-### `class tf.Variable` {#Variable}
-
-See the [Variables How To](../../how_tos/variables/index.md) for a high
-level overview.
-
-A variable maintains state in the graph across calls to `run()`. You add a
-variable to the graph by constructing an instance of the class `Variable`.
-
-The `Variable()` constructor requires an initial value for the variable,
-which can be a `Tensor` of any type and shape. The initial value defines the
-type and shape of the variable. After construction, the type and shape of
-the variable are fixed. The value can be changed using one of the assign
-methods.
-
-If you want to change the shape of a variable later you have to use an
-`assign` Op with `validate_shape=False`.
-
-Just like any `Tensor`, variables created with `Variable()` can be used as
-inputs for other Ops in the graph. Additionally, all the operators
-overloaded for the `Tensor` class are carried over to variables, so you can
-also add nodes to the graph by just doing arithmetic on variables.
-
-```python
-import tensorflow as tf
-
-# Create a variable.
-w = tf.Variable(<initial-value>, name=<optional-name>)
-
-# Use the variable in the graph like any Tensor.
-y = tf.matmul(w, ...another variable or tensor...)
-
-# The overloaded operators are available too.
-z = tf.sigmoid(w + y)
-
-# Assign a new value to the variable with `assign()` or a related method.
-w.assign(w + 1.0)
-w.assign_add(1.0)
-```
-
-When you launch the graph, variables have to be explicitly initialized before
-you can run Ops that use their value. You can initialize a variable by
-running its *initializer op*, restoring the variable from a save file, or
-simply running an `assign` Op that assigns a value to the variable. In fact,
-the variable *initializer op* is just an `assign` Op that assigns the
-variable's initial value to the variable itself.
-
-```python
-# Launch the graph in a session.
-with tf.Session() as sess:
- # Run the variable initializer.
- sess.run(w.initializer)
- # ...you now can run ops that use the value of 'w'...
-```
-
-The most common initialization pattern is to use the convenience function
-`global_variables_initializer()` to add an Op to the graph that initializes
-all the variables. You then run that Op after launching the graph.
-
-```python
-# Add an Op to initialize global variables.
-init_op = tf.global_variables_initializer()
-
-# Launch the graph in a session.
-with tf.Session() as sess:
- # Run the Op that initializes global variables.
- sess.run(init_op)
- # ...you can now run any Op that uses variable values...
-```
-
-If you need to create a variable with an initial value dependent on another
-variable, use the other variable's `initialized_value()`. This ensures that
-variables are initialized in the right order.
-
-All variables are automatically collected in the graph where they are
-created. By default, the constructor adds the new variable to the graph
-collection `GraphKeys.GLOBAL_VARIABLES`. The convenience function
-`global_variables()` returns the contents of that collection.
-
-When building a machine learning model it is often convenient to distinguish
-between variables holding the trainable model parameters and other variables
-such as a `global step` variable used to count training steps. To make this
-easier, the variable constructor supports a `trainable=<bool>` parameter. If
-`True`, the new variable is also added to the graph collection
-`GraphKeys.TRAINABLE_VARIABLES`. The convenience function
-`trainable_variables()` returns the contents of this collection. The
-various `Optimizer` classes use this collection as the default list of
-variables to optimize.
-
-
-Creating a variable.
-
-- - -
-
-#### `tf.Variable.__init__(initial_value=None, trainable=True, collections=None, validate_shape=True, caching_device=None, name=None, variable_def=None, dtype=None, expected_shape=None, import_scope=None)` {#Variable.__init__}
-
-Creates a new variable with value `initial_value`.
-
-The new variable is added to the graph collections listed in `collections`,
-which defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
-
-If `trainable` is `True` the variable is also added to the graph collection
-`GraphKeys.TRAINABLE_VARIABLES`.
-
-This constructor creates both a `variable` Op and an `assign` Op to set the
-variable to its initial value.
-
-##### Args:
-
-
-* <b>`initial_value`</b>: A `Tensor`, or Python object convertible to a `Tensor`,
- which is the initial value for the Variable. The initial value must have
- a shape specified unless `validate_shape` is set to False. Can also be a
- callable with no argument that returns the initial value when called. In
- that case, `dtype` must be specified. (Note that initializer functions
- from init_ops.py must first be bound to a shape before being used here.)
-* <b>`trainable`</b>: If `True`, the default, also adds the variable to the graph
- collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
- the default list of variables to use by the `Optimizer` classes.
-* <b>`collections`</b>: List of graph collections keys. The new variable is added to
- these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
-* <b>`validate_shape`</b>: If `False`, allows the variable to be initialized with a
- value of unknown shape. If `True`, the default, the shape of
- `initial_value` must be known.
-* <b>`caching_device`</b>: Optional device string describing where the Variable
- should be cached for reading. Defaults to the Variable's device.
- If not `None`, caches on another device. Typical use is to cache
- on the device where the Ops using the Variable reside, to deduplicate
- copying through `Switch` and other conditional statements.
-* <b>`name`</b>: Optional name for the variable. Defaults to `'Variable'` and gets
- uniquified automatically.
-* <b>`variable_def`</b>: `VariableDef` protocol buffer. If not `None`, recreates
- the Variable object with its contents. `variable_def` and the other
- arguments are mutually exclusive.
-* <b>`dtype`</b>: If set, initial_value will be converted to the given type.
- If `None`, either the datatype will be kept (if `initial_value` is
- a Tensor), or `convert_to_tensor` will decide.
-* <b>`expected_shape`</b>: A TensorShape. If set, initial_value is expected
- to have this shape.
-* <b>`import_scope`</b>: Optional `string`. Name scope to add to the
- `Variable.` Only used when initializing from protocol buffer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `variable_def` and initial_value are specified.
-* <b>`ValueError`</b>: If the initial value is not specified, or does not have a
- shape and `validate_shape` is `True`.
-
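-For example, a small sketch of the callable form (the shape and name here are
-arbitrary):
-
-```python
-import tensorflow as tf
-
-# A zero-argument callable as the initial value; `dtype` must then be given.
-v = tf.Variable(lambda: tf.random_normal([2, 3]), dtype=tf.float32, name='v')
-```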
-
-- - -
-
-#### `tf.Variable.initialized_value()` {#Variable.initialized_value}
-
-Returns the value of the initialized variable.
-
-You should use this instead of the variable itself to initialize another
-variable with a value that depends on the value of this variable.
-
-Beware of using initialized_value except during initialization:
-initialized_value causes the Variable's initializer op to be run, so running
-this op resets the variable to the initial value.
-
-```python
-# Initialize 'v' with a random tensor.
-v = tf.Variable(tf.truncated_normal([10, 40]))
-# Use `initialized_value` to guarantee that `v` has been
-# initialized before its value is used to initialize `w`.
-# The random values are picked only once.
-w = tf.Variable(v.initialized_value() * 2.0)
-```
-
-##### Returns:
-
- A `Tensor` holding the value of this variable after its initializer
- has run.
-
-
-
-Changing a variable value.
-
-- - -
-
-#### `tf.Variable.assign(value, use_locking=False)` {#Variable.assign}
-
-Assigns a new value to the variable.
-
-This is essentially a shortcut for `assign(self, value)`.
-
-##### Args:
-
-
-* <b>`value`</b>: A `Tensor`. The new value for this variable.
-* <b>`use_locking`</b>: If `True`, use locking during the assignment.
-
-##### Returns:
-
- A `Tensor` that will hold the new value of this variable after
- the assignment has completed.
-
-
-- - -
-
-#### `tf.Variable.assign_add(delta, use_locking=False)` {#Variable.assign_add}
-
-Adds a value to this variable.
-
-This is essentially a shortcut for `assign_add(self, delta)`.
-
-##### Args:
-
-
-* <b>`delta`</b>: A `Tensor`. The value to add to this variable.
-* <b>`use_locking`</b>: If `True`, use locking during the operation.
-
-##### Returns:
-
- A `Tensor` that will hold the new value of this variable after
- the addition has completed.
-
-
-- - -
-
-#### `tf.Variable.assign_sub(delta, use_locking=False)` {#Variable.assign_sub}
-
-Subtracts a value from this variable.
-
-This is essentially a shortcut for `assign_sub(self, delta)`.
-
-##### Args:
-
-
-* <b>`delta`</b>: A `Tensor`. The value to subtract from this variable.
-* <b>`use_locking`</b>: If `True`, use locking during the operation.
-
-##### Returns:
-
- A `Tensor` that will hold the new value of this variable after
- the subtraction has completed.
-
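-For example, a small sketch combining `assign_add` and `assign_sub`:
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(10.0)
-inc = v.assign_add(2.0)  # v <- v + 2
-dec = v.assign_sub(5.0)  # v <- v - 5
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(inc))  # 12.0
-  print(sess.run(dec))  # 7.0
-```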
-
-- - -
-
-#### `tf.Variable.scatter_sub(sparse_delta, use_locking=False)` {#Variable.scatter_sub}
-
-Subtracts `IndexedSlices` from this variable.
-
-This is essentially a shortcut for `scatter_sub(self, sparse_delta.indices,
-sparse_delta.values)`.
-
-##### Args:
-
-
-* <b>`sparse_delta`</b>: `IndexedSlices` to be subtracted from this variable.
-* <b>`use_locking`</b>: If `True`, use locking during the operation.
-
-##### Returns:
-
- A `Tensor` that will hold the new value of this variable after
- the scattered subtraction has completed.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `sparse_delta` is not an `IndexedSlices`.
-
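-For example, a small sketch (the values and indices are arbitrary):
-
-```python
-import tensorflow as tf
-
-v = tf.Variable([1.0, 2.0, 3.0, 4.0])
-delta = tf.IndexedSlices(values=tf.constant([1.0, 1.0]),
-                         indices=tf.constant([0, 2]))
-op = v.scatter_sub(delta)  # Subtracts 1.0 at indices 0 and 2.
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(op))  # [0.0, 2.0, 2.0, 4.0]
-```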
-
-- - -
-
-#### `tf.Variable.count_up_to(limit)` {#Variable.count_up_to}
-
-Increments this variable until it reaches `limit`.
-
-When that Op is run it tries to increment the variable by `1`. If
-incrementing the variable would bring it above `limit` then the Op raises
-the exception `OutOfRangeError`.
-
-If no error is raised, the Op outputs the value of the variable before
-the increment.
-
-This is essentially a shortcut for `count_up_to(self, limit)`.
-
-##### Args:
-
-
-* <b>`limit`</b>: value at which incrementing the variable raises an error.
-
-##### Returns:
-
- A `Tensor` that will hold the variable value before the increment. If no
- other Op modifies this variable, the values produced will all be
- distinct.
-
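-For example, a small sketch:
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(0, name='counter')
-count = v.count_up_to(3)
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  for _ in range(3):
-    print(sess.run(count))  # Prints 0, then 1, then 2.
-  # A fourth run would raise `OutOfRangeError`, since `v` has reached 3.
-```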
-
-
-- - -
-
-#### `tf.Variable.eval(session=None)` {#Variable.eval}
-
-In a session, computes and returns the value of this variable.
-
-This is not a graph construction method, it does not add ops to the graph.
-
-This convenience method requires a session where the graph containing this
-variable has been launched. If no session is passed, the default session is
-used. See the [Session class](../../api_docs/python/client.md#Session) for
-more information on launching a graph and on sessions.
-
-```python
-v = tf.Variable([1, 2])
-init = tf.global_variables_initializer()
-
-with tf.Session() as sess:
- sess.run(init)
- # Usage passing the session explicitly.
- print(v.eval(sess))
- # Usage with the default session. The 'with' block
- # above makes 'sess' the default session.
- print(v.eval())
-```
-
-##### Args:
-
-
-* <b>`session`</b>: The session to use to evaluate this variable. If
- none, the default session is used.
-
-##### Returns:
-
- A numpy `ndarray` with a copy of the value of this variable.
-
-
-
-Properties.
-
-- - -
-
-#### `tf.Variable.name` {#Variable.name}
-
-The name of this variable.
-
-
-- - -
-
-#### `tf.Variable.dtype` {#Variable.dtype}
-
-The `DType` of this variable.
-
-
-- - -
-
-#### `tf.Variable.get_shape()` {#Variable.get_shape}
-
-The `TensorShape` of this variable.
-
-##### Returns:
-
- A `TensorShape`.
-
-
-- - -
-
-#### `tf.Variable.device` {#Variable.device}
-
-The device of this variable.
-
-
-- - -
-
-#### `tf.Variable.initializer` {#Variable.initializer}
-
-The initializer operation for this variable.
-
-
-- - -
-
-#### `tf.Variable.graph` {#Variable.graph}
-
-The `Graph` of this variable.
-
-
-- - -
-
-#### `tf.Variable.op` {#Variable.op}
-
-The `Operation` of this variable.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.Variable.__abs__(a, *args)` {#Variable.__abs__}
-
-Computes the absolute value of a tensor.
-
-Given a tensor of real numbers `x`, this operation returns a tensor
-containing the absolute value of each element in `x`. For example, if x is
-an input element and y is an output element, this operation computes
-\\(y = |x|\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` or `SparseTensor` of type `float32`, `float64`, `int32`, or
- `int64`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` or `SparseTensor` the same size and type as `x` with absolute
- values.
-
-
-- - -
-
-#### `tf.Variable.__add__(a, *args)` {#Variable.__add__}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__and__(a, *args)` {#Variable.__and__}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__div__(a, *args)` {#Variable.__div__}
-
-Divides two values using Python 2 semantics. Used for `Tensor.__div__`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-#### `tf.Variable.__floordiv__(a, *args)` {#Variable.__floordiv__}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
-
-- - -
-
-#### `tf.Variable.__ge__(a, *args)` {#Variable.__ge__}
-
-Returns the truth value of (x >= y) element-wise.
-
-*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__getitem__(var, slice_spec)` {#Variable.__getitem__}
-
-Creates a slice helper object given a variable.
-
-This allows creating a sub-tensor from part of the current contents
-of a variable.
-See
-[`Tensor.__getitem__`](../../api_docs/python/framework.md#Tensor.__getitem__)
-for detailed examples of slicing.
-
-This function in addition also allows assignment to a sliced range.
-This is similar to `__setitem__` functionality in Python. However,
-the syntax is different so that the user can capture the assignment
-operation for grouping or passing to `sess.run()`.
-For example,
-
-```python
-import tensorflow as tf
-A = tf.Variable([[1,2,3], [4,5,6], [7,8,9]], dtype=tf.float32)
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(A[:2, :2]))  # => [[1,2], [4,5]]
-
-  op = A[:2,:2].assign(22. * tf.ones((2, 2)))
-  print(sess.run(op))  # => [[22, 22, 3], [22, 22, 6], [7,8,9]]
-```
-
-Note that assignments currently do not support NumPy broadcasting
-semantics.
-
-##### Args:
-
-
-* <b>`var`</b>: An `ops.Variable` object.
-* <b>`slice_spec`</b>: The arguments to `Tensor.__getitem__`.
-
-##### Returns:
-
- The appropriate slice of "tensor", based on "slice_spec", as an operator.
- The operator also has an `assign()` method that can be used to generate
- an assignment operator.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If a slice range has negative size.
-* <b>`TypeError`</b>: If the slice indices aren't int, slice, or Ellipsis.
-
-
-- - -
-
-#### `tf.Variable.__gt__(a, *args)` {#Variable.__gt__}
-
-Returns the truth value of (x > y) element-wise.
-
-*NOTE*: `Greater` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__invert__(a, *args)` {#Variable.__invert__}
-
-Returns the truth value of NOT x element-wise.
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__iter__()` {#Variable.__iter__}
-
-Dummy method to prevent iteration. Do not call.
-
-NOTE(mrry): If we register __getitem__ as an overloaded operator,
-Python will valiantly attempt to iterate over the variable's Tensor from 0
-to infinity. Declaring this method prevents this unintended behavior.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: when invoked.
-
-
-- - -
-
-#### `tf.Variable.__le__(a, *args)` {#Variable.__le__}
-
-Returns the truth value of (x <= y) element-wise.
-
-*NOTE*: `LessEqual` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__lt__(a, *args)` {#Variable.__lt__}
-
-Returns the truth value of (x < y) element-wise.
-
-*NOTE*: `Less` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `uint8`, `int16`, `int8`, `uint16`, `half`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__mod__(a, *args)` {#Variable.__mod__}
-
-Returns the element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
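-For example, a small sketch of the flooring semantics:
-
-```python
-import tensorflow as tf
-
-x = tf.constant([-7, 7])
-y = tf.constant([2, -2])
-
-with tf.Session() as sess:
-  print(sess.run(x % y))  # [1, -1], matching Python's `%` operator.
-```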
-
-- - -
-
-#### `tf.Variable.__mul__(a, *args)` {#Variable.__mul__}
-
-Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
-
-
-- - -
-
-#### `tf.Variable.__neg__(a, *args)` {#Variable.__neg__}
-
-Computes numerical negative value element-wise.
-
-I.e., \\(y = -x\\).
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__or__(a, *args)` {#Variable.__or__}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__pow__(a, *args)` {#Variable.__pow__}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Variable.__radd__(a, *args)` {#Variable.__radd__}
-
-Returns x + y element-wise.
-
-*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__rand__(a, *args)` {#Variable.__rand__}
-
-Returns the truth value of x AND y element-wise.
-
-*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__rdiv__(a, *args)` {#Variable.__rdiv__}
-
-Divides two values using Python 2 semantics. Used for `Tensor.__div__`.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` returns the quotient of x and y.
-
-
-- - -
-
-#### `tf.Variable.__rfloordiv__(a, *args)` {#Variable.__rfloordiv__}
-
-Divides `x / y` elementwise, rounding toward the most negative integer.
-
-The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for
-floating point arguments so that the result is always an integer (though
-possibly an integer represented as floating point). This op is generated by
-`x // y` floor division in Python 3 and in Python 2.7 with
-`from __future__ import division`.
-
-Note that for efficiency, `floordiv` uses C semantics for negative numbers
-(unlike Python and Numpy).
-
-`x` and `y` must have the same type, and the result will have the same type
-as well.
-
-##### Args:
-
-
-* <b>`x`</b>: `Tensor` numerator of real numeric type.
-* <b>`y`</b>: `Tensor` denominator of real numeric type.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- `x / y` rounded down (except possibly towards zero for negative integers).
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the inputs are complex.
-
-
-- - -
-
-#### `tf.Variable.__rmod__(a, *args)` {#Variable.__rmod__}
-
-Returns the element-wise remainder of division. When `x < 0` xor `y < 0` is
-true, this follows Python semantics in that the result here is consistent
-with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.
-
-*NOTE*: `FloorMod` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `float32`, `float64`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__rmul__(a, *args)` {#Variable.__rmul__}
-
-Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
-
-
-- - -
-
-#### `tf.Variable.__ror__(a, *args)` {#Variable.__ror__}
-
-Returns the truth value of x OR y element-wise.
-
-*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `bool`.
-* <b>`y`</b>: A `Tensor` of type `bool`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `bool`.
-
-
-- - -
-
-#### `tf.Variable.__rpow__(a, *args)` {#Variable.__rpow__}
-
-Computes the power of one value to another.
-
-Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for
-corresponding elements in `x` and `y`. For example:
-
-```
-# tensor 'x' is [[2, 2], [3, 3]]
-# tensor 'y' is [[8, 16], [2, 3]]
-tf.pow(x, y) ==> [[256, 65536], [9, 27]]
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`y`</b>: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`,
- or `complex128`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Variable.__rsub__(a, *args)` {#Variable.__rsub__}
-
-Returns x - y element-wise.
-
-*NOTE*: `Sub` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__rtruediv__(a, *args)` {#Variable.__rtruediv__}
-
-
-Divides `x / y` elementwise, using Python 3 division semantics (true division).
-
-- - -
-
-#### `tf.Variable.__rxor__(a, *args)` {#Variable.__rxor__}
-
-x ^ y = (x | y) & ~(x & y).
-
-
-- - -
-
-#### `tf.Variable.__str__()` {#Variable.__str__}
-
-
-
-
-- - -
-
-#### `tf.Variable.__sub__(a, *args)` {#Variable.__sub__}
-
-Returns x - y element-wise.
-
-*NOTE*: `Sub` supports broadcasting. More about broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
-##### Args:
-
-
-* <b>`x`</b>: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
-* <b>`y`</b>: A `Tensor`. Must have the same type as `x`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `x`.
-
-
-- - -
-
-#### `tf.Variable.__truediv__(a, *args)` {#Variable.__truediv__}
-
-Divides `x / y` elementwise, using Python 3 division semantics (true division).
-
-
-- - -
-
-#### `tf.Variable.__xor__(a, *args)` {#Variable.__xor__}
-
-x ^ y = (x | y) & ~(x & y).
-
-
-- - -
-
-#### `tf.Variable.from_proto(variable_def, import_scope=None)` {#Variable.from_proto}
-
-Returns a `Variable` object created from `variable_def`.
-
-
-- - -
-
-#### `tf.Variable.initial_value` {#Variable.initial_value}
-
-Returns the Tensor used as the initial value for the variable.
-
-Note that this is different from `initialized_value()` which runs
-the op that initializes the variable before returning its value.
-This method returns the tensor that is used by the op that initializes
-the variable.
-
-##### Returns:
-
- A `Tensor`.
-
-
-- - -
-
-#### `tf.Variable.load(value, session=None)` {#Variable.load}
-
-Loads a new value into this variable.
-
-Writes the new value to the variable's memory. Doesn't add ops to the graph.
-
-This convenience method requires a session where the graph containing this
-variable has been launched. If no session is passed, the default session is
-used. See the [Session class](../../api_docs/python/client.md#Session) for
-more information on launching a graph and on sessions.
-
-```python
-v = tf.Variable([1, 2])
-init = tf.global_variables_initializer()
-
-with tf.Session() as sess:
- sess.run(init)
- # Usage passing the session explicitly.
- v.load([2, 3], sess)
- print(v.eval(sess)) # prints [2 3]
- # Usage with the default session. The 'with' block
- # above makes 'sess' the default session.
- v.load([3, 4], sess)
- print(v.eval()) # prints [3 4]
-```
-
-##### Args:
-
-
-* <b>`value`</b>: New variable value
-* <b>`session`</b>: The session to use to evaluate this variable. If
- none, the default session is used.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If no session is passed and no default session is available.
-
-
-- - -
-
-#### `tf.Variable.read_value()` {#Variable.read_value}
-
-Returns the value of this variable, read in the current context.
-
-Can be different from value() if it's on another device, with control
-dependencies, etc.
-
-##### Returns:
-
- A `Tensor` containing the value of the variable.
-
-
-- - -
-
-#### `tf.Variable.set_shape(shape)` {#Variable.set_shape}
-
-Overrides the shape for this variable.
-
-##### Args:
-
-
-* <b>`shape`</b>: the `TensorShape` representing the overridden shape.
-
-
-- - -
-
-#### `tf.Variable.to_proto(export_scope=None)` {#Variable.to_proto}
-
-Converts a `Variable` to a `VariableDef` protocol buffer.
-
-##### Args:
-
-
-* <b>`export_scope`</b>: Optional `string`. Name scope to remove.
-
-##### Returns:
-
- A `VariableDef` protocol buffer, or `None` if the `Variable` is not
- in the specified name scope.
-
-
-- - -
-
-#### `tf.Variable.value()` {#Variable.value}
-
-Returns the last snapshot of this variable.
-
-You usually do not need to call this method as all ops that need the value
-of the variable call it automatically through a `convert_to_tensor()` call.
-
-Returns a `Tensor` which holds the value of the variable. You can not
-assign a new value to this tensor as it is not a reference to the variable.
-
-To avoid copies, if the consumer of the returned value is on the same device
-as the variable, this actually returns the live value of the variable, not
-a copy. Updates to the variable are seen by the consumer. If the consumer
-is on a different device it will get a copy of the variable.
-
-##### Returns:
-
- A `Tensor` containing the value of the variable.
-
-
-
-- - -
-
-### `tf.global_variables()` {#global_variables}
-
-Returns global variables.
-
-Global variables are variables that are shared across machines in a
-distributed environment. The `Variable()` constructor or `get_variable()`
-automatically adds new variables to the graph collection
-`GraphKeys.GLOBAL_VARIABLES`.
-This convenience function returns the contents of that collection.
-
-An alternative to global variables are local variables. See
-[`tf.local_variables()`](../../api_docs/python/state_ops.md#local_variables)
-
-##### Returns:
-
- A list of `Variable` objects.
-
-
-- - -
-
-### `tf.local_variables()` {#local_variables}
-
-Returns local variables.
-
-Local variables are per-process variables, usually not saved/restored to
-checkpoint, and used for temporary or intermediate values.
-For example, they can be used as counters for metrics computation or the
-number of epochs this machine has read data.
-The `tf.contrib.framework.local_variable()` function automatically adds the
-new variable to `GraphKeys.LOCAL_VARIABLES`.
-This convenience function returns the contents of that collection.
-
-An alternative to local variables are global variables. See
-[`tf.global_variables()`](../../api_docs/python/state_ops.md#global_variables)
-
-##### Returns:
-
- A list of local `Variable` objects.
-
-
-- - -
-
-### `tf.model_variables()` {#model_variables}
-
-Returns all variables in the MODEL_VARIABLES collection.
-
-##### Returns:
-
- A list of local Variable objects.
-
-
-- - -
-
-### `tf.trainable_variables()` {#trainable_variables}
-
-Returns all variables created with `trainable=True`.
-
-When passed `trainable=True`, the `Variable()` constructor automatically
-adds new variables to the graph collection
-`GraphKeys.TRAINABLE_VARIABLES`. This convenience function returns the
-contents of that collection.
-
-##### Returns:
-
- A list of Variable objects.
-
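-For example, a small sketch (the variable names are arbitrary):
-
-```python
-import tensorflow as tf
-
-w = tf.Variable(tf.zeros([2, 2]), name='w')                 # Trainable by default.
-step = tf.Variable(0, trainable=False, name='global_step')  # Excluded.
-
-print([v.op.name for v in tf.trainable_variables()])  # ['w']
-print([v.op.name for v in tf.global_variables()])     # ['w', 'global_step']
-```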
-
-- - -
-
-### `tf.moving_average_variables()` {#moving_average_variables}
-
-Returns all variables that maintain their moving averages.
-
-If an `ExponentialMovingAverage` object is created and the `apply()`
-method is called on a list of variables, these variables will
-be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection.
-This convenience function returns the contents of that collection.
-
-##### Returns:
-
- A list of Variable objects.
-
-
-- - -
-
-### `tf.global_variables_initializer()` {#global_variables_initializer}
-
-Returns an Op that initializes global variables.
-
-This is just a shortcut for `variables_initializer(global_variables())`.
-
-##### Returns:
-
- An Op that initializes global variables in the graph.
-
-
-- - -
-
-### `tf.local_variables_initializer()` {#local_variables_initializer}
-
-Returns an Op that initializes all local variables.
-
-This is just a shortcut for `variables_initializer(local_variables())`.
-
-##### Returns:
-
- An Op that initializes all local variables in the graph.
-
-
-- - -
-
-### `tf.variables_initializer(var_list, name='init')` {#variables_initializer}
-
-Returns an Op that initializes a list of variables.
-
-After you launch the graph in a session, you can run the returned Op to
-initialize all the variables in `var_list`. This Op runs all the
-initializers of the variables in `var_list` in parallel.
-
-Calling `variables_initializer()` is equivalent to passing the list of
-initializers to `Group()`.
-
-If `var_list` is empty, however, the function still returns an Op that can
-be run. That Op just has no effect.
-
-##### Args:
-
-
-* <b>`var_list`</b>: List of `Variable` objects to initialize.
-* <b>`name`</b>: Optional name for the returned operation.
-
-##### Returns:
-
- An Op that runs the initializers of all the specified variables.
-
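-For example, a small sketch:
-
-```python
-import tensorflow as tf
-
-a = tf.Variable(1.0)
-b = tf.Variable(2.0)
-init_ab = tf.variables_initializer([a, b], name='init_ab')
-
-with tf.Session() as sess:
-  sess.run(init_ab)
-  print(sess.run([a, b]))  # [1.0, 2.0]
-```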
-
-- - -
-
-### `tf.is_variable_initialized(variable)` {#is_variable_initialized}
-
-Tests if a variable has been initialized.
-
-##### Args:
-
-
-* <b>`variable`</b>: A `Variable`.
-
-##### Returns:
-
- Returns a scalar boolean Tensor, `True` if the variable has been
- initialized, `False` otherwise.
-
-
-- - -
-
-### `tf.report_uninitialized_variables(var_list=None, name='report_uninitialized_variables')` {#report_uninitialized_variables}
-
-Adds ops to list the names of uninitialized variables.
-
-When run, it returns a 1-D tensor containing the names of uninitialized
-variables if there are any, or an empty array if there are none.
-
-##### Args:
-
-
-* <b>`var_list`</b>: List of `Variable` objects to check. Defaults to the
- value of `global_variables() + local_variables()`
-* <b>`name`</b>: Optional name of the `Operation`.
-
-##### Returns:
-
- A 1-D tensor containing names of the uninitialized variables, or an empty
- 1-D tensor if there are no variables or no uninitialized variables.
-
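-For example, a small sketch:
-
-```python
-import tensorflow as tf
-
-v = tf.Variable(tf.zeros([3]), name='v')
-w = tf.Variable(tf.ones([3]), name='w')
-
-with tf.Session() as sess:
-  print(sess.run(tf.report_uninitialized_variables()))  # [b'v' b'w']
-  sess.run(v.initializer)
-  print(sess.run(tf.report_uninitialized_variables()))  # [b'w']
-```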
-
-- - -
-
-### `tf.assert_variables_initialized(var_list=None)` {#assert_variables_initialized}
-
-Returns an Op to check if variables are initialized.
-
-NOTE: This function is obsolete and will be removed in 6 months. Please
-change your implementation to use `report_uninitialized_variables()`.
-
-When run, the returned Op will raise the exception `FailedPreconditionError`
-if any of the variables has not yet been initialized.
-
-Note: This function is implemented by trying to fetch the values of the
-variables. If one of the variables is not initialized a message may be
-logged by the C++ runtime. This is expected.
-
-##### Args:
-
-
-* <b>`var_list`</b>: List of `Variable` objects to check. Defaults to the
- value of `global_variables().`
-
-##### Returns:
-
- An Op, or None if there are no variables.
-
-
-- - -
-
-### `tf.assign(ref, value, validate_shape=None, use_locking=None, name=None)` {#assign}
-
-Update 'ref' by assigning 'value' to it.
-
-This operation outputs "ref" after the assignment is done.
-This makes it easier to chain operations that need to use the reset value.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`.
- Should be from a `Variable` node. May be uninitialized.
-* <b>`value`</b>: A `Tensor`. Must have the same type as `ref`.
- The value to be assigned to the variable.
-* <b>`validate_shape`</b>: An optional `bool`. Defaults to `True`.
- If true, the operation will validate that the shape
- of 'value' matches the shape of the Tensor being assigned to. If false,
- 'ref' will take on the shape of 'value'.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `True`.
- If True, the assignment will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as "ref". Returned as a convenience for operations that want
- to use the new value after the variable has been reset.
-
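-For example, a small sketch of chaining on the assigned value:
-
-```python
-import tensorflow as tf
-
-ref = tf.Variable(0)
-new_val = tf.assign(ref, 7)  # Outputs 'ref' after the assignment is done.
-doubled = new_val * 2        # Uses the freshly assigned value.
-
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(doubled))  # 14
-```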
-
-- - -
-
-### `tf.assign_add(ref, value, use_locking=None, name=None)` {#assign_add}
-
-Update 'ref' by adding 'value' to it.
-
-This operation outputs "ref" after the update is done.
-This makes it easier to chain operations that need to use the reset value.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types:
- `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`,
- `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`value`</b>: A `Tensor`. Must have the same type as `ref`.
- The value to be added to the variable.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the addition will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as "ref". Returned as a convenience for operations that want
- to use the new value after the variable has been updated.
-
-
-- - -
-
-### `tf.assign_sub(ref, value, use_locking=None, name=None)` {#assign_sub}
-
-Update 'ref' by subtracting 'value' from it.
-
-This operation outputs "ref" after the update is done.
-This makes it easier to chain operations that need to use the reset value.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types:
- `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`,
- `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`value`</b>: A `Tensor`. Must have the same type as `ref`.
- The value to be subtracted from the variable.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the subtraction will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as "ref". Returned as a convenience for operations that want
- to use the new value after the variable has been updated.
-
-
-- - -
-
-### `class tf.train.Saver` {#Saver}
-
-Saves and restores variables.
-
-See [Variables](../../how_tos/variables/index.md)
-for an overview of variables, saving and restoring.
-
-The `Saver` class adds ops to save and restore variables to and from
-*checkpoints*. It also provides convenience methods to run these ops.
-
-Checkpoints are binary files in a proprietary format which map variable names
-to tensor values. The best way to examine the contents of a checkpoint is to
-load it using a `Saver`.
-
-Savers can automatically number checkpoint filenames with a provided counter.
-This lets you keep multiple checkpoints at different steps while training a
-model. For example, you can number the checkpoint filenames with the training
-step number. To avoid filling up disks, savers manage checkpoint files
-automatically. For example, they can keep only the N most recent files, or
-one checkpoint for every N hours of training.
-
-You number checkpoint filenames by passing a value to the optional
-`global_step` argument to `save()`:
-
-```python
-saver.save(sess, 'my-model', global_step=0) ==> filename: 'my-model-0'
-...
-saver.save(sess, 'my-model', global_step=1000) ==> filename: 'my-model-1000'
-```
-
-Additionally, optional arguments to the `Saver()` constructor let you control
-the proliferation of checkpoint files on disk:
-
-* `max_to_keep` indicates the maximum number of recent checkpoint files to
- keep. As new files are created, older files are deleted. If None or 0,
- all checkpoint files are kept. Defaults to 5 (that is, the 5 most recent
- checkpoint files are kept).
-
-* `keep_checkpoint_every_n_hours`: In addition to keeping the most recent
- `max_to_keep` checkpoint files, you might want to keep one checkpoint file
- for every N hours of training. This can be useful if you want to later
- analyze how a model progressed during a long training session. For
- example, passing `keep_checkpoint_every_n_hours=2` ensures that you keep
- one checkpoint file for every 2 hours of training. The default value of
- 10,000 hours effectively disables the feature.
-
-Note that you still have to call the `save()` method to save the model.
-Passing these arguments to the constructor will not save variables
-automatically for you.
-
-A training program that saves regularly looks like:
-
-```python
-...
-# Create a saver.
-saver = tf.train.Saver(...variables...)
-# Launch the graph and train, saving the model every 1,000 steps.
-sess = tf.Session()
-for step in range(1000000):
- sess.run(..training_op..)
- if step % 1000 == 0:
- # Append the step number to the checkpoint name:
- saver.save(sess, 'my-model', global_step=step)
-```
-
-In addition to checkpoint files, savers keep a protocol buffer on disk with
-the list of recent checkpoints. This is used to manage numbered checkpoint
-files and by `latest_checkpoint()`, which makes it easy to discover the path
-to the most recent checkpoint. That protocol buffer is stored in a file named
-'checkpoint' next to the checkpoint files.
-
-If you create several savers, you can specify a different filename for the
-protocol buffer file in the call to `save()`.
-
-- - -
-
-#### `tf.train.Saver.__init__(var_list=None, reshape=False, sharded=False, max_to_keep=5, keep_checkpoint_every_n_hours=10000.0, name=None, restore_sequentially=False, saver_def=None, builder=None, defer_build=False, allow_empty=False, write_version=2, pad_step_number=False)` {#Saver.__init__}
-
-Creates a `Saver`.
-
-The constructor adds ops to save and restore variables.
-
-`var_list` specifies the variables that will be saved and restored. It can
-be passed as a `dict` or a list:
-
-* A `dict` of names to variables: The keys are the names that will be
- used to save or restore the variables in the checkpoint files.
-* A list of variables: The variables will be keyed with their op name in
- the checkpoint files.
-
-For example:
-
-```python
-v1 = tf.Variable(..., name='v1')
-v2 = tf.Variable(..., name='v2')
-
-# Pass the variables as a dict:
-saver = tf.train.Saver({'v1': v1, 'v2': v2})
-
-# Or pass them as a list.
-saver = tf.train.Saver([v1, v2])
-# Passing a list is equivalent to passing a dict with the variable op names
-# as keys:
-saver = tf.train.Saver({v.op.name: v for v in [v1, v2]})
-```
-
-The optional `reshape` argument, if `True`, allows restoring a variable from
-a save file where the variable had a different shape, but the same number
-of elements and type. This is useful if you have reshaped a variable and
-want to reload it from an older checkpoint.
-
-The optional `sharded` argument, if `True`, instructs the saver to shard
-checkpoints per device.
-
-##### Args:
-
-
-* <b>`var_list`</b>: A list of `Variable`/`SaveableObject`, or a dictionary mapping
- names to `SaveableObject`s. If `None`, defaults to the list of all
- saveable objects.
-* <b>`reshape`</b>: If `True`, allows restoring parameters from a checkpoint
- where the variables have a different shape.
-* <b>`sharded`</b>: If `True`, shard the checkpoints, one per device.
-* <b>`max_to_keep`</b>: Maximum number of recent checkpoints to keep.
- Defaults to 5.
-* <b>`keep_checkpoint_every_n_hours`</b>: How often to keep checkpoints.
- Defaults to 10,000 hours.
-* <b>`name`</b>: String. Optional name to use as a prefix when adding operations.
-* <b>`restore_sequentially`</b>: A `Bool`, which if true, causes restore of different
- variables to happen sequentially within each device. This can lower
- memory usage when restoring very large models.
-* <b>`saver_def`</b>: Optional `SaverDef` proto to use instead of running the
- builder. This is only useful for specialty code that wants to recreate
- a `Saver` object for a previously built `Graph` that had a `Saver`.
- The `saver_def` proto should be the one returned by the
- `as_saver_def()` call of the `Saver` that was created for that `Graph`.
-* <b>`builder`</b>: Optional `SaverBuilder` to use if a `saver_def` was not provided.
- Defaults to `BaseSaverBuilder()`.
-* <b>`defer_build`</b>: If `True`, defer adding the save and restore ops to the
- `build()` call. In that case `build()` should be called before
- finalizing the graph or using the saver.
-* <b>`allow_empty`</b>: If `False` (default) raise an error if there are no
- variables in the graph. Otherwise, construct the saver anyway and make
- it a no-op.
-* <b>`write_version`</b>: controls what format to use when saving checkpoints. It
- also affects certain filepath matching logic. The V2 format is the
- recommended choice: it is much more optimized than V1 in terms of
- memory required and latency incurred during restore. Regardless of
- this flag, the Saver is able to restore from both V2 and V1 checkpoints.
-* <b>`pad_step_number`</b>: if True, pads the global step number in the checkpoint
- filepaths to some fixed width (8 by default). This is turned off by
- default.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` is invalid.
-* <b>`ValueError`</b>: If any of the keys or values in `var_list` are not unique.
-
-
-- - -
-
-#### `tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None, meta_graph_suffix='meta', write_meta_graph=True, write_state=True)` {#Saver.save}
-
-Saves variables.
-
-This method runs the ops added by the constructor for saving variables.
-It requires a session in which the graph was launched. The variables to
-save must also have been initialized.
-
-The method returns the path of the newly created checkpoint file. This
-path can be passed directly to a call to `restore()`.
-
-##### Args:
-
-
-* <b>`sess`</b>: A Session to use to save the variables.
-* <b>`save_path`</b>: String. Path to the checkpoint filename. If the saver is
- `sharded`, this is the prefix of the sharded checkpoint filename.
-* <b>`global_step`</b>: If provided the global step number is appended to
- `save_path` to create the checkpoint filename. The optional argument
- can be a `Tensor`, a `Tensor` name or an integer.
-* <b>`latest_filename`</b>: Optional name for the protocol buffer file that will
- contain the list of most recent checkpoint filenames. That file,
- kept in the same directory as the checkpoint files, is automatically
- managed by the saver to keep track of recent checkpoints. Defaults to
- 'checkpoint'.
-* <b>`meta_graph_suffix`</b>: Suffix for `MetaGraphDef` file. Defaults to 'meta'.
-* <b>`write_meta_graph`</b>: `Boolean` indicating whether or not to write the meta
- graph file.
-* <b>`write_state`</b>: `Boolean` indicating whether or not to write the
- `CheckpointStateProto`.
-
-##### Returns:
-
- A string: path at which the variables were saved. If the saver is
- sharded, this string ends with: '-?????-of-nnnnn' where 'nnnnn'
- is the number of shards created.
- If the saver is empty, returns None.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `sess` is not a `Session`.
-* <b>`ValueError`</b>: If `latest_filename` contains path components, or if it
- collides with `save_path`.
-* <b>`RuntimeError`</b>: If save and restore ops weren't built.
-
-
-- - -
-
-#### `tf.train.Saver.restore(sess, save_path)` {#Saver.restore}
-
-Restores previously saved variables.
-
-This method runs the ops added by the constructor for restoring variables.
-It requires a session in which the graph was launched. The variables to
-restore do not have to have been initialized, as restoring is itself a way
-to initialize variables.
-
-The `save_path` argument is typically a value previously returned from a
-`save()` call, or a call to `latest_checkpoint()`.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session` to use to restore the parameters.
-* <b>`save_path`</b>: Path where parameters were previously saved.
-
-
-
-Other utility methods.
-
-- - -
-
-#### `tf.train.Saver.last_checkpoints` {#Saver.last_checkpoints}
-
-List of not-yet-deleted checkpoint filenames.
-
-You can pass any of the returned values to `restore()`.
-
-##### Returns:
-
- A list of checkpoint filenames, sorted from oldest to newest.
-
-
-- - -
-
-#### `tf.train.Saver.set_last_checkpoints_with_time(last_checkpoints_with_time)` {#Saver.set_last_checkpoints_with_time}
-
-Sets the list of old checkpoint filenames and timestamps.
-
-##### Args:
-
-
-* <b>`last_checkpoints_with_time`</b>: A list of tuples of checkpoint filenames and
- timestamps.
-
-##### Raises:
-
-
-* <b>`AssertionError`</b>: If last_checkpoints_with_time is not a list.
-
-
-- - -
-
-#### `tf.train.Saver.recover_last_checkpoints(checkpoint_paths)` {#Saver.recover_last_checkpoints}
-
-Recovers the internal saver state after a crash.
-
-This method is useful for recovering the "self._last_checkpoints" state.
-
-Globs for the checkpoints pointed to by `checkpoint_paths`. If the files
-exist, uses their mtime as the checkpoint timestamp.
-
-##### Args:
-
-
-* <b>`checkpoint_paths`</b>: a list of checkpoint paths.
-
-
-- - -
-
-#### `tf.train.Saver.as_saver_def()` {#Saver.as_saver_def}
-
-Generates a `SaverDef` representation of this saver.
-
-##### Returns:
-
- A `SaverDef` proto.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.Saver.build()` {#Saver.build}
-
-Builds saver_def.
-
-
-- - -
-
-#### `tf.train.Saver.export_meta_graph(filename=None, collection_list=None, as_text=False, export_scope=None, clear_devices=False)` {#Saver.export_meta_graph}
-
-Writes `MetaGraphDef` to save_path/filename.
-
-##### Args:
-
-
-* <b>`filename`</b>: Optional meta_graph filename including the path.
-* <b>`collection_list`</b>: List of string keys to collect.
-* <b>`as_text`</b>: If `True`, writes the meta_graph as an ASCII proto.
-* <b>`export_scope`</b>: Optional `string`. Name scope to remove.
-* <b>`clear_devices`</b>: Whether or not to clear the device field for an `Operation`
- or `Tensor` during export.
-
-##### Returns:
-
- A `MetaGraphDef` proto.
-
-
-- - -
-
-#### `tf.train.Saver.from_proto(saver_def, import_scope=None)` {#Saver.from_proto}
-
-Returns a `Saver` object created from `saver_def`.
-
-##### Args:
-
-
-* <b>`saver_def`</b>: a `SaverDef` protocol buffer.
-* <b>`import_scope`</b>: Optional `string`. Name scope to use.
-
-##### Returns:
-
- A `Saver` built from saver_def.
-
-
-- - -
-
-#### `tf.train.Saver.set_last_checkpoints(last_checkpoints)` {#Saver.set_last_checkpoints}
-
-DEPRECATED: Use set_last_checkpoints_with_time.
-
-Sets the list of old checkpoint filenames.
-
-##### Args:
-
-
-* <b>`last_checkpoints`</b>: A list of checkpoint filenames.
-
-##### Raises:
-
-
-* <b>`AssertionError`</b>: If last_checkpoints is not a list.
-
-
-- - -
-
-#### `tf.train.Saver.to_proto(export_scope=None)` {#Saver.to_proto}
-
-Converts this `Saver` to a `SaverDef` protocol buffer.
-
-##### Args:
-
-
-* <b>`export_scope`</b>: Optional `string`. Name scope to remove.
-
-##### Returns:
-
- A `SaverDef` protocol buffer.
-
-
-
-- - -
-
-### `tf.train.latest_checkpoint(checkpoint_dir, latest_filename=None)` {#latest_checkpoint}
-
-Finds the filename of latest saved checkpoint file.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: Directory where the variables were saved.
-* <b>`latest_filename`</b>: Optional name for the protocol buffer file that
- contains the list of most recent checkpoint filenames.
- See the corresponding argument to `Saver.save()`.
-
-##### Returns:
-
- The full path to the latest checkpoint or `None` if no checkpoint was found.
-
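-For instance, to resume from whatever checkpoint was written last (assuming
-`/tmp/train_dir` was populated by `Saver.save`):
-
-```python
-ckpt = tf.train.latest_checkpoint("/tmp/train_dir")
-if ckpt is not None:
-  saver.restore(sess, ckpt)
-```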
-
-- - -
-
-### `tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None)` {#get_checkpoint_state}
-
-Returns CheckpointState proto from the "checkpoint" file.
-
-If the "checkpoint" file contains a valid CheckpointState
-proto, returns it.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: The directory of checkpoints.
-* <b>`latest_filename`</b>: Optional name of the checkpoint file. Defaults to
-    'checkpoint'.
-
-##### Returns:
-
- A CheckpointState if the state was available, None
- otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the checkpoint read doesn't have model_checkpoint_path set.
-
-
-- - -
-
-### `tf.train.update_checkpoint_state(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None, latest_filename=None)` {#update_checkpoint_state}
-
-Updates the content of the 'checkpoint' file.
-
-This updates the checkpoint file containing a CheckpointState
-proto.
-
-##### Args:
-
-
-* <b>`save_dir`</b>: Directory where the model was saved.
-* <b>`model_checkpoint_path`</b>: The checkpoint file.
-* <b>`all_model_checkpoint_paths`</b>: List of strings. Paths to all not-yet-deleted
- checkpoints, sorted from oldest to newest. If this is a non-empty list,
- the last element must be equal to model_checkpoint_path. These paths
- are also saved in the CheckpointState proto.
-* <b>`latest_filename`</b>: Optional name of the checkpoint file. Defaults to
-    'checkpoint'.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If the save paths conflict.
-
-
-- - -
-
-### `tf.get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None)` {#get_variable}
-
-Gets an existing variable with these parameters or creates a new one.
-
-This function prefixes the name with the current variable scope
-and performs reuse checks. See the
-[Variable Scope How To](../../how_tos/variable_scope/index.md)
-for an extensive description of how reusing works. Here is a basic example:
-
-```python
-with tf.variable_scope("foo"):
- v = tf.get_variable("v", [1]) # v.name == "foo/v:0"
- w = tf.get_variable("w", [1]) # w.name == "foo/w:0"
-with tf.variable_scope("foo", reuse=True):
- v1 = tf.get_variable("v") # The same as v above.
-```
-
-If initializer is `None` (the default), the default initializer passed in
-the variable scope will be used. If that one is `None` too, a
-`glorot_uniform_initializer` will be used. The initializer can also be
-a Tensor, in which case the variable is initialized to this value and shape.
-
-Similarly, if the regularizer is `None` (the default), the default regularizer
-passed in the variable scope will be used (if that is `None` too,
-then by default no regularization is performed).
-
-If a partitioner is provided, a `PartitionedVariable` is returned.
-Accessing this object as a `Tensor` returns the shards concatenated along
-the partition axis.
-
-Some useful partitioners are available. See, e.g.,
-`variable_axis_size_partitioner` and `min_max_variable_partitioner`.
-
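-As a sketch of partitioned creation (the shard count and shape below are
-illustrative, not prescriptive):
-
-```python
-part = tf.fixed_size_partitioner(num_shards=4, axis=0)
-with tf.variable_scope("embed", partitioner=part):
-  table = tf.get_variable("table", shape=[1000, 64])  # a PartitionedVariable
-# Using `table` as a Tensor concatenates the 4 shards along axis 0.
-```
-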
-##### Args:
-
-
-* <b>`name`</b>: The name of the new or existing variable.
-* <b>`shape`</b>: Shape of the new or existing variable.
-* <b>`dtype`</b>: Type of the new or existing variable (defaults to `DT_FLOAT`).
-* <b>`initializer`</b>: Initializer for the variable if one is created.
-* <b>`regularizer`</b>: A (Tensor -> Tensor or None) function; the result of
-    applying it on a newly created variable will be added to the collection
-    `tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization.
-* <b>`trainable`</b>: If `True` also add the variable to the graph collection
- `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
-* <b>`collections`</b>: List of graph collections keys to add the Variable to.
- Defaults to `[GraphKeys.GLOBAL_VARIABLES]` (see `tf.Variable`).
-* <b>`caching_device`</b>: Optional device string or function describing where the
- Variable should be cached for reading. Defaults to the Variable's
- device. If not `None`, caches on another device. Typical use is to
- cache on the device where the Ops using the Variable reside, to
- deduplicate copying through `Switch` and other conditional statements.
-* <b>`partitioner`</b>: Optional callable that accepts a fully defined `TensorShape`
- and `dtype` of the Variable to be created, and returns a list of
- partitions for each axis (currently only one axis can be partitioned).
-* <b>`validate_shape`</b>: If False, allows the variable to be initialized with a
- value of unknown shape. If True, the default, the shape of initial_value
- must be known.
-* <b>`use_resource`</b>: If False, creates a regular Variable. If true, creates an
- experimental ResourceVariable instead with well-defined semantics.
- Defaults to False (will later change to True).
-* <b>`custom_getter`</b>: Callable that takes as a first argument the true getter, and
- allows overwriting the internal get_variable method.
- The signature of `custom_getter` should match that of this method,
- but the most future-proof version will allow for changes:
- `def custom_getter(getter, *args, **kwargs)`. Direct access to
- all `get_variable` parameters is also allowed:
- `def custom_getter(getter, name, *args, **kwargs)`. A simple identity
- custom getter that simply creates variables with modified names is:
- ```python
- def custom_getter(getter, name, *args, **kwargs):
- return getter(name + '_suffix', *args, **kwargs)
- ```
-
-##### Returns:
-
- The created or existing `Variable` (or `PartitionedVariable`, if a
- partitioner was used).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: when creating a new variable and shape is not declared,
- when violating reuse during variable creation, or when `initializer` dtype
- and `dtype` don't match. Reuse is set inside `variable_scope`.
-
-
-- - -
-
-### `tf.get_local_variable(*args, **kwargs)` {#get_local_variable}
-
-Gets an existing *local* variable or creates a new one.
-
-Behavior is the same as in `get_variable`, except that variables are
-added to the `LOCAL_VARIABLES` collection and `trainable` is set to
-`False`.
-This function prefixes the name with the current variable scope
-and performs reuse checks. See the
-[Variable Scope How To](../../how_tos/variable_scope/index.md)
-for an extensive description of how reusing works. Here is a basic example:
-
-```python
-with tf.variable_scope("foo"):
- v = tf.get_variable("v", [1]) # v.name == "foo/v:0"
- w = tf.get_variable("w", [1]) # w.name == "foo/w:0"
-with tf.variable_scope("foo", reuse=True):
- v1 = tf.get_variable("v") # The same as v above.
-```
-
-If initializer is `None` (the default), the default initializer passed in
-the variable scope will be used. If that one is `None` too, a
-`glorot_uniform_initializer` will be used. The initializer can also be
-a Tensor, in which case the variable is initialized to this value and shape.
-
-Similarly, if the regularizer is `None` (the default), the default regularizer
-passed in the variable scope will be used (if that is `None` too,
-then by default no regularization is performed).
-
-If a partitioner is provided, a `PartitionedVariable` is returned.
-Accessing this object as a `Tensor` returns the shards concatenated along
-the partition axis.
-
-Some useful partitioners are available. See, e.g.,
-`variable_axis_size_partitioner` and `min_max_variable_partitioner`.
-
-##### Args:
-
-
-* <b>`name`</b>: The name of the new or existing variable.
-* <b>`shape`</b>: Shape of the new or existing variable.
-* <b>`dtype`</b>: Type of the new or existing variable (defaults to `DT_FLOAT`).
-* <b>`initializer`</b>: Initializer for the variable if one is created.
-* <b>`regularizer`</b>: A (Tensor -> Tensor or None) function; the result of
-    applying it on a newly created variable will be added to the collection
-    `tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization.
-* <b>`collections`</b>: List of graph collections keys to add the Variable to.
- Defaults to `[GraphKeys.LOCAL_VARIABLES]` (see `tf.Variable`).
-* <b>`caching_device`</b>: Optional device string or function describing where the
- Variable should be cached for reading. Defaults to the Variable's
- device. If not `None`, caches on another device. Typical use is to
- cache on the device where the Ops using the Variable reside, to
- deduplicate copying through `Switch` and other conditional statements.
-* <b>`partitioner`</b>: Optional callable that accepts a fully defined `TensorShape`
- and `dtype` of the Variable to be created, and returns a list of
- partitions for each axis (currently only one axis can be partitioned).
-* <b>`validate_shape`</b>: If False, allows the variable to be initialized with a
- value of unknown shape. If True, the default, the shape of initial_value
- must be known.
-* <b>`use_resource`</b>: If False, creates a regular Variable. If true, creates an
- experimental ResourceVariable instead with well-defined semantics.
- Defaults to False (will later change to True).
-* <b>`custom_getter`</b>: Callable that takes as a first argument the true getter, and
- allows overwriting the internal get_variable method.
- The signature of `custom_getter` should match that of this method,
- but the most future-proof version will allow for changes:
- `def custom_getter(getter, *args, **kwargs)`. Direct access to
- all `get_variable` parameters is also allowed:
- `def custom_getter(getter, name, *args, **kwargs)`. A simple identity
- custom getter that simply creates variables with modified names is:
- ```python
- def custom_getter(getter, name, *args, **kwargs):
- return getter(name + '_suffix', *args, **kwargs)
- ```
-
-##### Returns:
-
- The created or existing `Variable` (or `PartitionedVariable`, if a
- partitioner was used).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: when creating a new variable and shape is not declared,
- when violating reuse during variable creation, or when `initializer` dtype
- and `dtype` don't match. Reuse is set inside `variable_scope`.
-
-
-- - -
-
-### `class tf.VariableScope` {#VariableScope}
-
-Variable scope object to carry defaults to provide to `get_variable`.
-
-Many of the arguments we need for `get_variable` in a variable store are most
-easily handled with a context. This object is used for the defaults.
-
-Attributes:
- name: name of the current scope, used as prefix in get_variable.
- initializer: default initializer passed to get_variable.
- regularizer: default regularizer passed to get_variable.
- reuse: Boolean or None, setting the reuse in get_variable.
- caching_device: string, callable, or None: the caching device passed to
- get_variable.
- partitioner: callable or `None`: the partitioner passed to `get_variable`.
- custom_getter: default custom getter passed to get_variable.
- name_scope: The name passed to `tf.name_scope`.
- dtype: default type passed to get_variable (defaults to DT_FLOAT).
- use_resource: if False, create a normal Variable; if True create an
- experimental ResourceVariable with well-defined semantics. Defaults
- to False (will later change to True).
-- - -
-
-#### `tf.VariableScope.__init__(reuse, name='', initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, name_scope='', dtype=tf.float32, use_resource=None)` {#VariableScope.__init__}
-
-Creates a new VariableScope with the given properties.
-
-
-- - -
-
-#### `tf.VariableScope.caching_device` {#VariableScope.caching_device}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.custom_getter` {#VariableScope.custom_getter}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.dtype` {#VariableScope.dtype}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.get_variable(var_store, name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None)` {#VariableScope.get_variable}
-
-Gets an existing variable with this name or create a new one.
-
-
-- - -
-
-#### `tf.VariableScope.initializer` {#VariableScope.initializer}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.name` {#VariableScope.name}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.original_name_scope` {#VariableScope.original_name_scope}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.partitioner` {#VariableScope.partitioner}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.regularizer` {#VariableScope.regularizer}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.reuse` {#VariableScope.reuse}
-
-
-
-
-- - -
-
-#### `tf.VariableScope.reuse_variables()` {#VariableScope.reuse_variables}
-
-Reuse variables in this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_caching_device(caching_device)` {#VariableScope.set_caching_device}
-
-Set caching_device for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_custom_getter(custom_getter)` {#VariableScope.set_custom_getter}
-
-Set custom getter for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_dtype(dtype)` {#VariableScope.set_dtype}
-
-Set data type for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_initializer(initializer)` {#VariableScope.set_initializer}
-
-Set initializer for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_partitioner(partitioner)` {#VariableScope.set_partitioner}
-
-Set partitioner for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_regularizer(regularizer)` {#VariableScope.set_regularizer}
-
-Set regularizer for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.set_use_resource(use_resource)` {#VariableScope.set_use_resource}
-
-Sets whether to use ResourceVariables for this scope.
-
-
-- - -
-
-#### `tf.VariableScope.use_resource` {#VariableScope.use_resource}
-
-
-
-
-
-- - -
-
-### `tf.variable_scope(name_or_scope, default_name=None, values=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, reuse=None, dtype=None, use_resource=None)` {#variable_scope}
-
-Returns a context manager for defining ops that create variables (layers).
-
-This context manager validates that the (optional) `values` are from
-the same graph, ensures that graph is the default graph, and pushes a
-name scope and a variable scope.
-
-If `name_or_scope` is not None, it is used as is. If `name_or_scope` is None,
-then `default_name` is used. In that case, if the same name has been
-previously used in the same scope, it will be made unique by appending `_N`.
-
-Variable scope allows you to create new variables and to share already created
-ones, while providing checks against accidental creation or sharing. For
-details, see the [Variable Scope How To](../../how_tos/variable_scope/index.md);
-here we present only a few basic examples.
-
-Simple example of how to create a new variable:
-
-```python
-with tf.variable_scope("foo"):
- with tf.variable_scope("bar"):
- v = tf.get_variable("v", [1])
- assert v.name == "foo/bar/v:0"
-```
-
-Basic example of sharing a variable:
-
-```python
-with tf.variable_scope("foo"):
- v = tf.get_variable("v", [1])
-with tf.variable_scope("foo", reuse=True):
- v1 = tf.get_variable("v", [1])
-assert v1 == v
-```
-
-Sharing a variable by capturing a scope and setting reuse:
-
-```python
-with tf.variable_scope("foo") as scope:
- v = tf.get_variable("v", [1])
- scope.reuse_variables()
- v1 = tf.get_variable("v", [1])
-assert v1 == v
-```
-
-To prevent accidental sharing of variables, we raise an exception when
-getting an existing variable in a non-reusing scope.
-
-```python
-with tf.variable_scope("foo"):
- v = tf.get_variable("v", [1])
- v1 = tf.get_variable("v", [1])
- # Raises ValueError("... v already exists ...").
-```
-
-Similarly, we raise an exception when trying to get a variable that
-does not exist in reuse mode.
-
-```python
-with tf.variable_scope("foo", reuse=True):
-  v = tf.get_variable("v", [1])
-  # Raises ValueError("... v does not exist ...").
-```
-
-Note that the `reuse` flag is inherited: if we open a reusing scope,
-then all its sub-scopes become reusing as well.
-
-##### Args:
-
-
-* <b>`name_or_scope`</b>: `string` or `VariableScope`: the scope to open.
-* <b>`default_name`</b>: The default name to use if the `name_or_scope` argument
-    is `None`; this name will be uniquified. If `name_or_scope` is provided,
-    it won't be used and therefore it is not required and can be `None`.
-* <b>`values`</b>: The list of `Tensor` arguments that are passed to the op function.
-* <b>`initializer`</b>: default initializer for variables within this scope.
-* <b>`regularizer`</b>: default regularizer for variables within this scope.
-* <b>`caching_device`</b>: default caching device for variables within this scope.
-* <b>`partitioner`</b>: default partitioner for variables within this scope.
-* <b>`custom_getter`</b>: default custom getter for variables within this scope.
-* <b>`reuse`</b>: `True` or `None`; if `True`, we go into reuse mode for this scope as
- well as all sub-scopes; if `None`, we just inherit the parent scope reuse.
-* <b>`dtype`</b>: type of variables created in this scope (defaults to the type
- in the passed scope, or inherited from parent scope).
-* <b>`use_resource`</b>: If False, all variables will be regular Variables. If True,
- experimental ResourceVariables with well-defined semantics will be used
- instead. Defaults to False (will later change to True).
-
-##### Returns:
-
-  A scope that can be captured and reused.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: when trying to reuse within a create scope, or create within
- a reuse scope, or if reuse is not `None` or `True`.
-* <b>`TypeError`</b>: when the types of some arguments are not appropriate.
-
-
-- - -
-
-### `tf.variable_op_scope(values, name_or_scope, default_name=None, initializer=None, regularizer=None, caching_device=None, partitioner=None, custom_getter=None, reuse=None, dtype=None, use_resource=None)` {#variable_op_scope}
-
-Deprecated: context manager for defining an op that creates variables.
-
-
-- - -
-
-### `tf.get_variable_scope()` {#get_variable_scope}
-
-Returns the current variable scope.
-
-
-- - -
-
-### `tf.make_template(name_, func_, create_scope_now_=False, unique_name_=None, custom_getter_=None, **kwargs)` {#make_template}
-
-Given an arbitrary function, wrap it so that it does variable sharing.
-
-This wraps `func_` in a Template and partially evaluates it. Templates are
-functions that create variables the first time they are called and reuse them
-thereafter. In order for `func_` to be compatible with a `Template` it must
-have the following properties:
-
-* The function should create all trainable variables and any variables that
-  should be reused by calling `tf.get_variable`. If a trainable variable is
-  created using `tf.Variable`, then a ValueError will be thrown. Variables
-  that are intended to be locals can be created by specifying
-  `tf.Variable(..., trainable=False)`.
-* The function may use variable scopes and other templates internally to
- create and reuse variables, but it shouldn't use `tf.global_variables` to
- capture variables that are defined outside of the scope of the function.
-* Internal scopes and variable names should not depend on any arguments that
- are not supplied to `make_template`. In general you will get a ValueError
- telling you that you are trying to reuse a variable that doesn't exist
- if you make a mistake.
-
-In the following example, both `z` and `w` will be scaled by the same `y`. It
-is important to note that if we didn't assign `scalar_name` and used a
-different name for z and w that a `ValueError` would be thrown because it
-couldn't reuse the variable.
-
-```python
-def my_op(x, scalar_name):
- var1 = tf.get_variable(scalar_name,
- shape=[],
- initializer=tf.constant_initializer(1))
- return x * var1
-
-scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
-
-z = scale_by_y(input1)
-w = scale_by_y(input2)
-```
-
-As a safeguard, the returned function will raise a `ValueError` after the
-first call if trainable variables are created by calling `tf.Variable`.
-
-If all of these are true, then 2 properties are enforced by the template:
-
-1. Calling the same template multiple times will share all non-local
- variables.
-2. Two different templates are guaranteed to be unique, unless you reenter the
-   same variable scope as the initial definition of a template and redefine
-   it. An example of this exception:
-
-```python
-def my_op(x, scalar_name):
- var1 = tf.get_variable(scalar_name,
- shape=[],
- initializer=tf.constant_initializer(1))
- return x * var1
-
-with tf.variable_scope('scope') as vs:
- scale_by_y = tf.make_template('scale_by_y', my_op, scalar_name='y')
- z = scale_by_y(input1)
- w = scale_by_y(input2)
-
-# Creates a template that reuses the variables above.
-with tf.variable_scope(vs, reuse=True):
- scale_by_y2 = tf.make_template('scale_by_y', my_op, scalar_name='y')
- z2 = scale_by_y2(input1)
- w2 = scale_by_y2(input2)
-```
-
-Depending on the value of `create_scope_now_`, the full variable scope may be
-captured either at the time of first call or at the time of construction. If
-this option is set to True, then all Tensors created by repeated calls to the
-template will have an extra trailing `_N+1` in their names, because the first
-time the scope is entered in the Template constructor no Tensors are created.
-
-Note: `name_`, `func_` and `create_scope_now_` have a trailing underscore to
-reduce the likelihood of collisions with kwargs.
-
-##### Args:
-
-
-* <b>`name_`</b>: A name for the scope created by this template. If necessary, the name
- will be made unique by appending `_N` to the name.
-* <b>`func_`</b>: The function to wrap.
-* <b>`create_scope_now_`</b>: Boolean controlling whether the scope should be created
- when the template is constructed or when the template is called. Default
- is False, meaning the scope is created when the template is called.
-* <b>`unique_name_`</b>: When used, it overrides name_ and is not made unique. If a
- template of the same scope/unique_name already exists and reuse is false,
- an error is raised. Defaults to None.
-* <b>`custom_getter_`</b>: Optional custom getter for variables used in `func_`. See
- the [`get_variable`](#get_variable) `custom_getter` documentation for
- more information.
-* <b>`**kwargs`</b>: Keyword arguments to apply to `func_`.
-
-##### Returns:
-
-  A function to encapsulate a set of variables which should be created once
-  and reused. An enclosing scope will be created, either where `make_template`
- is called, or wherever the result is called, depending on the value of
- `create_scope_now_`. Regardless of the value, the first time the template
- is called it will enter the scope with no reuse, and call `func_` to create
- variables, which are guaranteed to be unique. All subsequent calls will
- re-enter the scope and reuse those variables.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the name is None.
-
-
-- - -
-
-### `tf.no_regularizer(_)` {#no_regularizer}
-
-Use this function to prevent regularization of variables.
-
-
-- - -
-
-### `class tf.constant_initializer` {#constant_initializer}
-
-Initializer that generates tensors with constant values.
-
-The resulting tensor is populated with values of type `dtype`, as
-specified by the argument `value`, following the desired `shape` of the
-new tensor (see examples below).
-
-The argument `value` can be a constant value, or a list of values of type
-`dtype`. If `value` is a list, then the length of the list must be less
-than or equal to the number of elements implied by the desired shape of the
-tensor. In the case where the total number of elements in `value` is less
-than the number of elements required by the tensor shape, the last element
-in `value` will be used to fill the remaining entries. If the total number of
-elements in `value` is greater than the number of elements required by the
-tensor shape, the initializer will raise a `ValueError`.
-
-Args:
- value: A Python scalar, list of values, or a N-dimensional numpy array. All
- elements of the initialized variable will be set to the corresponding
- value in the `value` argument.
- dtype: The data type.
- verify_shape: Boolean that enables verification of the shape of `value`. If
- `True`, the initializer will throw an error if the shape of `value` is not
- compatible with the shape of the initialized tensor.
-
-Examples:
- The following example can be rewritten using a numpy.ndarray instead
- of the `value` list, even reshaped, as shown in the two commented lines
- below the `value` list initialization.
-
-```python
- >>> import numpy as np
- >>> import tensorflow as tf
-
- >>> value = [0, 1, 2, 3, 4, 5, 6, 7]
- >>> # value = np.array(value)
- >>> # value = value.reshape([2, 4])
- >>> init = tf.constant_initializer(value)
-
- >>> print('fitting shape:')
- >>> with tf.Session():
- >>> x = tf.get_variable('x', shape=[2, 4], initializer=init)
- >>> x.initializer.run()
- >>> print(x.eval())
-
- fitting shape:
- [[ 0. 1. 2. 3.]
- [ 4. 5. 6. 7.]]
-
- >>> print('larger shape:')
- >>> with tf.Session():
- >>> x = tf.get_variable('x', shape=[3, 4], initializer=init)
- >>> x.initializer.run()
- >>> print(x.eval())
-
- larger shape:
- [[ 0. 1. 2. 3.]
- [ 4. 5. 6. 7.]
- [ 7. 7. 7. 7.]]
-
- >>> print('smaller shape:')
- >>> with tf.Session():
- >>> x = tf.get_variable('x', shape=[2, 3], initializer=init)
-
- ValueError: Too many elements provided. Needed at most 6, but received 8
-
- >>> print('shape verification:')
- >>> init_verify = tf.constant_initializer(value, verify_shape=True)
- >>> with tf.Session():
- >>> x = tf.get_variable('x', shape=[3, 4], initializer=init_verify)
-
- TypeError: Expected Tensor's shape: (3, 4), got (8,).
-```
-- - -
-
-#### `tf.constant_initializer.__call__(shape, dtype=None, partition_info=None)` {#constant_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.constant_initializer.__init__(value=0, dtype=tf.float32, verify_shape=False)` {#constant_initializer.__init__}
-
-
-
-
-
-- - -
-
-### `class tf.random_normal_initializer` {#random_normal_initializer}
-
-Initializer that generates tensors with a normal distribution.
-
-Args:
- mean: a python scalar or a scalar tensor. Mean of the random values
- to generate.
- stddev: a python scalar or a scalar tensor. Standard deviation of the
- random values to generate.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
- dtype: The data type. Only floating point types are supported.
-- - -
-
-#### `tf.random_normal_initializer.__call__(shape, dtype=None, partition_info=None)` {#random_normal_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.random_normal_initializer.__init__(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)` {#random_normal_initializer.__init__}
-
-
-
-
-
-- - -
-
-### `class tf.truncated_normal_initializer` {#truncated_normal_initializer}
-
-Initializer that generates a truncated normal distribution.
-
-These values are similar to values from a `random_normal_initializer`
-except that values more than two standard deviations from the mean
-are discarded and re-drawn. This is the recommended initializer for
-neural network weights and filters.
-
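-For example, a weight matrix might be initialized like this (the shape and
-`stddev` below are illustrative):
-
-```python
-init = tf.truncated_normal_initializer(mean=0.0, stddev=0.1)
-w = tf.get_variable("w", shape=[784, 256], initializer=init)
-```
-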
-Args:
- mean: a python scalar or a scalar tensor. Mean of the random values
- to generate.
- stddev: a python scalar or a scalar tensor. Standard deviation of the
- random values to generate.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
- dtype: The data type. Only floating point types are supported.
-- - -
-
-#### `tf.truncated_normal_initializer.__call__(shape, dtype=None, partition_info=None)` {#truncated_normal_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.truncated_normal_initializer.__init__(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)` {#truncated_normal_initializer.__init__}
-
-
-
-
-
-- - -
-
-### `class tf.random_uniform_initializer` {#random_uniform_initializer}
-
-Initializer that generates tensors with a uniform distribution.
-
-Args:
- minval: A python scalar or a scalar tensor. Lower bound of the range
- of random values to generate.
- maxval: A python scalar or a scalar tensor. Upper bound of the range
- of random values to generate. Defaults to 1 for float types.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
- dtype: The data type.
-- - -
-
-#### `tf.random_uniform_initializer.__call__(shape, dtype=None, partition_info=None)` {#random_uniform_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.random_uniform_initializer.__init__(minval=0, maxval=None, seed=None, dtype=tf.float32)` {#random_uniform_initializer.__init__}
-
-
-
-
-
-- - -
-
-### `class tf.uniform_unit_scaling_initializer` {#uniform_unit_scaling_initializer}
-
-Initializer that generates tensors without scaling variance.
-
-When initializing a deep network, it is in principle advantageous to keep
-the scale of the input variance constant, so it does not explode or diminish
-by reaching the final layer. If the input is `x` and the operation `x * W`,
-and we want to initialize `W` uniformly at random, we need to pick `W` from
-
- [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]
-
-to keep the scale intact, where `dim = W.shape[0]` (the size of the input).
-A similar calculation for convolutional networks gives an analogous result
-with `dim` equal to the product of the first 3 dimensions. When
-nonlinearities are present, we need to multiply this by a constant `factor`.
-See [Sussillo et al., 2014](https://arxiv.org/abs/1412.6558)
-([pdf](http://arxiv.org/pdf/1412.6558.pdf)) for deeper motivation, experiments
-and the calculation of constants. In section 2.3 there, the constants were
-numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.
-
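-As a rough sketch of the bound above (the numbers are illustrative; the
-initializer computes this internally):
-
-```python
-import math
-dim = 256                 # W.shape[0], the size of the input
-factor = 1.15             # approximate constant for tanh from the paper
-limit = factor * math.sqrt(3) / math.sqrt(dim)
-# Values are drawn uniformly from [-limit, limit]:
-init = tf.uniform_unit_scaling_initializer(factor=factor)
-```
-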
-Args:
- factor: Float. A multiplicative factor by which the values will be scaled.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
- dtype: The data type. Only floating point types are supported.
-- - -
-
-#### `tf.uniform_unit_scaling_initializer.__call__(shape, dtype=None, partition_info=None)` {#uniform_unit_scaling_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.uniform_unit_scaling_initializer.__init__(factor=1.0, seed=None, dtype=tf.float32)` {#uniform_unit_scaling_initializer.__init__}
-
-
-
-
-
-- - -
-
-### `class tf.zeros_initializer` {#zeros_initializer}
-
-Initializer that generates tensors initialized to 0.
-- - -
-
-#### `tf.zeros_initializer.__call__(shape, dtype=None, partition_info=None)` {#zeros_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.zeros_initializer.__init__(dtype=tf.float32)` {#zeros_initializer.__init__}
-
-
-
-
-
-- - -
-
-### `class tf.ones_initializer` {#ones_initializer}
-
-Initializer that generates tensors initialized to 1.
-- - -
-
-#### `tf.ones_initializer.__call__(shape, dtype=None, partition_info=None)` {#ones_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.ones_initializer.__init__(dtype=tf.float32)` {#ones_initializer.__init__}
-
-
-
-
-
-- - -
-
-### `class tf.orthogonal_initializer` {#orthogonal_initializer}
-
-Initializer that generates an orthogonal matrix.
-
-If the shape of the tensor to initialize is two-dimensional, it is initialized
-with an orthogonal matrix obtained from the singular value decomposition of a
-matrix of uniform random numbers.
-
-If the shape of the tensor to initialize is more than two-dimensional,
-a matrix of shape `(shape[0] * ... * shape[n - 2], shape[n - 1])`
-is initialized, where `n` is the length of the shape vector.
-The matrix is subsequently reshaped to give a tensor of the desired shape.
-
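-A short sketch that exercises the property in the 2-D case (the shape is
-chosen arbitrarily):
-
-```python
-init = tf.orthogonal_initializer()
-w = tf.get_variable("w_ortho", shape=[4, 4], initializer=init)
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  w_val = sess.run(w)   # w_val.T.dot(w_val) is close to the 4x4 identity
-```
-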
-Args:
- gain: multiplicative factor to apply to the orthogonal matrix
- dtype: The type of the output.
- seed: A Python integer. Used to create random seeds. See
- [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed)
- for behavior.
-- - -
-
-#### `tf.orthogonal_initializer.__call__(shape, dtype=None, partition_info=None)` {#orthogonal_initializer.__call__}
-
-
-
-
-- - -
-
-#### `tf.orthogonal_initializer.__init__(gain=1.0, dtype=tf.float32, seed=None)` {#orthogonal_initializer.__init__}
-
-
-
-
-
-- - -
-
-### `tf.fixed_size_partitioner(num_shards, axis=0)` {#fixed_size_partitioner}
-
-Partitioner to specify a fixed number of shards along given axis.
-
-##### Args:
-
-
-* <b>`num_shards`</b>: `int`, number of shards to partition variable.
-* <b>`axis`</b>: `int`, axis to partition on.
-
-##### Returns:
-
- A partition function usable as the `partitioner` argument to
- `variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
-
-
-- - -
-
-### `tf.variable_axis_size_partitioner(max_shard_bytes, axis=0, bytes_per_string_element=16, max_shards=None)` {#variable_axis_size_partitioner}
-
-Get a partitioner for VariableScope to keep shards below `max_shard_bytes`.
-
-This partitioner will shard a Variable along one axis, attempting to keep
-the maximum shard size below `max_shard_bytes`. In practice, this is not
-always possible when sharding along only one axis. When this happens, the
-axis is sharded as much as possible (i.e., every slice along that axis
-becomes a separate shard).
-
-If the partitioner hits the `max_shards` limit, then each shard may end up
-larger than `max_shard_bytes`. By default `max_shards` equals `None` and no
-limit on the number of shards is enforced.
-
-One reasonable value for `max_shard_bytes` is `(64 << 20) - 1`, or almost
-`64MB`, to keep below the protobuf byte limit.
-
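-For instance, to keep each shard of a large embedding below that limit (the
-shape is illustrative):
-
-```python
-part = tf.variable_axis_size_partitioner(max_shard_bytes=(64 << 20) - 1)
-with tf.variable_scope("big", partitioner=part):
-  table = tf.get_variable("table", shape=[2000000, 128], dtype=tf.float32)
-```
-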
-##### Args:
-
-
-* <b>`max_shard_bytes`</b>: The maximum size any given shard is allowed to be.
-* <b>`axis`</b>: The axis to partition along. Default: outermost axis.
-* <b>`bytes_per_string_element`</b>: If the `Variable` is of type string, this provides
- an estimate of how large each scalar in the `Variable` is.
-* <b>`max_shards`</b>: An `int`; the maximum number of shards to create,
-    taking precedence over `max_shard_bytes`.
-
-##### Returns:
-
- A partition function usable as the `partitioner` argument to
- `variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If any of the byte counts are non-positive.
-
-
-- - -
-
-### `tf.min_max_variable_partitioner(max_partitions=1, axis=0, min_slice_size=262144, bytes_per_string_element=16)` {#min_max_variable_partitioner}
-
-Partitioner to allocate minimum size per slice.
-
-Returns a partitioner that partitions the variable of given shape and dtype
-such that each partition has a minimum of `min_slice_size` slice of the
-variable. The maximum number of such partitions (upper bound) is given by
-`max_partitions`.
-
-##### Args:
-
-
-* <b>`max_partitions`</b>: Upper bound on the number of partitions. Defaults to 1.
-* <b>`axis`</b>: Axis along which to partition the variable. Defaults to 0.
-* <b>`min_slice_size`</b>: Minimum size of the variable slice per partition. Defaults
- to 256K.
-* <b>`bytes_per_string_element`</b>: If the `Variable` is of type string, this provides
- an estimate of how large each scalar in the `Variable` is.
-
-##### Returns:
-
- A partition function usable as the `partitioner` argument to
- `variable_scope`, `get_variable`, and `get_partitioned_variable_list`.
-
-
-- - -
-
-### `tf.scatter_update(ref, indices, updates, use_locking=None, name=None)` {#scatter_update}
-
-Applies sparse updates to a variable reference.
-
-This operation computes
-
- # Scalar indices
- ref[indices, ...] = updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] = updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-If values in `ref` are to be updated more than once, because there are
-duplicate entries in `indices`, the order in which the updates happen
-for each value is undefined.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterUpdate.png" alt>
-</div>
-
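-A minimal runnable sketch of the vector-indices case:
-
-```python
-ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
-update = tf.scatter_update(ref, [4, 3, 1, 7], [9, 10, 11, 12])
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(update))  # => [ 1 11  3 10  9  6  7 12]
-```
-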
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of updated values to store in `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `True`.
- If True, the assignment will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
-
-- - -
-
-### `tf.scatter_add(ref, indices, updates, use_locking=None, name=None)` {#scatter_add}
-
-Adds sparse updates to a variable reference.
-
-This operation computes
-
- # Scalar indices
- ref[indices, ...] += updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] += updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-Duplicate entries are handled correctly: if multiple `indices` reference
-the same location, their contributions add.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterAdd.png" alt>
-</div>
-
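-For example, duplicate indices accumulate (the values are illustrative):
-
-```python
-ref = tf.Variable([0, 0, 0])
-add = tf.scatter_add(ref, [1, 1, 2], [5, 5, 7])
-with tf.Session() as sess:
-  sess.run(tf.global_variables_initializer())
-  print(sess.run(add))  # => [ 0 10  7]
-```
-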
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of updated values to add to `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the addition will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
-
-- - -
-
-### `tf.scatter_sub(ref, indices, updates, use_locking=None, name=None)` {#scatter_sub}
-
-Subtracts sparse updates to a variable reference.
-
- # Scalar indices
- ref[indices, ...] -= updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] -= updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-Duplicate entries are handled correctly: if multiple `indices` reference
-the same location, their (negated) contributions add.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;">
-<img style="width:100%" src="../../images/ScatterSub.png" alt>
-</div>
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of updated values to subtract from `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the subtraction will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
-
-- - -
-
-### `tf.scatter_mul(ref, indices, updates, use_locking=None, name=None)` {#scatter_mul}
-
-Multiplies sparse updates into a variable reference.
-
-This operation computes
-
- # Scalar indices
- ref[indices, ...] *= updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] *= updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-Duplicate entries are handled correctly: if multiple `indices` reference
-the same location, their contributions multiply.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of updated values to multiply to `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the operation will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
-
-- - -
-
-### `tf.scatter_div(ref, indices, updates, use_locking=None, name=None)` {#scatter_div}
-
-Divides a variable reference by sparse updates.
-
-This operation computes
-
- # Scalar indices
- ref[indices, ...] /= updates[...]
-
- # Vector indices (for each i)
- ref[indices[i], ...] /= updates[i, ...]
-
- # High rank indices (for each i, ..., j)
- ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]
-
-This operation outputs `ref` after the update is done.
-This makes it easier to chain operations that need to use the updated value.
-
-Duplicate entries are handled correctly: if multiple `indices` reference
-the same location, their contributions divide.
-
-Requires `updates.shape = indices.shape + ref.shape[1:]`.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
- Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- A tensor of indices into the first dimension of `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`.
- A tensor of values that `ref` is divided by.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
- If True, the operation will be protected by a lock;
- otherwise the behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- Same as `ref`. Returned as a convenience for operations that want
- to use the updated values after the update is done.
-
-
-- - -
-
-### `tf.scatter_nd_update(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_update}
-
-Applies sparse `updates` to individual values or slices within a given
-variable according to `indices`.
-
-`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
-
-`indices` must be an integer tensor containing indices into `ref`.
-Its shape must be `[d_0, ..., d_{Q-2}, K]`, where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
-dimension of `ref`.
-
-`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
-
-```
-[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
-```
-
-For example, say we want to update 4 scattered elements in a rank-1 tensor
-with 8 elements. In Python, that update would look like this:
-
- ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
- indices = tf.constant([[4], [3], [1] ,[7]])
- updates = tf.constant([9, 10, 11, 12])
- update = tf.scatter_nd_update(ref, indices, updates)
- with tf.Session() as sess:
-      print(sess.run(update))
-
-The resulting update to ref would look like this:
-
- [1, 11, 3, 10, 9, 6, 7, 12]
-
-See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to
-slices.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`,
-    `int64`. A tensor of indices into `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`. A tensor of
-    updated values to store in `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `True`.
-    If `True`, the assignment will be protected by a lock; otherwise the
-    behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-  A mutable `Tensor`. Has the same type as `ref`. Returned as a convenience
-  for operations that want to use the updated values after the update is done.
-
-
-- - -
-
-### `tf.scatter_nd_add(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_add}
-
-Applies sparse addition between `updates` and individual values or slices
-within a given variable according to `indices`.
-
-`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
-
-`indices` must be an integer tensor containing indices into `ref`.
-Its shape must be `[d_0, ..., d_{Q-2}, K]`, where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
-dimension of `ref`.
-
-`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
-
-```
-[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
-```
-
-For example, say we want to add 4 scattered elements to a rank-1 tensor
-with 8 elements. In Python, that addition would look like this:
-
- ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
- indices = tf.constant([[4], [3], [1], [7]])
- updates = tf.constant([9, 10, 11, 12])
- add = tf.scatter_nd_add(ref, indices, updates)
- with tf.Session() as sess:
-      print(sess.run(add))
-
-The resulting update to ref would look like this:
-
- [1, 13, 3, 14, 14, 6, 7, 20]
-
-See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to
-slices.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-    Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`,
-    `int64`. A tensor of indices into `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`. A tensor of
-    updated values to add to `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
-    If `True`, the addition will be protected by a lock; otherwise the
-    behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-  A mutable `Tensor`. Has the same type as `ref`. Returned as a convenience
-  for operations that want to use the updated values after the update is done.
-
-
-- - -
-
-### `tf.scatter_nd_sub(ref, indices, updates, use_locking=None, name=None)` {#scatter_nd_sub}
-
-Applies sparse subtraction between `updates` and individual values or slices
-within a given variable according to `indices`.
-
-`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
-
-`indices` must be an integer tensor containing indices into `ref`.
-Its shape must be `[d_0, ..., d_{Q-2}, K]`, where `0 < K <= P`.
-
-The innermost dimension of `indices` (with length `K`) corresponds to
-indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
-dimension of `ref`.
-
-`updates` is a `Tensor` of rank `Q-1+P-K` with shape:
-
-```
-[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
-```
-
-For example, say we want to subtract 4 scattered elements from a rank-1 tensor
-with 8 elements. In Python, that subtraction would look like this:
-
- ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
- indices = tf.constant([[4], [3], [1], [7]])
- updates = tf.constant([9, 10, 11, 12])
- sub = tf.scatter_nd_sub(ref, indices, updates)
- with tf.Session() as sess:
-      print(sess.run(sub))
-
-The resulting update to ref would look like this:
-
- [1, -9, 3, -6, -4, 6, 7, -4]
-
-See [tf.scatter_nd](#scatter_nd) for more details about how to make updates to
-slices.
-
-##### Args:
-
-
-* <b>`ref`</b>: A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.
-    Should be from a `Variable` node.
-* <b>`indices`</b>: A `Tensor`. Must be one of the following types: `int32`,
-    `int64`. A tensor of indices into `ref`.
-* <b>`updates`</b>: A `Tensor`. Must have the same type as `ref`. A tensor of
-    updated values to subtract from `ref`.
-* <b>`use_locking`</b>: An optional `bool`. Defaults to `False`.
-    If `True`, the subtraction will be protected by a lock; otherwise the
-    behavior is undefined, but may exhibit less contention.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-  A mutable `Tensor`. Has the same type as `ref`. Returned as a convenience
-  for operations that want to use the updated values after the update is done.
-
-
-- - -
-
-### `tf.sparse_mask(a, mask_indices, name=None)` {#sparse_mask}
-
-Masks elements of `IndexedSlices`.
-
-Given an `IndexedSlices` instance `a`, returns another `IndexedSlices` that
-contains a subset of the slices of `a`. Only the slices at indices not
-specified in `mask_indices` are returned.
-
-This is useful when you need to extract a subset of slices in an
-`IndexedSlices` object.
-
-For example:
-
-```python
-# `a` contains slices at indices [12, 26, 37, 45] from a large tensor
-# with shape [1000, 10]
-a.indices => [12, 26, 37, 45]
-tf.shape(a.values) => [4, 10]
-
-# `b` will be the subset of `a` slices at its second and third indices, so
-# we want to mask its first and last indices (which are at absolute
-# indices 12, 45)
-b = tf.sparse_mask(a, [12, 45])
-
-b.indices => [26, 37]
-tf.shape(b.values) => [2, 10]
-```
-
-##### Args:
-
-
-* <b>`a`</b>: An `IndexedSlices` instance.
-* <b>`mask_indices`</b>: Indices of elements to mask.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The masked `IndexedSlices` instance.
-
-
-- - -
-
-### `class tf.IndexedSlices` {#IndexedSlices}
-
-A sparse representation of a set of tensor slices at given indices.
-
-This class is a simple wrapper for a pair of `Tensor` objects:
-
-* `values`: A `Tensor` of any dtype with shape `[D0, D1, ..., Dn]`.
-* `indices`: A 1-D integer `Tensor` with shape `[D0]`.
-
-An `IndexedSlices` is typically used to represent a subset of a larger
-tensor `dense` of shape `[LARGE0, D1, .. , DN]` where `LARGE0 >> D0`.
-The values in `indices` are the indices in the first dimension of
-the slices that have been extracted from the larger tensor.
-
-The dense tensor `dense` represented by an `IndexedSlices` `slices` has
-
-```python
-dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...]
-```
-
-The `IndexedSlices` class is used principally in the definition of
-gradients for operations that have sparse gradients
-(e.g. [`tf.gather`](../../api_docs/python/array_ops.md#gather)).
-
-Contrast this representation with
-[`SparseTensor`](../../api_docs/python/sparse_ops.md#SparseTensor),
-which uses multi-dimensional indices and scalar values.
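-
-A sketch of where an `IndexedSlices` typically shows up (the gradient of a
-gather, with illustrative shapes):
-
-```python
-params = tf.Variable(tf.ones([10, 3]))
-rows = tf.gather(params, [2, 5])
-loss = tf.reduce_sum(rows)
-grad = tf.gradients(loss, params)[0]  # an IndexedSlices, not a dense Tensor
-# grad.indices => [2, 5]; grad.values has shape [2, 3]
-```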
-- - -
-
-#### `tf.IndexedSlices.__init__(values, indices, dense_shape=None)` {#IndexedSlices.__init__}
-
-Creates an `IndexedSlices`.
-
-
-- - -
-
-#### `tf.IndexedSlices.__neg__()` {#IndexedSlices.__neg__}
-
-
-
-
-- - -
-
-#### `tf.IndexedSlices.__str__()` {#IndexedSlices.__str__}
-
-
-
-
-- - -
-
-#### `tf.IndexedSlices.dense_shape` {#IndexedSlices.dense_shape}
-
-A 1-D `Tensor` containing the shape of the corresponding dense tensor.
-
-
-- - -
-
-#### `tf.IndexedSlices.device` {#IndexedSlices.device}
-
-The name of the device on which `values` will be produced, or `None`.
-
-
-- - -
-
-#### `tf.IndexedSlices.dtype` {#IndexedSlices.dtype}
-
-The `DType` of elements in this tensor.
-
-
-- - -
-
-#### `tf.IndexedSlices.graph` {#IndexedSlices.graph}
-
-The `Graph` that contains the values, indices, and shape tensors.
-
-
-- - -
-
-#### `tf.IndexedSlices.indices` {#IndexedSlices.indices}
-
-A 1-D `Tensor` containing the indices of the slices.
-
-
-- - -
-
-#### `tf.IndexedSlices.name` {#IndexedSlices.name}
-
-The name of this `IndexedSlices`.
-
-
-- - -
-
-#### `tf.IndexedSlices.op` {#IndexedSlices.op}
-
-The `Operation` that produces `values` as an output.
-
-
-- - -
-
-#### `tf.IndexedSlices.values` {#IndexedSlices.values}
-
-A `Tensor` containing the values of the slices.
-
-
-
-- - -
-
-### `tf.initialize_all_tables(*args, **kwargs)` {#initialize_all_tables}
-
-Returns an Op that initializes all tables of the default graph. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Use `tf.tables_initializer` instead.
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the initialization op.
-
-##### Returns:
-
- An Op that initializes all tables. Note that if there are
- no tables, the returned Op is a NoOp.
-
-
-- - -
-
-### `tf.tables_initializer(name='init_all_tables')` {#tables_initializer}
-
-Returns an Op that initializes all tables of the default graph.
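-
-As a hedged sketch of typical usage, the returned Op is run once before any
-table lookup ops are evaluated (the graph is assumed to already contain lookup
-tables, e.g. from `tf.contrib.lookup`):
-
-```python
-with tf.Session() as sess:
-  sess.run(tf.tables_initializer())
-  # ... lookup ops can now be evaluated ...
-```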
-
-##### Args:
-
-
-* <b>`name`</b>: Optional name for the initialization op.
-
-##### Returns:
-
- An Op that initializes all tables. Note that if there are
- no tables, the returned Op is a NoOp.
-
-
-- - -
-
-### `tf.train.export_meta_graph(filename=None, meta_info_def=None, graph_def=None, saver_def=None, collection_list=None, as_text=False, graph=None, export_scope=None, clear_devices=False, **kwargs)` {#export_meta_graph}
-
-Returns `MetaGraphDef` proto. Optionally writes it to filename.
-
-This function exports the graph, saver, and collection objects into a
-`MetaGraphDef` protocol buffer with the intention of it being imported
-at a later time or location to restart training, run inference, or serve
-as a subgraph.
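-
-For example, a minimal sketch (the filename and variable are hypothetical):
-
-```python
-# Build a small graph, then export it together with its collections.
-v = tf.Variable(0, name='my_variable')
-saver = tf.train.Saver([v])
-meta_graph_def = tf.train.export_meta_graph(filename='/tmp/my-model.meta')
-```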
-
-##### Args:
-
-
-* <b>`filename`</b>: Optional filename including the path for writing the
- generated `MetaGraphDef` protocol buffer.
-* <b>`meta_info_def`</b>: `MetaInfoDef` protocol buffer.
-* <b>`graph_def`</b>: `GraphDef` protocol buffer.
-* <b>`saver_def`</b>: `SaverDef` protocol buffer.
-* <b>`collection_list`</b>: List of string keys to collect.
-* <b>`as_text`</b>: If `True`, writes the `MetaGraphDef` as an ASCII proto.
-* <b>`graph`</b>: The `Graph` to import into. If `None`, use the default graph.
-* <b>`export_scope`</b>: Optional `string`. Name scope under which to extract
- the subgraph. The scope name will be stripped from the node definitions
- for easy import later into new name scopes. If `None`, the whole graph
- is exported. graph_def and export_scope cannot both be specified.
-* <b>`clear_devices`</b>: Whether or not to clear the device field for an `Operation`
- or `Tensor` during export.
-* <b>`**kwargs`</b>: Optional keyed arguments.
-
-##### Returns:
-
- A `MetaGraphDef` proto.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: When the `GraphDef` is larger than 2GB.
-
-
-- - -
-
-### `tf.train.import_meta_graph(meta_graph_or_file, clear_devices=False, import_scope=None, **kwargs)` {#import_meta_graph}
-
-Recreates a Graph saved in a `MetaGraphDef` proto.
-
-This function takes a `MetaGraphDef` protocol buffer as input. If
-the argument is a file containing a `MetaGraphDef` protocol buffer,
-it constructs a protocol buffer from the file content. The function
-then adds all the nodes from the `graph_def` field to the
-current graph, recreates all the collections, and returns a saver
-constructed from the `saver_def` field.
-
-In combination with `export_meta_graph()`, this function can be used to
-
-* Serialize a graph along with other Python objects such as `QueueRunner`
-  and `Variable` into a `MetaGraphDef`.
-
-* Restart training from a saved graph and checkpoints.
-
-* Run inference from a saved graph and checkpoints.
-
-```Python
-...
-# Create a saver.
-saver = tf.train.Saver(...variables...)
-# Remember the training_op we want to run by adding it to a collection.
-tf.add_to_collection('train_op', train_op)
-sess = tf.Session()
-for step in xrange(1000000):
- sess.run(train_op)
- if step % 1000 == 0:
- # Saves checkpoint, which by default also exports a meta_graph
- # named 'my-model-global_step.meta'.
- saver.save(sess, 'my-model', global_step=step)
-```
-
-Later we can continue training from this saved `meta_graph` without building
-the model from scratch.
-
-```Python
-with tf.Session() as sess:
- new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta')
- new_saver.restore(sess, 'my-save-dir/my-model-10000')
- # tf.get_collection() returns a list. In this example we only want the
- # first one.
- train_op = tf.get_collection('train_op')[0]
- for step in xrange(1000000):
- sess.run(train_op)
-```
-
-NOTE: Restarting training from saved `meta_graph` only works if the
-device assignments have not changed.
-
-##### Args:
-
-
-* <b>`meta_graph_or_file`</b>: `MetaGraphDef` protocol buffer or filename (including
- the path) containing a `MetaGraphDef`.
-* <b>`clear_devices`</b>: Whether or not to clear the device field for an `Operation`
- or `Tensor` during import.
-* <b>`import_scope`</b>: Optional `string`. Name scope to add. Only used when
- initializing from protocol buffer.
-* <b>`**kwargs`</b>: Optional keyed arguments.
-
-##### Returns:
-
- A saver constructed from `saver_def` in `MetaGraphDef` or None.
-
- A None value is returned if no variables exist in the `MetaGraphDef`
- (i.e., there are no variables to restore).
-
-
-- - -
-
-### `tf.all_variables(*args, **kwargs)` {#all_variables}
-
-See `tf.global_variables`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Please use tf.global_variables instead.
-
-
-- - -
-
-### `tf.initialize_all_variables(*args, **kwargs)` {#initialize_all_variables}
-
-See `tf.global_variables_initializer`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Use `tf.global_variables_initializer` instead.
-
-
-- - -
-
-### `tf.initialize_local_variables(*args, **kwargs)` {#initialize_local_variables}
-
-See `tf.local_variables_initializer`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Use `tf.local_variables_initializer` instead.
-
-
-- - -
-
-### `tf.initialize_variables(*args, **kwargs)` {#initialize_variables}
-
-See `tf.variables_initializer`. (deprecated)
-
-THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02.
-Instructions for updating:
-Use `tf.variables_initializer` instead.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/string_ops.md b/tensorflow/g3doc/api_docs/python/string_ops.md
deleted file mode 100644
index ad170934a3..0000000000
--- a/tensorflow/g3doc/api_docs/python/string_ops.md
+++ /dev/null
@@ -1,392 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Strings
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-Operations for working with string Tensors.
-
-See the @{$python/string_ops} guide.
-
-- - -
-
-### `tf.string_to_hash_bucket_fast(input, num_buckets, name=None)` {#string_to_hash_bucket_fast}
-
-Converts each string in the input Tensor to its hash modulo the number of buckets.
-
-The hash function is deterministic on the content of the string within the
-process and will never change. However, it is not suitable for cryptography.
-This function may be used when CPU time is scarce and inputs are trusted or
-unimportant. There is a risk of adversaries constructing inputs that all hash
-to the same bucket. To prevent this problem, use a strong hash function with
-`tf.string_to_hash_bucket_strong`.
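-
-A minimal illustrative sketch (the printed bucket ids are placeholders; the
-actual values depend on the hash function):
-
-```python
-strings = tf.constant(['Hello', 'TensorFlow'])
-buckets = tf.string_to_hash_bucket_fast(strings, num_buckets=10)
-
-with tf.Session() as sess:
-  print(sess.run(buckets))  # e.g. [4 2], values in [0, 10)
-```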
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. The strings to assign a hash bucket.
-* <b>`num_buckets`</b>: An `int` that is `>= 1`. The number of buckets.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
- A Tensor of the same shape as the input.
-
-
-- - -
-
-### `tf.string_to_hash_bucket_strong(input, num_buckets, key, name=None)` {#string_to_hash_bucket_strong}
-
-Converts each string in the input Tensor to its hash modulo the number of buckets.
-
-The hash function is deterministic on the content of the string within the
-process. The hash function is a keyed hash function, where attribute `key`
-defines the key of the hash function. `key` is an array of 2 elements.
-
-A strong hash is important when inputs may be malicious, e.g. URLs with
-additional components. Adversaries could try to make their inputs hash to the
-same bucket for a denial-of-service attack or to skew the results. A strong
-hash prevents this by making it difficult, if not infeasible, to compute inputs
-that hash to the same bucket. This comes at a cost of roughly 4x higher compute
-time than `tf.string_to_hash_bucket_fast`.
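-
-Usage mirrors the fast variant, with an additional two-element `key` (the key
-values below are arbitrary examples):
-
-```python
-strings = tf.constant(['Hello', 'TensorFlow'])
-buckets = tf.string_to_hash_bucket_strong(
-    strings, num_buckets=10, key=[1234567, 7654321])
-# Deterministic for a fixed key; each element is a value in [0, 10).
-```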
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. The strings to assign a hash bucket.
-* <b>`num_buckets`</b>: An `int` that is `>= 1`. The number of buckets.
-* <b>`key`</b>: A list of `ints`.
- The key for the keyed hash function passed as a list of two uint64
- elements.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
- A Tensor of the same shape as the input.
-
-
-- - -
-
-### `tf.string_to_hash_bucket(string_tensor, num_buckets, name=None)` {#string_to_hash_bucket}
-
-Converts each string in the input Tensor to its hash modulo the number of buckets.
-
-The hash function is deterministic on the content of the string within the
-process.
-
-Note that the hash function may change from time to time.
-This functionality will be deprecated; it is recommended to use
-`tf.string_to_hash_bucket_fast()` or `tf.string_to_hash_bucket_strong()`.
-
-##### Args:
-
-
-* <b>`string_tensor`</b>: A `Tensor` of type `string`.
-* <b>`num_buckets`</b>: An `int` that is `>= 1`. The number of buckets.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `int64`.
- A Tensor of the same shape as the input `string_tensor`.
-
-
-- - -
-
-### `tf.reduce_join(inputs, axis=None, keep_dims=False, separator='', name=None, reduction_indices=None)` {#reduce_join}
-
-Joins a string Tensor across the given dimensions.
-
-Computes the string join across dimensions in the given string Tensor of shape
-`[d_0, d_1, ..., d_n-1]`. Returns a new Tensor created by joining the input
-strings with the given separator (default: empty string). Negative indices are
-counted backwards from the end, with `-1` being equivalent to `n - 1`.
-
-For example:
-
-```
-# tensor `a` is [["a", "b"], ["c", "d"]]
-tf.reduce_join(a, 0) ==> ["ac", "bd"]
-tf.reduce_join(a, 1) ==> ["ab", "cd"]
-tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
-tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
-tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
-tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
-tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
-tf.reduce_join(a, [0, 1]) ==> ["acbd"]
-tf.reduce_join(a, [1, 0]) ==> ["abcd"]
-tf.reduce_join(a, []) ==> ["abcd"]
-```
-
-##### Args:
-
-
-* <b>`inputs`</b>: A `Tensor` of type `string`.
- The input to be joined. All reduced indices must have non-zero size.
-* <b>`axis`</b>: A `Tensor` of type `int32`.
- The dimensions to reduce over. Dimensions are reduced in the
- order specified. Omitting `axis` is equivalent to passing
- `[n-1, n-2, ..., 0]`. Negative indices from `-n` to `-1` are supported.
-* <b>`keep_dims`</b>: An optional `bool`. Defaults to `False`.
- If `True`, retain reduced dimensions with length `1`.
-* <b>`separator`</b>: An optional `string`. Defaults to `""`.
- The separator to use when joining.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
- Has shape equal to that of the input with reduced dimensions removed or
- set to `1` depending on `keep_dims`.
-
-
-- - -
-
-### `tf.string_join(inputs, separator=None, name=None)` {#string_join}
-
-Joins the strings in the given list of string tensors into one tensor,
-using the given separator (default: empty string).
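-
-For example, a small sketch:
-
-```python
-a = tf.constant(['abc', 'def'])
-b = tf.constant(['xyz', 'uvw'])
-joined = tf.string_join([a, b], separator='-')
-# => ['abc-xyz', 'def-uvw']
-```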
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of at least 1 `Tensor` objects of type `string`.
- A list of string tensors. The tensors must all have the same shape,
- or be scalars. Scalars may be mixed in; these will be broadcast to the shape
- of non-scalar inputs.
-* <b>`separator`</b>: An optional `string`. Defaults to `""`.
- string, an optional join separator.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
-
-
-- - -
-
-### `tf.string_split(source, delimiter=' ')` {#string_split}
-
-Split elements of `source` based on `delimiter` into a `SparseTensor`.
-
-Let N be the size of source (typically N will be the batch size). Split each
-element of `source` based on `delimiter` and return a `SparseTensor`
-containing the split tokens. Empty tokens are ignored.
-
-If `delimiter` is an empty string, each element of the `source` is split
-into individual strings, each containing one byte. (This includes splitting
-multibyte sequences of UTF-8.) If delimiter contains multiple bytes, it is
-treated as a set of delimiters with each considered a potential split point.
-
-For example, if N = 2, source[0] is 'hello world' and source[1] is 'a b c',
-then the output will be
-
-```
-st.indices = [0, 0;
-              0, 1;
-              1, 0;
-              1, 1;
-              1, 2]
-st.shape = [2, 3]
-st.values = ['hello', 'world', 'a', 'b', 'c']
-```
-
-##### Args:
-
-
-* <b>`source`</b>: `1-D` string `Tensor`, the strings to split.
-* <b>`delimiter`</b>: `0-D` string `Tensor`, the delimiter character, the string should
- be length 0 or 1.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If delimiter is not a string.
-
-##### Returns:
-
- A `SparseTensor` of rank `2`, the strings split according to the delimiter.
- The first column of the indices corresponds to the row in `source` and the
- second column corresponds to the index of the split component in this row.
-
-
-- - -
-
-### `tf.substr(input, pos, len, name=None)` {#substr}
-
-Return substrings from `Tensor` of strings.
-
-For each string in the input `Tensor`, creates a substring starting at index
-`pos` with a total length of `len`.
-
-If `len` defines a substring that would extend beyond the length of the input
-string, then as many characters as possible are used.
-
-If `pos` is negative or specifies a character index larger than the length of
-any of the input strings, then an `InvalidArgumentError` is thrown.
-
-`pos` and `len` must have the same shape, otherwise a `ValueError` is thrown on
-Op creation.
-
-*NOTE*: `Substr` supports broadcasting up to two dimensions. More about
-broadcasting
-[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
-
----
-
-Examples
-
-Using scalar `pos` and `len`:
-
-```
-input = [b'Hello', b'World']
-position = 1
-length = 3
-
-output = [b'ell', b'orl']
-```
-
-Using `pos` and `len` with same shape as `input`:
-
-```
-input = [[b'ten', b'eleven', b'twelve'],
- [b'thirteen', b'fourteen', b'fifteen'],
- [b'sixteen', b'seventeen', b'eighteen']]
-position = [[1, 2, 3],
- [1, 2, 3],
- [1, 2, 3]]
-length = [[2, 3, 4],
- [4, 3, 2],
- [5, 5, 5]]
-
-output = [[b'en', b'eve', b'lve'],
- [b'hirt', b'urt', b'te'],
- [b'ixtee', b'vente', b'hteen']]
-```
-
-Broadcasting `pos` and `len` onto `input`:
-
-```
-input = [[b'ten', b'eleven', b'twelve'],
- [b'thirteen', b'fourteen', b'fifteen'],
- [b'sixteen', b'seventeen', b'eighteen'],
- [b'nineteen', b'twenty', b'twentyone']]
-position = [1, 2, 3]
-length = [1, 2, 3]
-
-output = [[b'e', b'ev', b'lve'],
- [b'h', b'ur', b'tee'],
- [b'i', b've', b'hte'],
- [b'i', b'en', b'nty']]
-```
-
-Broadcasting `input` onto `pos` and `len`:
-
-```
-input = b'thirteen'
-position = [1, 5, 7]
-length = [3, 2, 1]
-
-output = [b'hir', b'ee', b'n']
-```
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. Tensor of strings
-* <b>`pos`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`.
- Scalar defining the position of first character in each substring
-* <b>`len`</b>: A `Tensor`. Must have the same type as `pos`.
- Scalar defining the number of characters to include in each substring
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. Tensor of substrings
-
-
-- - -
-
-### `tf.as_string(input, precision=None, scientific=None, shortest=None, width=None, fill=None, name=None)` {#as_string}
-
-Converts each entry in the given tensor to strings. Supports many numeric
-types and boolean.
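-
-For example, a brief sketch of formatting floats:
-
-```python
-x = tf.constant([3.14159, 2.71828])
-s = tf.as_string(x, precision=2)
-# => [b'3.14', b'2.72']
-```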
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`. Must be one of the following types: `int32`, `int64`, `complex64`, `float32`, `float64`, `bool`, `int8`.
-* <b>`precision`</b>: An optional `int`. Defaults to `-1`.
- The post-decimal precision to use for floating point numbers.
- Only used if precision > -1.
-* <b>`scientific`</b>: An optional `bool`. Defaults to `False`.
- Use scientific notation for floating point numbers.
-* <b>`shortest`</b>: An optional `bool`. Defaults to `False`.
- Use shortest representation (either scientific or standard) for
- floating point numbers.
-* <b>`width`</b>: An optional `int`. Defaults to `-1`.
- Pad pre-decimal numbers to this width.
- Applies to both floating point and integer numbers.
- Only used if width > -1.
-* <b>`fill`</b>: An optional `string`. Defaults to `""`.
- The value to pad if width > -1. If empty, pads with spaces.
- Another typical value is '0'. String cannot be longer than 1 character.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`.
-
-
-- - -
-
-### `tf.encode_base64(input, pad=None, name=None)` {#encode_base64}
-
-Encode strings into web-safe base64 format.
-
-Refer to the following article for more information on the base64 format:
-en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the
-end so that the encoded string has a length that is a multiple of 4. See the
-Padding section of the link above.
-
-Web-safe means that the encoder uses - and _ instead of + and /.
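-
-A small round-trip sketch with `tf.decode_base64` (the outputs shown are
-indicative):
-
-```python
-raw = tf.constant(['hello'])
-encoded = tf.encode_base64(raw)      # => [b'aGVsbG8']  (no '=' padding)
-decoded = tf.decode_base64(encoded)  # => [b'hello']
-```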
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. Strings to be encoded.
-* <b>`pad`</b>: An optional `bool`. Defaults to `False`.
- Bool whether padding is applied at the ends.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. Input strings encoded in base64.
-
-
-- - -
-
-### `tf.decode_base64(input, name=None)` {#decode_base64}
-
-Decode web-safe base64-encoded strings.
-
-Input may or may not have padding at the end. See EncodeBase64 for padding.
-Web-safe means that input must use - and _ instead of + and /.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor` of type `string`. Base64 strings to decode.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor` of type `string`. Decoded strings.
-
-
diff --git a/tensorflow/g3doc/api_docs/python/summary.md b/tensorflow/g3doc/api_docs/python/summary.md
deleted file mode 100644
index e7b3fe3511..0000000000
--- a/tensorflow/g3doc/api_docs/python/summary.md
+++ /dev/null
@@ -1,1004 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Summary Operations
-[TOC]
-
-Tensor summaries for exporting information about a model.
-
-See the @{$python/summary} guide.
-
-- - -
-
-### `class tf.summary.FileWriter` {#FileWriter}
-
-Writes `Summary` protocol buffers to event files.
-
-The `FileWriter` class provides a mechanism to create an event file in a
-given directory and add summaries and events to it. The class updates the
-file contents asynchronously. This allows a training program to call methods
-to add data to the file directly from the training loop, without slowing down
-training.
-- - -
-
-#### `tf.summary.FileWriter.__init__(logdir, graph=None, max_queue=10, flush_secs=120, graph_def=None)` {#FileWriter.__init__}
-
-Creates a `FileWriter` and an event file.
-
-On construction the summary writer creates a new event file in `logdir`.
-This event file will contain `Event` protocol buffers constructed when you
-call one of the following functions: `add_summary()`, `add_session_log()`,
-`add_event()`, or `add_graph()`.
-
-If you pass a `Graph` to the constructor it is added to
-the event file. (This is equivalent to calling `add_graph()` later).
-
-TensorBoard will pick up the graph from the file and display it graphically so
-you can interactively explore the graph you built. You will usually pass
-the graph from the session in which you launched it:
-
-```python
-...create a graph...
-# Launch the graph in a session.
-sess = tf.Session()
-# Create a summary writer, add the 'graph' to the event file.
-writer = tf.summary.FileWriter(<some-directory>, sess.graph)
-```
-
-The other arguments to the constructor control the asynchronous writes to
-the event file:
-
-* `flush_secs`: How often, in seconds, to flush the added summaries
- and events to disk.
-* `max_queue`: Maximum number of summaries or events pending to be
- written to disk before one of the 'add' calls blocks.
-
-##### Args:
-
-
-* <b>`logdir`</b>: A string. Directory where event file will be written.
-* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
-* <b>`max_queue`</b>: Integer. Size of the queue for pending events and summaries.
-* <b>`flush_secs`</b>: Number. How often, in seconds, to flush the
- pending events and summaries to disk.
-* <b>`graph_def`</b>: DEPRECATED: Use the `graph` argument instead.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_event(event)` {#FileWriter.add_event}
-
-Adds an event to the event file.
-
-##### Args:
-
-
-* <b>`event`</b>: An `Event` protocol buffer.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_graph(graph, global_step=None, graph_def=None)` {#FileWriter.add_graph}
-
-Adds a `Graph` to the event file.
-
-The graph described by the protocol buffer will be displayed by
-TensorBoard. Most users pass a graph in the constructor instead.
-
-##### Args:
-
-
-* <b>`graph`</b>: A `Graph` object, such as `sess.graph`.
-* <b>`global_step`</b>: Number. Optional global step counter to record with the
- graph.
-* <b>`graph_def`</b>: DEPRECATED. Use the `graph` parameter instead.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both graph and graph_def are passed to the method.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_meta_graph(meta_graph_def, global_step=None)` {#FileWriter.add_meta_graph}
-
-Adds a `MetaGraphDef` to the event file.
-
-The `MetaGraphDef` allows running the given graph via
-`saver.import_meta_graph()`.
-
-##### Args:
-
-
-* <b>`meta_graph_def`</b>: A `MetaGraphDef` object, often as returned by
- `saver.export_meta_graph()`.
-* <b>`global_step`</b>: Number. Optional global step counter to record with the
- graph.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `meta_graph_def` is not an instance of `MetaGraphDef`.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_run_metadata(run_metadata, tag, global_step=None)` {#FileWriter.add_run_metadata}
-
-Adds metadata information for a single session.run() call.
-
-##### Args:
-
-
-* <b>`run_metadata`</b>: A `RunMetadata` protobuf object.
-* <b>`tag`</b>: The tag name for this metadata.
-* <b>`global_step`</b>: Number. Optional global step counter to record with the
- StepStats.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the provided tag was already used for this type of event.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_session_log(session_log, global_step=None)` {#FileWriter.add_session_log}
-
-Adds a `SessionLog` protocol buffer to the event file.
-
-This method wraps the provided session log in an `Event` protocol buffer
-and adds it to the event file.
-
-##### Args:
-
-
-* <b>`session_log`</b>: A `SessionLog` protocol buffer.
-* <b>`global_step`</b>: Number. Optional global step value to record with the
- summary.
-
-
-- - -
-
-#### `tf.summary.FileWriter.add_summary(summary, global_step=None)` {#FileWriter.add_summary}
-
-Adds a `Summary` protocol buffer to the event file.
-
-This method wraps the provided summary in an `Event` protocol buffer
-and adds it to the event file.
-
-You can pass the result of evaluating any summary op, using
-[`Session.run()`](client.md#Session.run) or
-[`Tensor.eval()`](framework.md#Tensor.eval), to this
-function. Alternatively, you can pass a `tf.Summary` protocol
-buffer that you populate with your own data. The latter is
-commonly done to report evaluation results in event files.
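-
-As an illustrative sketch, both forms look like this (`merged_summary_op`,
-`sess`, `writer`, and `step` are hypothetical):
-
-```python
-# 1) The result of evaluating a summary op:
-summary_str = sess.run(merged_summary_op)
-writer.add_summary(summary_str, global_step=step)
-
-# 2) A manually populated Summary protocol buffer:
-summary = tf.Summary(value=[
-    tf.Summary.Value(tag='eval_accuracy', simple_value=0.91)])
-writer.add_summary(summary, global_step=step)
-```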
-
-##### Args:
-
-
-* <b>`summary`</b>: A `Summary` protocol buffer, optionally serialized as a string.
-* <b>`global_step`</b>: Number. Optional global step value to record with the
- summary.
-
-
-- - -
-
-#### `tf.summary.FileWriter.close()` {#FileWriter.close}
-
-Flushes the event file to disk and closes the file.
-
-Call this method when you do not need the summary writer anymore.
-
-
-- - -
-
-#### `tf.summary.FileWriter.flush()` {#FileWriter.flush}
-
-Flushes the event file to disk.
-
-Call this method to make sure that all pending events have been written to
-disk.
-
-
-- - -
-
-#### `tf.summary.FileWriter.get_logdir()` {#FileWriter.get_logdir}
-
-Returns the directory where the event file will be written.
-
-
-- - -
-
-#### `tf.summary.FileWriter.reopen()` {#FileWriter.reopen}
-
-Reopens the EventFileWriter.
-
-Can be called after `close()` to add more events in the same directory.
-The events will go into a new events file.
-
-Does nothing if the EventFileWriter was not closed.
-
-
-
-- - -
-
-### `class tf.summary.FileWriterCache` {#FileWriterCache}
-
-Cache for file writers.
-
-This class caches file writers, one per directory.
-- - -
-
-#### `tf.summary.FileWriterCache.clear()` {#FileWriterCache.clear}
-
-Clear cached summary writers. Currently only used for unit tests.
-
-
-- - -
-
-#### `tf.summary.FileWriterCache.get(logdir)` {#FileWriterCache.get}
-
-Returns the FileWriter for the specified directory.
-
-##### Args:
-
-
-* <b>`logdir`</b>: str, name of the directory.
-
-##### Returns:
-
- A `FileWriter`.
-
-
-
-- - -
-
-### `tf.summary.tensor_summary(name, tensor, summary_description=None, collections=None)` {#tensor_summary}
-
-Outputs a `Summary` protocol buffer with a serialized tensor.proto.
-
-The generated
-[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
-has one summary value containing the input tensor.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as the series name in
- TensorBoard.
-* <b>`tensor`</b>: A tensor of any type and shape to serialize.
-* <b>`summary_description`</b>: Optional summary_pb2.SummaryDescription()
-* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
- added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer.
-
-
-- - -
-
-### `tf.summary.scalar(name, tensor, collections=None)` {#scalar}
-
-Outputs a `Summary` protocol buffer containing a single scalar value.
-
-The generated Summary has a Tensor.proto containing the input Tensor.
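-
-For example, a minimal sketch (the loss tensor is a stand-in):
-
-```python
-loss = tf.constant(0.25)  # stand-in for a real scalar training loss
-loss_summary = tf.summary.scalar('loss', loss)
-# Evaluating `loss_summary` yields a serialized `Summary` protocol buffer
-# that can be passed to `FileWriter.add_summary()`.
-```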
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as the series name in
- TensorBoard.
-* <b>`tensor`</b>: A real numeric Tensor containing a single value.
-* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
- added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
-
-##### Returns:
-
- A scalar `Tensor` of type `string` containing a `Summary` protobuf.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If tensor has the wrong shape or type.
-
-
-- - -
-
-### `tf.summary.histogram(name, values, collections=None)` {#histogram}
-
-Outputs a `Summary` protocol buffer with a histogram.
-
-The generated
-[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
-has one summary value containing a histogram for `values`.
-
-This op reports an `InvalidArgument` error if any value is not finite.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as a series name in
- TensorBoard.
-* <b>`values`</b>: A real numeric `Tensor`. Any shape. Values to use to
- build the histogram.
-* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
- added to these collections. Defaults to `[GraphKeys.SUMMARIES]`.
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer.
-
-
-- - -
-
-### `tf.summary.audio(name, tensor, sample_rate, max_outputs=3, collections=None)` {#audio}
-
-Outputs a `Summary` protocol buffer with audio.
-
-The summary has up to `max_outputs` summary values containing audio. The
-audio is built from `tensor` which must be 3-D with shape `[batch_size,
-frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are
-assumed to be in the range of `[-1.0, 1.0]` with a sample rate of
-`sample_rate`.
-
-The `tag` in the outputted Summary.Value protobufs is generated based on the
-name, with a suffix depending on the max_outputs setting:
-
-* If `max_outputs` is 1, the summary value tag is '*name*/audio'.
-* If `max_outputs` is greater than 1, the summary value tags are
- generated sequentially as '*name*/audio/0', '*name*/audio/1', etc.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as a series name in
- TensorBoard.
-* <b>`tensor`</b>: A 3-D `float32` `Tensor` of shape `[batch_size, frames, channels]`
- or a 2-D `float32` `Tensor` of shape `[batch_size, frames]`.
-* <b>`sample_rate`</b>: A Scalar `float32` `Tensor` indicating the sample rate of the
- signal in hertz.
-* <b>`max_outputs`</b>: Max number of batch elements to generate audio for.
-* <b>`collections`</b>: Optional list of ops.GraphKeys. The collections to add the
- summary to. Defaults to [_ops.GraphKeys.SUMMARIES]
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer.
-
-
-- - -
-
-### `tf.summary.image(name, tensor, max_outputs=3, collections=None)` {#image}
-
-Outputs a `Summary` protocol buffer with images.
-
-The summary has up to `max_outputs` summary values containing images. The
-images are built from `tensor` which must be 4-D with shape `[batch_size,
-height, width, channels]` and where `channels` can be:
-
-* 1: `tensor` is interpreted as Grayscale.
-* 3: `tensor` is interpreted as RGB.
-* 4: `tensor` is interpreted as RGBA.
-
-The images have the same number of channels as the input tensor. For float
-input, the values are normalized one image at a time to fit in the range
-`[0, 255]`. `uint8` values are unchanged. The op uses two different
-normalization algorithms:
-
-* If the input values are all positive, they are rescaled so the largest one
- is 255.
-
-* If any input value is negative, the values are shifted so input value 0.0
- is at 127. They are then rescaled so that either the smallest value is 0,
- or the largest one is 255.
-
-The `tag` in the outputted Summary.Value protobufs is generated based on the
-name, with a suffix depending on the max_outputs setting:
-
-* If `max_outputs` is 1, the summary value tag is '*name*/image'.
-* If `max_outputs` is greater than 1, the summary value tags are
- generated sequentially as '*name*/image/0', '*name*/image/1', etc.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the generated node. Will also serve as a series name in
- TensorBoard.
-* <b>`tensor`</b>: A 4-D `uint8` or `float32` `Tensor` of shape `[batch_size, height,
- width, channels]` where `channels` is 1, 3, or 4.
-* <b>`max_outputs`</b>: Max number of batch elements to generate images for.
-* <b>`collections`</b>: Optional list of ops.GraphKeys. The collections to add the
- summary to. Defaults to [_ops.GraphKeys.SUMMARIES]
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer.
-
-
-- - -
-
-### `tf.summary.merge(inputs, collections=None, name=None)` {#merge}
-
-Merges summaries.
-
-This op creates a
-[`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
-protocol buffer that contains the union of all the values in the input
-summaries.
-
-When the Op is run, it reports an `InvalidArgument` error if multiple values
-in the summaries to merge use the same tag.
-
-##### Args:
-
-
-* <b>`inputs`</b>: A list of `string` `Tensor` objects containing serialized `Summary`
- protocol buffers.
-* <b>`collections`</b>: Optional list of graph collections keys. The new summary op is
- added to these collections. Defaults to `[]`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A scalar `Tensor` of type `string`. The serialized `Summary` protocol
- buffer resulting from the merging.
-
-
-- - -
-
-### `tf.summary.merge_all(key='summaries')` {#merge_all}
-
-Merges all summaries collected in the default graph.
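-
-A hedged sketch of the usual pattern: build individual summaries, merge them
-once, and evaluate the single merged op in the training loop (`loss`,
-`weights`, `train_op`, `sess`, and `step` are hypothetical):
-
-```python
-tf.summary.scalar('loss', loss)           # collected into GraphKeys.SUMMARIES
-tf.summary.histogram('weights', weights)
-merged = tf.summary.merge_all()
-
-writer = tf.summary.FileWriter('/tmp/logs', sess.graph)
-summary_str, _ = sess.run([merged, train_op])
-writer.add_summary(summary_str, global_step=step)
-```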
-
-##### Args:
-
-
-* <b>`key`</b>: `GraphKey` used to collect the summaries. Defaults to
- `GraphKeys.SUMMARIES`.
-
-##### Returns:
-
- If no summaries were collected, returns None. Otherwise returns a scalar
- `Tensor` of type `string` containing the serialized `Summary` protocol
- buffer resulting from the merging.
-
-
-- - -
-
-### `tf.summary.get_summary_description(node_def)` {#get_summary_description}
-
-Given a TensorSummary node_def, retrieve its SummaryDescription.
-
-When a Summary op is instantiated, a SummaryDescription of associated
-metadata is stored in its NodeDef. This method retrieves the description.
-
-##### Args:
-
-
-* <b>`node_def`</b>: the node_def_pb2.NodeDef of a TensorSummary op
-
-##### Returns:
-
- a summary_pb2.SummaryDescription
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the node is not a summary op.
-
-
-
-## Other Functions and Classes
-- - -
-
-### `class tf.summary.SummaryDescription` {#SummaryDescription}
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ByteSize()` {#SummaryDescription.ByteSize}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.Clear()` {#SummaryDescription.Clear}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ClearExtension(extension_handle)` {#SummaryDescription.ClearExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ClearField(field_name)` {#SummaryDescription.ClearField}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.CopyFrom(other_msg)` {#SummaryDescription.CopyFrom}
-
-Copies the content of the specified message into the current message.
-
-The method clears the current message and then merges the specified
-message using MergeFrom.
-
-##### Args:
-
-
-* <b>`other_msg`</b>: Message to copy into the current one.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.DiscardUnknownFields()` {#SummaryDescription.DiscardUnknownFields}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.FindInitializationErrors()` {#SummaryDescription.FindInitializationErrors}
-
-Finds required fields which are not initialized.
-
-##### Returns:
-
- A list of strings. Each string is a path to an uninitialized field from
- the top-level message, e.g. "foo.bar[5].baz".
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.FromString(s)` {#SummaryDescription.FromString}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.HasExtension(extension_handle)` {#SummaryDescription.HasExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.HasField(field_name)` {#SummaryDescription.HasField}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.IsInitialized(errors=None)` {#SummaryDescription.IsInitialized}
-
-Checks if all required fields of a message are set.
-
-##### Args:
-
-
-* <b>`errors`</b>: A list which, if provided, will be populated with the field
- paths of all missing required fields.
-
-##### Returns:
-
- True iff the specified message has all required fields set.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ListFields()` {#SummaryDescription.ListFields}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.MergeFrom(msg)` {#SummaryDescription.MergeFrom}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.MergeFromString(serialized)` {#SummaryDescription.MergeFromString}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.ParseFromString(serialized)` {#SummaryDescription.ParseFromString}
-
-Parse serialized protocol buffer data into this message.
-
-Like MergeFromString(), except we clear the object first and
-do not return the value that MergeFromString returns.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.RegisterExtension(extension_handle)` {#SummaryDescription.RegisterExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.SerializePartialToString()` {#SummaryDescription.SerializePartialToString}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.SerializeToString()` {#SummaryDescription.SerializeToString}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.SetInParent()` {#SummaryDescription.SetInParent}
-
-Sets the _cached_byte_size_dirty bit to true,
-and propagates this to our listener iff this was a state change.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.WhichOneof(oneof_name)` {#SummaryDescription.WhichOneof}
-
-Returns the name of the currently set field inside a oneof, or None.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__deepcopy__(memo=None)` {#SummaryDescription.__deepcopy__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__eq__(other)` {#SummaryDescription.__eq__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__getstate__()` {#SummaryDescription.__getstate__}
-
-Support the pickle protocol.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__hash__()` {#SummaryDescription.__hash__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__init__(**kwargs)` {#SummaryDescription.__init__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__ne__(other_msg)` {#SummaryDescription.__ne__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__repr__()` {#SummaryDescription.__repr__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__setstate__(state)` {#SummaryDescription.__setstate__}
-
-Support the pickle protocol.
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__str__()` {#SummaryDescription.__str__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.__unicode__()` {#SummaryDescription.__unicode__}
-
-
-
-
-- - -
-
-#### `tf.summary.SummaryDescription.type_hint` {#SummaryDescription.type_hint}
-
-Magic attribute generated for "type_hint" proto field.
-
-
-
-- - -
-
-### `class tf.summary.TaggedRunMetadata` {#TaggedRunMetadata}
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ByteSize()` {#TaggedRunMetadata.ByteSize}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.Clear()` {#TaggedRunMetadata.Clear}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ClearExtension(extension_handle)` {#TaggedRunMetadata.ClearExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ClearField(field_name)` {#TaggedRunMetadata.ClearField}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.CopyFrom(other_msg)` {#TaggedRunMetadata.CopyFrom}
-
-Copies the content of the specified message into the current message.
-
-The method clears the current message and then merges the specified
-message using MergeFrom.
-
-##### Args:
-
-
-* <b>`other_msg`</b>: Message to copy into the current one.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.DiscardUnknownFields()` {#TaggedRunMetadata.DiscardUnknownFields}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.FindInitializationErrors()` {#TaggedRunMetadata.FindInitializationErrors}
-
-Finds required fields which are not initialized.
-
-##### Returns:
-
- A list of strings. Each string is a path to an uninitialized field from
- the top-level message, e.g. "foo.bar[5].baz".
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.FromString(s)` {#TaggedRunMetadata.FromString}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.HasExtension(extension_handle)` {#TaggedRunMetadata.HasExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.HasField(field_name)` {#TaggedRunMetadata.HasField}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.IsInitialized(errors=None)` {#TaggedRunMetadata.IsInitialized}
-
-Checks if all required fields of a message are set.
-
-##### Args:
-
-
-* <b>`errors`</b>: A list which, if provided, will be populated with the field
- paths of all missing required fields.
-
-##### Returns:
-
- True iff the specified message has all required fields set.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ListFields()` {#TaggedRunMetadata.ListFields}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.MergeFrom(msg)` {#TaggedRunMetadata.MergeFrom}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.MergeFromString(serialized)` {#TaggedRunMetadata.MergeFromString}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.ParseFromString(serialized)` {#TaggedRunMetadata.ParseFromString}
-
-Parse serialized protocol buffer data into this message.
-
-Like MergeFromString(), except we clear the object first and
-do not return the value that MergeFromString returns.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.RegisterExtension(extension_handle)` {#TaggedRunMetadata.RegisterExtension}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.SerializePartialToString()` {#TaggedRunMetadata.SerializePartialToString}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.SerializeToString()` {#TaggedRunMetadata.SerializeToString}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.SetInParent()` {#TaggedRunMetadata.SetInParent}
-
-Sets the _cached_byte_size_dirty bit to true,
-and propagates this to our listener iff this was a state change.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.WhichOneof(oneof_name)` {#TaggedRunMetadata.WhichOneof}
-
-Returns the name of the currently set field inside a oneof, or None.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__deepcopy__(memo=None)` {#TaggedRunMetadata.__deepcopy__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__eq__(other)` {#TaggedRunMetadata.__eq__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__getstate__()` {#TaggedRunMetadata.__getstate__}
-
-Support the pickle protocol.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__hash__()` {#TaggedRunMetadata.__hash__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__init__(**kwargs)` {#TaggedRunMetadata.__init__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__ne__(other_msg)` {#TaggedRunMetadata.__ne__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__repr__()` {#TaggedRunMetadata.__repr__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__setstate__(state)` {#TaggedRunMetadata.__setstate__}
-
-Support the pickle protocol.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__str__()` {#TaggedRunMetadata.__str__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.__unicode__()` {#TaggedRunMetadata.__unicode__}
-
-
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.run_metadata` {#TaggedRunMetadata.run_metadata}
-
-Magic attribute generated for "run_metadata" proto field.
-
-
-- - -
-
-#### `tf.summary.TaggedRunMetadata.tag` {#TaggedRunMetadata.tag}
-
-Magic attribute generated for "tag" proto field.
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/tensor_array_ops.md b/tensorflow/g3doc/api_docs/python/tensor_array_ops.md
deleted file mode 100644
index b605a3a199..0000000000
--- a/tensorflow/g3doc/api_docs/python/tensor_array_ops.md
+++ /dev/null
@@ -1,297 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# TensorArray Operations
-
-Note: Functions taking `Tensor` arguments can also take anything accepted by
-[`tf.convert_to_tensor`](framework.md#convert_to_tensor).
-
-[TOC]
-
-TensorArray: a dynamically sized array of Tensors.
-
-- - -
-
-### `class tf.TensorArray` {#TensorArray}
-
-Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.
-
-This class is meant to be used with dynamic iteration primitives such as
-`while_loop` and `map_fn`. It supports gradient back-propagation via special
-"flow" control flow dependencies.
-- - -
-
-#### `tf.TensorArray.__init__(dtype, size=None, dynamic_size=None, clear_after_read=None, tensor_array_name=None, handle=None, flow=None, infer_shape=True, element_shape=None, name=None)` {#TensorArray.__init__}
-
-Construct a new TensorArray or wrap an existing TensorArray handle.
-
-A note about the parameter `name`:
-
-The name of the `TensorArray` (even if passed in) is uniquified: each time
-a new `TensorArray` is created at runtime it is assigned its own name for
-the duration of the run. This avoids name collisions if a `TensorArray`
-is created within a `while_loop`.
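-
-As a brief sketch of the intended pattern, a `TensorArray` is written inside
-a `tf.while_loop` and stacked afterwards:
-
-```python
-n = 5
-ta = tf.TensorArray(dtype=tf.float32, size=n)
-
-def body(i, ta):
-  # Each write returns a new TensorArray object that carries the flow.
-  return i + 1, ta.write(i, tf.cast(i, tf.float32) * 2.0)
-
-_, ta_final = tf.while_loop(lambda i, _: i < n, body, [0, ta])
-result = ta_final.stack()  # => [0., 2., 4., 6., 8.]
-```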
-
-##### Args:
-
-
-* <b>`dtype`</b>: (required) data type of the TensorArray.
-* <b>`size`</b>: (optional) int32 scalar `Tensor`: the size of the TensorArray.
- Required if handle is not provided.
-* <b>`dynamic_size`</b>: (optional) Python bool: If true, writes to the TensorArray
- can grow the TensorArray past its initial size. Default: False.
-* <b>`clear_after_read`</b>: Boolean (optional, default: True). If True, clear
- TensorArray values after reading them. This disables read-many
- semantics, but allows early release of memory.
-* <b>`tensor_array_name`</b>: (optional) Python string: the name of the TensorArray.
- This is used when creating the TensorArray handle. If this value is
- set, handle should be None.
-* <b>`handle`</b>: (optional) A `Tensor` handle to an existing TensorArray. If this
- is set, tensor_array_name should be None.
-* <b>`flow`</b>: (optional) A float `Tensor` scalar coming from an existing
- `TensorArray.flow`.
-* <b>`infer_shape`</b>: (optional, default: True) If True, shape inference
- is enabled. In this case, all elements must have the same shape.
-* <b>`element_shape`</b>: (optional, default: None) A `TensorShape` object specifying
- the shape constraints of each of the elements of the TensorArray.
- Need not be fully defined.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if both handle and tensor_array_name are provided.
-* <b>`TypeError`</b>: if handle is provided but is not a Tensor.
-
-
-- - -
-
-#### `tf.TensorArray.close(name=None)` {#TensorArray.close}
-
-Close the current TensorArray.
-
-
-- - -
-
-#### `tf.TensorArray.concat(name=None)` {#TensorArray.concat}
-
-Return the values in the TensorArray as a concatenated `Tensor`.
-
-All of the values must have been written, their ranks must match, and
-their shapes must all match for all dimensions except the first.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- All the tensors in the TensorArray concatenated into one tensor.
-
-
-- - -
-
-#### `tf.TensorArray.dtype` {#TensorArray.dtype}
-
-The data type of this TensorArray.
-
-
-- - -
-
-#### `tf.TensorArray.flow` {#TensorArray.flow}
-
-The flow `Tensor` forcing ops leading to this TensorArray state.
-
-
-- - -
-
-#### `tf.TensorArray.gather(indices, name=None)` {#TensorArray.gather}
-
-Return selected values in the TensorArray as a packed `Tensor`.
-
-All of the selected values must have been written and their shapes
-must all match.
-
-##### Args:
-
-
-* <b>`indices`</b>: A `1-D` `Tensor` taking values in `[0, max_value)`. If
- the `TensorArray` is not dynamic, `max_value=size()`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tensors in the `TensorArray` selected by `indices`, packed into one tensor.
-
-
-- - -
-
-#### `tf.TensorArray.grad(source, flow=None, name=None)` {#TensorArray.grad}
-
-
-
-
-- - -
-
-#### `tf.TensorArray.handle` {#TensorArray.handle}
-
-The reference to the TensorArray.
-
-
-- - -
-
-#### `tf.TensorArray.identity()` {#TensorArray.identity}
-
-Returns a TensorArray with the same content and properties.
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the control dependencies
- from the contexts will become control dependencies for writes, reads, etc.
- Use this object for all subsequent operations.
-
-
-- - -
-
-#### `tf.TensorArray.read(index, name=None)` {#TensorArray.read}
-
-Read the value at location `index` in the TensorArray.
-
-##### Args:
-
-
-* <b>`index`</b>: 0-D. int32 tensor with the index to read from.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- The tensor at index `index`.
-
-
-- - -
-
-#### `tf.TensorArray.scatter(indices, value, name=None)` {#TensorArray.scatter}
-
-Scatter the values of a `Tensor` in specific indices of a `TensorArray`.
-
-##### Args:
-
-
-* <b>`indices`</b>: A `1-D` `Tensor` taking values in `[0, max_value)`. If
- the `TensorArray` is not dynamic, `max_value=size()`.
-* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to unpack.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the scatter occurs.
- Use this object for all subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape inference fails.
-
-
-- - -
-
-#### `tf.TensorArray.size(name=None)` {#TensorArray.size}
-
-Return the size of the TensorArray.
-
-
-- - -
-
-#### `tf.TensorArray.split(value, lengths, name=None)` {#TensorArray.split}
-
-Split the values of a `Tensor` into the TensorArray.
-
-##### Args:
-
-
-* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to split.
-* <b>`lengths`</b>: 1-D. int32 vector with the lengths to use when splitting
- `value` along its first dimension.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the split occurs.
- Use this object for all subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape inference fails.
-
-
-- - -
-
-#### `tf.TensorArray.stack(name=None)` {#TensorArray.stack}
-
-Return the values in the TensorArray as a stacked `Tensor`.
-
-All of the values must have been written and their shapes must all match.
-If input shapes have rank-`R`, then output shape will have rank-`(R+1)`.
-
-##### Args:
-
-
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- All the tensors in the TensorArray stacked into one tensor.
-
-
-- - -
-
-#### `tf.TensorArray.unstack(value, name=None)` {#TensorArray.unstack}
-
-Unstack the values of a `Tensor` in the TensorArray.
-
-If input value shapes have rank-`R`, then the output TensorArray will
-contain elements whose shapes are rank-`(R-1)`.
-
-##### Args:
-
-
-* <b>`value`</b>: (N+1)-D. Tensor of type `dtype`. The Tensor to unstack.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the unstack occurs.
- Use this object for all subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if the shape inference fails.
-
-
-- - -
-
-#### `tf.TensorArray.write(index, value, name=None)` {#TensorArray.write}
-
-Write `value` into index `index` of the TensorArray.
-
-##### Args:
-
-
-* <b>`index`</b>: 0-D. int32 scalar with the index to write to.
-* <b>`value`</b>: N-D. Tensor of type `dtype`. The Tensor to write to this index.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A new TensorArray object with flow that ensures the write occurs.
- Use this object for all subsequent operations.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if there are more writers than specified.
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/test.md b/tensorflow/g3doc/api_docs/python/test.md
deleted file mode 100644
index 189a368ad2..0000000000
--- a/tensorflow/g3doc/api_docs/python/test.md
+++ /dev/null
@@ -1,1133 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Testing
-[TOC]
-
-Testing. See the @{$python/test} guide.
-
-- - -
-
-### `tf.test.main(argv=None)` {#main}
-
-Runs all unit tests.
-
-
-- - -
-
-### `class tf.test.TestCase` {#TestCase}
-
-Base class for tests that need to test TensorFlow.
-- - -
-
-#### `tf.test.TestCase.__call__(*args, **kwds)` {#TestCase.__call__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__eq__(other)` {#TestCase.__eq__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__hash__()` {#TestCase.__hash__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__init__(methodName='runTest')` {#TestCase.__init__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__ne__(other)` {#TestCase.__ne__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__repr__()` {#TestCase.__repr__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.__str__()` {#TestCase.__str__}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.addCleanup(function, *args, **kwargs)` {#TestCase.addCleanup}
-
-Add a function, with arguments, to be called when the test is
-completed. Functions added are called on a LIFO basis and are
-called after tearDown on test failure or success.
-
-Cleanup items are called even if setUp fails (unlike tearDown).
-
-
-- - -
-
-#### `tf.test.TestCase.addTypeEqualityFunc(typeobj, function)` {#TestCase.addTypeEqualityFunc}
-
-Add a type specific assertEqual style function to compare a type.
-
-This method is for use by TestCase subclasses that need to register
-their own type equality functions to provide nicer error messages.
-
-##### Args:
-
-
-* <b>`typeobj`</b>: The data type to call this function on when both values
- are of the same type in assertEqual().
-* <b>`function`</b>: The callable taking two arguments and an optional
- msg= argument that raises self.failureException with a
- useful error message when the two arguments are not equal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertAllClose(a, b, rtol=1e-06, atol=1e-06)` {#TestCase.assertAllClose}
-
-Asserts that two numpy arrays have near values.
-
-##### Args:
-
-
-* <b>`a`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`b`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`rtol`</b>: relative tolerance
-* <b>`atol`</b>: absolute tolerance
-
-
-- - -
-
-#### `tf.test.TestCase.assertAllCloseAccordingToType(a, b, rtol=1e-06, atol=1e-06, float_rtol=1e-06, float_atol=1e-06, half_rtol=0.001, half_atol=0.001)` {#TestCase.assertAllCloseAccordingToType}
-
-Like assertAllClose, but also suitable for comparing fp16 arrays.
-
-In particular, the tolerance is reduced to 1e-3 if at least
-one of the arguments is of type float16.
-
-##### Args:
-
-
-* <b>`a`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`b`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`rtol`</b>: relative tolerance
-* <b>`atol`</b>: absolute tolerance
-* <b>`float_rtol`</b>: relative tolerance for float32
-* <b>`float_atol`</b>: absolute tolerance for float32
-* <b>`half_rtol`</b>: relative tolerance for float16
-* <b>`half_atol`</b>: absolute tolerance for float16
-
-
-- - -
-
-#### `tf.test.TestCase.assertAllEqual(a, b)` {#TestCase.assertAllEqual}
-
-Asserts that two numpy arrays have the same values.
-
-##### Args:
-
-
-* <b>`a`</b>: a numpy ndarray or anything that can be converted to one.
-* <b>`b`</b>: a numpy ndarray or anything that can be converted to one.
-
-
-- - -
-
-#### `tf.test.TestCase.assertAlmostEqual(first, second, places=None, msg=None, delta=None)` {#TestCase.assertAlmostEqual}
-
-Fail if the two objects are unequal as determined by their
-difference rounded to the given number of decimal places
-(default 7) and comparing to zero, or by comparing that the
-difference between the two objects is more than the given delta.
-
-Note that decimal places (from zero) are usually not the same
-as significant digits (measured from the most significant digit).
-
-If the two objects compare equal then they will automatically
-compare almost equal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertAlmostEquals(first, second, places=None, msg=None, delta=None)` {#TestCase.assertAlmostEquals}
-
-Fail if the two objects are unequal as determined by their
-difference rounded to the given number of decimal places
-(default 7) and comparing to zero, or by comparing that the
-difference between the two objects is more than the given delta.
-
-Note that decimal places (from zero) are usually not the same
-as significant digits (measured from the most significant digit).
-
-If the two objects compare equal then they will automatically
-compare almost equal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertArrayNear(farray1, farray2, err)` {#TestCase.assertArrayNear}
-
-Asserts that two float arrays are near each other.
-
-Checks that for all elements of farray1 and farray2
-|f1 - f2| < err. Asserts a test failure if not.
-
-##### Args:
-
-
-* <b>`farray1`</b>: a list of float values.
-* <b>`farray2`</b>: a list of float values.
-* <b>`err`</b>: a float value.
-
-
-- - -
-
-#### `tf.test.TestCase.assertDeviceEqual(device1, device2)` {#TestCase.assertDeviceEqual}
-
-Asserts that the two given devices are the same.
-
-##### Args:
-
-
-* <b>`device1`</b>: A string device name or TensorFlow `DeviceSpec` object.
-* <b>`device2`</b>: A string device name or TensorFlow `DeviceSpec` object.
-
-
-- - -
-
-#### `tf.test.TestCase.assertDictContainsSubset(expected, actual, msg=None)` {#TestCase.assertDictContainsSubset}
-
-Checks whether actual is a superset of expected.
-
-
-- - -
-
-#### `tf.test.TestCase.assertDictEqual(d1, d2, msg=None)` {#TestCase.assertDictEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.assertEqual(first, second, msg=None)` {#TestCase.assertEqual}
-
-Fail if the two objects are unequal as determined by the '=='
-operator.
-
-
-- - -
-
-#### `tf.test.TestCase.assertEquals(first, second, msg=None)` {#TestCase.assertEquals}
-
-Fail if the two objects are unequal as determined by the '=='
-operator.
-
-
-- - -
-
-#### `tf.test.TestCase.assertFalse(expr, msg=None)` {#TestCase.assertFalse}
-
-Check that the expression is false.
-
-
-- - -
-
-#### `tf.test.TestCase.assertGreater(a, b, msg=None)` {#TestCase.assertGreater}
-
-Just like self.assertTrue(a > b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertGreaterEqual(a, b, msg=None)` {#TestCase.assertGreaterEqual}
-
-Just like self.assertTrue(a >= b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIn(member, container, msg=None)` {#TestCase.assertIn}
-
-Just like self.assertTrue(a in b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIs(expr1, expr2, msg=None)` {#TestCase.assertIs}
-
-Just like self.assertTrue(a is b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIsInstance(obj, cls, msg=None)` {#TestCase.assertIsInstance}
-
-Same as self.assertTrue(isinstance(obj, cls)), with a nicer
-default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIsNone(obj, msg=None)` {#TestCase.assertIsNone}
-
-Same as self.assertTrue(obj is None), with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIsNot(expr1, expr2, msg=None)` {#TestCase.assertIsNot}
-
-Just like self.assertTrue(a is not b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertIsNotNone(obj, msg=None)` {#TestCase.assertIsNotNone}
-
-Included for symmetry with assertIsNone.
-
-
-- - -
-
-#### `tf.test.TestCase.assertItemsEqual(expected_seq, actual_seq, msg=None)` {#TestCase.assertItemsEqual}
-
-An unordered, sequence-specific comparison. It asserts that
-actual_seq and expected_seq have the same element counts.
-Equivalent to::
-
- self.assertEqual(Counter(iter(actual_seq)),
- Counter(iter(expected_seq)))
-
-Asserts that each element has the same count in both sequences.
-
-##### Example:
-
- - [0, 1, 1] and [1, 0, 1] compare equal.
- - [0, 0, 1] and [0, 1] compare unequal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertLess(a, b, msg=None)` {#TestCase.assertLess}
-
-Just like self.assertTrue(a < b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertLessEqual(a, b, msg=None)` {#TestCase.assertLessEqual}
-
-Just like self.assertTrue(a <= b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertListEqual(list1, list2, msg=None)` {#TestCase.assertListEqual}
-
-A list-specific equality assertion.
-
-##### Args:
-
-
-* <b>`list1`</b>: The first list to compare.
-* <b>`list2`</b>: The second list to compare.
-* <b>`msg`</b>: Optional message to use on failure instead of a list of
- differences.
-
-
-- - -
-
-#### `tf.test.TestCase.assertMultiLineEqual(first, second, msg=None)` {#TestCase.assertMultiLineEqual}
-
-Assert that two multi-line strings are equal.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNDArrayNear(ndarray1, ndarray2, err)` {#TestCase.assertNDArrayNear}
-
-Asserts that two numpy arrays have near values.
-
-##### Args:
-
-
-* <b>`ndarray1`</b>: a numpy ndarray.
-* <b>`ndarray2`</b>: a numpy ndarray.
-* <b>`err`</b>: a float. The maximum absolute difference allowed.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNear(f1, f2, err, msg=None)` {#TestCase.assertNear}
-
-Asserts that two floats are near each other.
-
-Checks that |f1 - f2| < err and asserts a test failure
-if not.
-
-##### Args:
-
-
-* <b>`f1`</b>: A float value.
-* <b>`f2`</b>: A float value.
-* <b>`err`</b>: A float value.
-* <b>`msg`</b>: An optional string message to append to the failure message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)` {#TestCase.assertNotAlmostEqual}
-
-Fail if the two objects are equal as determined by their
-difference rounded to the given number of decimal places
-(default 7) and comparing to zero, or by comparing that the
-difference between the two objects is less than the given delta.
-
-Note that decimal places (from zero) are usually not the same
-as significant digits (measured from the most significant digit).
-
-Objects that are equal automatically fail.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotAlmostEquals(first, second, places=None, msg=None, delta=None)` {#TestCase.assertNotAlmostEquals}
-
-Fail if the two objects are equal as determined by their
-difference rounded to the given number of decimal places
-(default 7) and comparing to zero, or by comparing that the
-difference between the two objects is less than the given delta.
-
-Note that decimal places (from zero) are usually not the same
-as significant digits (measured from the most significant digit).
-
-Objects that are equal automatically fail.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotEqual(first, second, msg=None)` {#TestCase.assertNotEqual}
-
-Fail if the two objects are equal as determined by the '!='
-operator.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotEquals(first, second, msg=None)` {#TestCase.assertNotEquals}
-
-Fail if the two objects are equal as determined by the '!='
-operator.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotIn(member, container, msg=None)` {#TestCase.assertNotIn}
-
-Just like self.assertTrue(a not in b), but with a nicer default message.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotIsInstance(obj, cls, msg=None)` {#TestCase.assertNotIsInstance}
-
-Included for symmetry with assertIsInstance.
-
-
-- - -
-
-#### `tf.test.TestCase.assertNotRegexpMatches(text, unexpected_regexp, msg=None)` {#TestCase.assertNotRegexpMatches}
-
-Fail the test if the text matches the regular expression.
-
-
-- - -
-
-#### `tf.test.TestCase.assertProtoEquals(expected_message_maybe_ascii, message)` {#TestCase.assertProtoEquals}
-
-Asserts that message is the same as the parsed expected_message_maybe_ascii.
-
-Creates another proto message of the same type as `message`, parses the
-ASCII form into it, and then compares the two using self._AssertProtoEqual().
-
-##### Args:
-
-
-* <b>`expected_message_maybe_ascii`</b>: proto message in original or ascii form
-* <b>`message`</b>: the message to validate
-
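-For example, a minimal sketch (the proto type chosen here is arbitrary; any
-protobuf message works):
-
-```python
-import tensorflow as tf
-from tensorflow.core.framework import tensor_shape_pb2
-
-
-class ProtoTest(tf.test.TestCase):
-
-  def testShapeProto(self):
-    shape = tensor_shape_pb2.TensorShapeProto(
-        dim=[tensor_shape_pb2.TensorShapeProto.Dim(size=2)])
-    # The expected message may be given in ASCII (text) form.
-    self.assertProtoEquals("dim { size: 2 }", shape)
-```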
-
-- - -
-
-#### `tf.test.TestCase.assertProtoEqualsVersion(expected, actual, producer=21, min_consumer=0)` {#TestCase.assertProtoEqualsVersion}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.assertRaises(excClass, callableObj=None, *args, **kwargs)` {#TestCase.assertRaises}
-
-Fail unless an exception of class excClass is raised
-by callableObj when invoked with arguments args and keyword
-arguments kwargs. If a different type of exception is
-raised, it will not be caught, and the test case will be
-deemed to have suffered an error, exactly as for an
-unexpected exception.
-
-If called with callableObj omitted or None, will return a
-context object used like this::
-
- with self.assertRaises(SomeException):
- do_something()
-
-The context manager keeps a reference to the exception as
-the 'exception' attribute. This allows you to inspect the
-exception after the assertion::
-
- with self.assertRaises(SomeException) as cm:
- do_something()
- the_exception = cm.exception
- self.assertEqual(the_exception.error_code, 3)
-
-
-- - -
-
-#### `tf.test.TestCase.assertRaisesOpError(expected_err_re_or_predicate)` {#TestCase.assertRaisesOpError}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.assertRaisesRegexp(expected_exception, expected_regexp, callable_obj=None, *args, **kwargs)` {#TestCase.assertRaisesRegexp}
-
-Asserts that the message in a raised exception matches a regexp.
-
-##### Args:
-
-
-* <b>`expected_exception`</b>: Exception class expected to be raised.
-* <b>`expected_regexp`</b>: Regexp (re pattern object or string) expected
- to be found in error message.
-* <b>`callable_obj`</b>: Function to be called.
-* <b>`args`</b>: Extra args.
-* <b>`kwargs`</b>: Extra kwargs.
-
-
-- - -
-
-#### `tf.test.TestCase.assertRaisesWithPredicateMatch(exception_type, expected_err_re_or_predicate)` {#TestCase.assertRaisesWithPredicateMatch}
-
-Returns a context manager to enclose code expected to raise an exception.
-
-If the exception is an OpError, the op stack is also included in the message
-predicate search.
-
-##### Args:
-
-
-* <b>`exception_type`</b>: The expected type of exception that should be raised.
-* <b>`expected_err_re_or_predicate`</b>: If this is callable, it should be a function
- of one argument that inspects the passed-in exception and
- returns True (success) or False (please fail the test). Otherwise, the
- error message is expected to match this regular expression partially.
-
-##### Returns:
-
- A context manager to surround code that is expected to raise an
- exception.
-
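-For example, a minimal sketch using the predicate form (names are
-illustrative):
-
-```python
-import tensorflow as tf
-
-
-class PredicateMatchTest(tf.test.TestCase):
-
-  def testPredicate(self):
-    with self.assertRaisesWithPredicateMatch(
-        ValueError, lambda e: "negative" in str(e)):
-      raise ValueError("negative values are not supported")
-```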
-
-- - -
-
-#### `tf.test.TestCase.assertRegexpMatches(text, expected_regexp, msg=None)` {#TestCase.assertRegexpMatches}
-
-Fail the test unless the text matches the regular expression.
-
-
-- - -
-
-#### `tf.test.TestCase.assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)` {#TestCase.assertSequenceEqual}
-
-An equality assertion for ordered sequences (like lists and tuples).
-
-For the purposes of this function, a valid ordered sequence type is one
-which can be indexed, has a length, and has an equality operator.
-
-##### Args:
-
-
-* <b>`seq1`</b>: The first sequence to compare.
-* <b>`seq2`</b>: The second sequence to compare.
-* <b>`seq_type`</b>: The expected datatype of the sequences, or None if no
- datatype should be enforced.
-* <b>`msg`</b>: Optional message to use on failure instead of a list of
- differences.
-
-
-- - -
-
-#### `tf.test.TestCase.assertSetEqual(set1, set2, msg=None)` {#TestCase.assertSetEqual}
-
-A set-specific equality assertion.
-
-##### Args:
-
-
-* <b>`set1`</b>: The first set to compare.
-* <b>`set2`</b>: The second set to compare.
-* <b>`msg`</b>: Optional message to use on failure instead of a list of
- differences.
-
-assertSetEqual uses duck-typing to support different types of sets, and
-is optimized for sets specifically (parameters must support a
-difference method).
-
-
-- - -
-
-#### `tf.test.TestCase.assertShapeEqual(np_array, tf_tensor)` {#TestCase.assertShapeEqual}
-
-Asserts that a Numpy ndarray and a TensorFlow tensor have the same shape.
-
-##### Args:
-
-
-* <b>`np_array`</b>: A Numpy ndarray or Numpy scalar.
-* <b>`tf_tensor`</b>: A Tensor.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the arguments have the wrong type.
-
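-For example, a minimal sketch (names are illustrative):
-
-```python
-import numpy as np
-import tensorflow as tf
-
-
-class ShapeTest(tf.test.TestCase):
-
-  def testSameShape(self):
-    np_array = np.zeros([2, 3])
-    tf_tensor = tf.zeros([2, 3])  # static shape known at graph-build time
-    self.assertShapeEqual(np_array, tf_tensor)
-```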
-
-- - -
-
-#### `tf.test.TestCase.assertStartsWith(actual, expected_start, msg=None)` {#TestCase.assertStartsWith}
-
-Assert that actual.startswith(expected_start) is True.
-
-##### Args:
-
-
-* <b>`actual`</b>: str
-* <b>`expected_start`</b>: str
-* <b>`msg`</b>: Optional message to report on failure.
-
-
-- - -
-
-#### `tf.test.TestCase.assertTrue(expr, msg=None)` {#TestCase.assertTrue}
-
-Check that the expression is true.
-
-
-- - -
-
-#### `tf.test.TestCase.assertTupleEqual(tuple1, tuple2, msg=None)` {#TestCase.assertTupleEqual}
-
-A tuple-specific equality assertion.
-
-##### Args:
-
-
-* <b>`tuple1`</b>: The first tuple to compare.
-* <b>`tuple2`</b>: The second tuple to compare.
-* <b>`msg`</b>: Optional message to use on failure instead of a list of
- differences.
-
-
-- - -
-
-#### `tf.test.TestCase.assert_(expr, msg=None)` {#TestCase.assert_}
-
-Check that the expression is true.
-
-
-- - -
-
-#### `tf.test.TestCase.checkedThread(target, args=None, kwargs=None)` {#TestCase.checkedThread}
-
-Returns a Thread wrapper that asserts 'target' completes successfully.
-
-This method should be used to create all threads in test cases, as
-otherwise there is a risk that a thread will silently fail, and/or
-assertions made in the thread will not be respected.
-
-##### Args:
-
-
-* <b>`target`</b>: A callable object to be executed in the thread.
-* <b>`args`</b>: The argument tuple for the target invocation. Defaults to ().
-* <b>`kwargs`</b>: A dictionary of keyword arguments for the target invocation.
- Defaults to {}.
-
-##### Returns:
-
- A wrapper for threading.Thread that supports start() and join() methods.
-
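-For example, a minimal sketch (names are illustrative):
-
-```python
-import tensorflow as tf
-
-
-class ThreadTest(tf.test.TestCase):
-
-  def testWorkerThread(self):
-    results = []
-
-    def worker(value):
-      # A failure here is reported when the thread is joined, instead of
-      # being silently swallowed as with a plain threading.Thread.
-      self.assertGreater(value, 0)
-      results.append(value * 2)
-
-    t = self.checkedThread(target=worker, args=(21,))
-    t.start()
-    t.join()
-    self.assertEqual(results, [42])
-```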
-
-- - -
-
-#### `tf.test.TestCase.countTestCases()` {#TestCase.countTestCases}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.debug()` {#TestCase.debug}
-
-Run the test without collecting errors in a TestResult
-
-
-- - -
-
-#### `tf.test.TestCase.defaultTestResult()` {#TestCase.defaultTestResult}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.doCleanups()` {#TestCase.doCleanups}
-
-Execute all cleanup functions. Normally called for you after
-tearDown.
-
-
-- - -
-
-#### `tf.test.TestCase.fail(msg=None)` {#TestCase.fail}
-
-Fail immediately, with the given message.
-
-
-- - -
-
-#### `tf.test.TestCase.failIf(*args, **kwargs)` {#TestCase.failIf}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failIfAlmostEqual(*args, **kwargs)` {#TestCase.failIfAlmostEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failIfEqual(*args, **kwargs)` {#TestCase.failIfEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failUnless(*args, **kwargs)` {#TestCase.failUnless}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failUnlessAlmostEqual(*args, **kwargs)` {#TestCase.failUnlessAlmostEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failUnlessEqual(*args, **kwargs)` {#TestCase.failUnlessEqual}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.failUnlessRaises(*args, **kwargs)` {#TestCase.failUnlessRaises}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.get_temp_dir()` {#TestCase.get_temp_dir}
-
-Returns a unique temporary directory for the test to use.
-
-Across different test runs, this method will return a different folder.
-This ensures that tests cannot pollute each other's environment
-across runs.
-
-##### Returns:
-
- string, the path to the unique temporary directory created for this test.
-
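-For example, a minimal sketch (names are illustrative):
-
-```python
-import os
-
-import tensorflow as tf
-
-
-class TempDirTest(tf.test.TestCase):
-
-  def testWriteScratchFile(self):
-    path = os.path.join(self.get_temp_dir(), "scratch.txt")
-    with open(path, "w") as f:
-      f.write("temporary test data")
-    self.assertTrue(os.path.exists(path))
-```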
-
-- - -
-
-#### `tf.test.TestCase.id()` {#TestCase.id}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.run(result=None)` {#TestCase.run}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.setUp()` {#TestCase.setUp}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.setUpClass(cls)` {#TestCase.setUpClass}
-
-Hook method for setting up class fixture before running tests in the class.
-
-
-- - -
-
-#### `tf.test.TestCase.shortDescription()` {#TestCase.shortDescription}
-
-Returns a one-line description of the test, or None if no
-description has been provided.
-
-The default implementation of this method returns the first line of
-the specified test method's docstring.
-
-
-- - -
-
-#### `tf.test.TestCase.skipTest(reason)` {#TestCase.skipTest}
-
-Skip this test.
-
-
-- - -
-
-#### `tf.test.TestCase.tearDown()` {#TestCase.tearDown}
-
-
-
-
-- - -
-
-#### `tf.test.TestCase.tearDownClass(cls)` {#TestCase.tearDownClass}
-
-Hook method for deconstructing the class fixture after running all tests in the class.
-
-
-- - -
-
-#### `tf.test.TestCase.test_session(graph=None, config=None, use_gpu=False, force_gpu=False)` {#TestCase.test_session}
-
-Returns a TensorFlow Session for use in executing tests.
-
-This method should be used for all functional tests.
-
-This method behaves differently from `session.Session`: for performance
-reasons `test_session` will by default (if `graph` is None) reuse the same
-session across tests. This means you may want to either call the function
-`reset_default_graph()` before tests, or, if creating an explicit new graph,
-pass it here (simply setting it with `as_default()` won't do it), which will
-trigger the creation of a new session.
-
-Use the `use_gpu` and `force_gpu` options to control where ops are run. If
-`force_gpu` is True, all ops are pinned to `/gpu:0`. Otherwise, if `use_gpu`
-is True, TensorFlow tries to run as many ops on the GPU as possible. If both
-`force_gpu` and `use_gpu` are False, all ops are pinned to the CPU.
-
-Example:
-
- class MyOperatorTest(test_util.TensorFlowTestCase):
- def testMyOperator(self):
- with self.test_session(use_gpu=True):
- valid_input = [1.0, 2.0, 3.0, 4.0, 5.0]
- result = MyOperator(valid_input).eval()
-        self.assertEqual(result, [1.0, 2.0, 3.0, 5.0, 8.0])
- invalid_input = [-1.0, 2.0, 7.0]
- with self.assertRaisesOpError("negative input not supported"):
- MyOperator(invalid_input).eval()
-
-##### Args:
-
-
-* <b>`graph`</b>: Optional graph to use during the returned session.
-* <b>`config`</b>: An optional config_pb2.ConfigProto to use to configure the
- session.
-* <b>`use_gpu`</b>: If True, attempt to run as many ops as possible on GPU.
-* <b>`force_gpu`</b>: If True, pin all ops to `/gpu:0`.
-
-##### Returns:
-
- A Session object that should be used as a context manager to surround
- the graph building and execution code in a test case.
-
-
-
-- - -
-
-### `tf.test.test_src_dir_path(relative_path)` {#test_src_dir_path}
-
-Creates an absolute test srcdir path given a relative path.
-
-##### Args:
-
-
-* <b>`relative_path`</b>: a path relative to tensorflow root.
- e.g. "core/platform".
-
-##### Returns:
-
-  An absolute path to the linked-in runfiles.
-
-
-- - -
-
-### `tf.test.assert_equal_graph_def(actual, expected, checkpoint_v2=False)` {#assert_equal_graph_def}
-
-Asserts that two `GraphDef`s are (mostly) the same.
-
-Compares two `GraphDef` protos for equality, ignoring versions and ordering of
-nodes, attrs, and control inputs. Node names are used to match up nodes
-between the graphs, so the naming of nodes must be consistent.
-
-##### Args:
-
-
-* <b>`actual`</b>: The `GraphDef` we have.
-* <b>`expected`</b>: The `GraphDef` we expected.
-* <b>`checkpoint_v2`</b>: boolean determining whether to ignore randomized attribute
- values that appear in V2 checkpoints.
-
-##### Raises:
-
-
-* <b>`AssertionError`</b>: If the `GraphDef`s do not match.
-* <b>`TypeError`</b>: If either argument is not a `GraphDef`.
-
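-For example, a minimal sketch:
-
-```python
-import tensorflow as tf
-
-g1 = tf.Graph()
-with g1.as_default():
-  tf.constant(1.0, name="c")
-
-g2 = tf.Graph()
-with g2.as_default():
-  tf.constant(1.0, name="c")
-
-# Passes: node names and attrs match, even though the two graphs are
-# distinct Python objects.
-tf.test.assert_equal_graph_def(g1.as_graph_def(), g2.as_graph_def())
-```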
-
-- - -
-
-### `tf.test.get_temp_dir()` {#get_temp_dir}
-
-Returns a temporary directory for use during tests.
-
-There is no need to delete the directory after the test.
-
-##### Returns:
-
- The temporary directory.
-
-
-- - -
-
-### `tf.test.is_built_with_cuda()` {#is_built_with_cuda}
-
-Returns whether TensorFlow was built with CUDA (GPU) support.
-
-
-- - -
-
-### `tf.test.is_gpu_available(cuda_only=False)` {#is_gpu_available}
-
-Returns whether TensorFlow can access a GPU.
-
-##### Args:
-
-
-* <b>`cuda_only`</b>: limit the search to CUDA GPUs.
-
-##### Returns:
-
-  True iff a GPU device of the requested kind is available.
-
-
-- - -
-
-### `tf.test.gpu_device_name()` {#gpu_device_name}
-
-Returns the name of a GPU device if available or the empty string.
-
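-For example, the three GPU helpers above can be combined to pick a device
-(a minimal sketch):
-
-```python
-import tensorflow as tf
-
-if tf.test.is_built_with_cuda() and tf.test.is_gpu_available(cuda_only=True):
-  device = tf.test.gpu_device_name()  # e.g. "/gpu:0"
-else:
-  device = "/cpu:0"
-
-with tf.device(device):
-  doubled = tf.constant([1.0, 2.0]) * 2.0
-```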
-
-- - -
-
-### `tf.test.compute_gradient(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None, extra_feed_dict=None)` {#compute_gradient}
-
-Computes and returns the theoretical and numerical Jacobian.
-
-If `x` or `y` is complex, the Jacobian will still be real but the
-corresponding Jacobian dimension(s) will be twice as large. This is required
-even if both input and output are complex since TensorFlow graphs are not
-necessarily holomorphic, and may have gradients not expressible as complex
-numbers. For example, if `x` is complex with shape `[m]` and `y` is complex
-with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with
-
- J[:m, :n] = d(Re y)/d(Re x)
- J[:m, n:] = d(Im y)/d(Re x)
- J[m:, :n] = d(Re y)/d(Im x)
- J[m:, n:] = d(Im y)/d(Im x)
-
-##### Args:
-
-
-* <b>`x`</b>: a tensor or list of tensors
-* <b>`x_shape`</b>: the dimensions of x as a tuple or an array of ints. If x is a list,
- then this is the list of shapes.
-
-* <b>`y`</b>: a tensor
-* <b>`y_shape`</b>: the dimensions of y as a tuple or an array of ints.
-* <b>`x_init_value`</b>: (optional) a numpy array of the same shape as "x"
- representing the initial value of x. If x is a list, this should be a list
- of numpy arrays. If this is none, the function will pick a random tensor
- as the initial value.
-* <b>`delta`</b>: (optional) the amount of perturbation.
-* <b>`init_targets`</b>: list of targets to run to initialize model params.
- TODO(mrry): remove this argument.
-* <b>`extra_feed_dict`</b>: dict that allows fixing specified tensor values
- during the Jacobian calculation.
-
-##### Returns:
-
- Two 2-d numpy arrays representing the theoretical and numerical
- Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns
- where "x_size" is the number of elements in x and "y_size" is the
- number of elements in y. If x is a list, returns a list of two numpy arrays.
-
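-For example, a minimal sketch for a real-valued op, assuming a default
-session is active (the op and shapes are illustrative):
-
-```python
-import numpy as np
-import tensorflow as tf
-
-x = tf.constant([1.0, 2.0], dtype=tf.float32)
-y = x * x
-with tf.Session():
-  theoretical, numerical = tf.test.compute_gradient(x, [2], y, [2])
-  # For y = x * x the Jacobian is diagonal with entries 2 * x, so the
-  # two 2x2 arrays should agree closely.
-  print(np.max(np.abs(theoretical - numerical)))
-```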
-
-- - -
-
-### `tf.test.compute_gradient_error(x, x_shape, y, y_shape, x_init_value=None, delta=0.001, init_targets=None, extra_feed_dict=None)` {#compute_gradient_error}
-
-Computes the gradient error.
-
-Computes the maximum error for dy/dx between the computed Jacobian and the
-numerically estimated Jacobian.
-
-This function will modify the tensors passed in, as it adds more operations
-and hence changes the consumers of the input tensors' operations.
-
-This function adds operations to the current session. To compute the error
-using a particular device, such as a GPU, use the standard methods for
-setting a device (e.g. using with sess.graph.device() or setting a device
-function in the session constructor).
-
-##### Args:
-
-
-* <b>`x`</b>: a tensor or list of tensors
-* <b>`x_shape`</b>: the dimensions of x as a tuple or an array of ints. If x is a list,
- then this is the list of shapes.
-
-* <b>`y`</b>: a tensor
-* <b>`y_shape`</b>: the dimensions of y as a tuple or an array of ints.
-* <b>`x_init_value`</b>: (optional) a numpy array of the same shape as "x"
- representing the initial value of x. If x is a list, this should be a list
- of numpy arrays. If this is none, the function will pick a random tensor
- as the initial value.
-* <b>`delta`</b>: (optional) the amount of perturbation.
-* <b>`init_targets`</b>: list of targets to run to initialize model params.
- TODO(mrry): Remove this argument.
-* <b>`extra_feed_dict`</b>: dict that allows fixing specified tensor values
- during the Jacobian calculation.
-
-##### Returns:
-
-  The maximum error between the two Jacobians.
-
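-For example, a minimal sketch of checking a registered gradient, assuming a
-default session is active (the op and the threshold are illustrative):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
-y = tf.nn.relu(x)
-with tf.Session():
-  error = tf.test.compute_gradient_error(x, [2, 2], y, [2, 2])
-# A small error (e.g. below 1e-4) indicates that the registered gradient
-# agrees with the numerical estimate.
-print(error)
-```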
-
-
-## Other Functions and Classes
-- - -
-
-### `class tf.test.Benchmark` {#Benchmark}
-
-Abstract class that provides helpers for TensorFlow benchmarks.
-- - -
-
-#### `tf.test.Benchmark.is_abstract(cls)` {#Benchmark.is_abstract}
-
-
-
-
-- - -
-
-#### `tf.test.Benchmark.report_benchmark(iters=None, cpu_time=None, wall_time=None, throughput=None, extras=None, name=None)` {#Benchmark.report_benchmark}
-
-Report a benchmark.
-
-##### Args:
-
-
-* <b>`iters`</b>: (optional) How many iterations were run
-* <b>`cpu_time`</b>: (optional) Total cpu time in seconds
-* <b>`wall_time`</b>: (optional) Total wall time in seconds
-* <b>`throughput`</b>: (optional) Throughput (in MB/s)
-* <b>`extras`</b>: (optional) Dict mapping string keys to additional benchmark info.
- Values may be either floats or values that are convertible to strings.
-* <b>`name`</b>: (optional) Override the BenchmarkEntry name with `name`.
- Otherwise it is inferred from the top-level method name.
-
-
-- - -
-
-#### `tf.test.Benchmark.run_op_benchmark(sess, op_or_tensor, feed_dict=None, burn_iters=2, min_iters=10, store_trace=False, store_memory_usage=True, name=None, extras=None, mbs=0)` {#Benchmark.run_op_benchmark}
-
-Run an op or tensor in the given session. Report the results.
-
-##### Args:
-
-
-* <b>`sess`</b>: `Session` object to use for timing.
-* <b>`op_or_tensor`</b>: `Operation` or `Tensor` to benchmark.
-* <b>`feed_dict`</b>: A `dict` of values to feed for each op iteration (see the
- `feed_dict` parameter of `Session.run`).
-* <b>`burn_iters`</b>: Number of burn-in iterations to run.
-* <b>`min_iters`</b>: Minimum number of iterations to use for timing.
-* <b>`store_trace`</b>: Boolean, whether to run an extra untimed iteration and
- store the trace of iteration in the benchmark report.
- The trace will be stored as a string in Google Chrome trace format
- in the extras field "full_trace_chrome_format".
-* <b>`store_memory_usage`</b>: Boolean, whether to run an extra untimed iteration,
- calculate memory usage, and store that in extras fields.
-* <b>`name`</b>: (optional) Override the BenchmarkEntry name with `name`.
- Otherwise it is inferred from the top-level method name.
-* <b>`extras`</b>: (optional) Dict mapping string keys to additional benchmark info.
- Values may be either floats or values that are convertible to strings.
-* <b>`mbs`</b>: (optional) The number of megabytes moved by this op, used to
- calculate the ops throughput.
-
-##### Returns:
-
- A `dict` containing the key-value pairs that were passed to
- `report_benchmark`.
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/tf_debug.md b/tensorflow/g3doc/api_docs/python/tf_debug.md
deleted file mode 100644
index 38a082d408..0000000000
--- a/tensorflow/g3doc/api_docs/python/tf_debug.md
+++ /dev/null
@@ -1,1659 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# TensorFlow Debugger
-[TOC]
-
-Public Python API of TensorFlow Debugger (tfdbg).
-
-See the @{$python/tfdbg} guide.
-
-- - -
-
-### `tf_debug.add_debug_tensor_watch(run_options, node_name, output_slot=0, debug_ops='DebugIdentity', debug_urls=None, global_step=-1)` {#add_debug_tensor_watch}
-
-Add watch on a `Tensor` to `RunOptions`.
-
-N.B.: Under certain circumstances, the `Tensor` may not be actually watched
- (e.g., if the node of the `Tensor` is constant-folded during runtime).
-
-##### Args:
-
-
-* <b>`run_options`</b>: An instance of `config_pb2.RunOptions` to be modified.
-* <b>`node_name`</b>: (`str`) name of the node to watch.
-* <b>`output_slot`</b>: (`int`) output slot index of the tensor from the watched node.
-* <b>`debug_ops`</b>: (`str` or `list` of `str`) name(s) of the debug op(s). Can be a
- `list` of `str` or a single `str`. The latter case is equivalent to a
- `list` of `str` with only one element.
-* <b>`debug_urls`</b>: (`str` or `list` of `str`) URL(s) to send debug values to,
- e.g., `file:///tmp/tfdbg_dump_1`, `grpc://localhost:12345`.
-* <b>`global_step`</b>: (`int`) Optional global_step count for this debug tensor
- watch.
-
-
-- - -
-
-### `tf_debug.watch_graph(run_options, graph, debug_ops='DebugIdentity', debug_urls=None, node_name_regex_whitelist=None, op_type_regex_whitelist=None, global_step=-1)` {#watch_graph}
-
-Add debug watches to `RunOptions` for a TensorFlow graph.
-
-To watch all `Tensor`s on the graph, let both `node_name_regex_whitelist`
-and `op_type_regex_whitelist` be the default (`None`).
-
-N.B.: Under certain circumstances, not all specified `Tensor`s will be
- actually watched (e.g., nodes that are constant-folded during runtime will
- not be watched).
-
-##### Args:
-
-
-* <b>`run_options`</b>: An instance of `config_pb2.RunOptions` to be modified.
-* <b>`graph`</b>: An instance of `ops.Graph`.
-* <b>`debug_ops`</b>: (`str` or `list` of `str`) name(s) of the debug op(s) to use.
-* <b>`debug_urls`</b>: URLs to send debug values to. Can be a list of strings,
- a single string, or None. The case of a single string is equivalent to
- a list consisting of a single string, e.g., `file:///tmp/tfdbg_dump_1`,
- `grpc://localhost:12345`.
-* <b>`node_name_regex_whitelist`</b>: Regular-expression whitelist for node_name,
- e.g., `"(weight_[0-9]+|bias_.*)"`
-* <b>`op_type_regex_whitelist`</b>: Regular-expression whitelist for the op type of
- nodes, e.g., `"(Variable|Add)"`.
- If both `node_name_regex_whitelist` and `op_type_regex_whitelist`
- are set, the two filtering operations will occur in a logical `AND`
- relation. In other words, a node will be included if and only if it
- hits both whitelists.
-* <b>`global_step`</b>: (`int`) Optional global_step count for this debug tensor
- watch.
-
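-For example, a minimal sketch (the dump path is illustrative, and the
-`tf_debug` import path is the one assumed throughout this page):
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-sess = tf.Session()
-a = tf.Variable(1.0, name="a")
-b = tf.add(a, a, name="b")
-sess.run(tf.global_variables_initializer())
-
-run_options = tf.RunOptions()
-tf_debug.watch_graph(run_options, sess.graph,
-                     debug_urls="file:///tmp/tfdbg_dump_1")
-sess.run(b, options=run_options)  # debug tensors are dumped to the URL
-```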
-
-- - -
-
-### `tf_debug.watch_graph_with_blacklists(run_options, graph, debug_ops='DebugIdentity', debug_urls=None, node_name_regex_blacklist=None, op_type_regex_blacklist=None, global_step=-1)` {#watch_graph_with_blacklists}
-
-Add debug tensor watches, blacklisting nodes and op types.
-
-This is similar to `watch_graph()`, but the node names and op types are
-blacklisted, instead of whitelisted.
-
-N.B.: Under certain circumstances, not all specified `Tensor`s will be
- actually watched (e.g., nodes that are constant-folded during runtime will
- not be watched).
-
-##### Args:
-
-
-* <b>`run_options`</b>: An instance of `config_pb2.RunOptions` to be modified.
-* <b>`graph`</b>: An instance of `ops.Graph`.
-* <b>`debug_ops`</b>: (`str` or `list` of `str`) name(s) of the debug op(s) to use.
-* <b>`debug_urls`</b>: URL(s) to send debug values to, e.g.,
- `file:///tmp/tfdbg_dump_1`, `grpc://localhost:12345`.
-* <b>`node_name_regex_blacklist`</b>: Regular-expression blacklist for node_name.
- This should be a string, e.g., `"(weight_[0-9]+|bias_.*)"`.
-* <b>`op_type_regex_blacklist`</b>: Regular-expression blacklist for the op type of
- nodes, e.g., `"(Variable|Add)"`.
- If both node_name_regex_blacklist and op_type_regex_blacklist
- are set, the two filtering operations will occur in a logical `OR`
- relation. In other words, a node will be excluded if it hits either of
- the two blacklists; a node will be included if and only if it hits
- neither of the blacklists.
-* <b>`global_step`</b>: (`int`) Optional global_step count for this debug tensor
- watch.
-
-
-- - -
-
-### `class tf_debug.DebugTensorDatum` {#DebugTensorDatum}
-
-A single tensor dumped by TensorFlow Debugger (tfdbg).
-
-Contains metadata about the dumped tensor, including `timestamp`,
-`node_name`, `output_slot`, `debug_op`, and path to the dump file
-(`file_path`).
-
-This type does not hold the generally space-expensive tensor value (numpy
-array). Instead, it points to the file from which the tensor value can be
-loaded (with the `get_tensor` method) if needed.
-- - -
-
-#### `tf_debug.DebugTensorDatum.__init__(dump_root, debug_dump_rel_path)` {#DebugTensorDatum.__init__}
-
-`DebugTensorDatum` constructor.
-
-##### Args:
-
-
-* <b>`dump_root`</b>: (`str`) Debug dump root directory.
-* <b>`debug_dump_rel_path`</b>: (`str`) Path to a debug dump file, relative to the
- `dump_root`. For example, suppose the debug dump root
- directory is `/tmp/tfdbg_1` and the dump file is at
- `/tmp/tfdbg_1/ns_1/node_a_0_DebugIdentity_123456789`, then
- the value of the debug_dump_rel_path should be
-     `ns_1/node_a_0_DebugIdentity_123456789`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the base file name of the dump file does not conform to
- the dump file naming pattern:
- `node_name`_`output_slot`_`debug_op`_`timestamp`
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.__repr__()` {#DebugTensorDatum.__repr__}
-
-
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.__str__()` {#DebugTensorDatum.__str__}
-
-
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.debug_op` {#DebugTensorDatum.debug_op}
-
-Name of the debug op.
-
-##### Returns:
-
- (`str`) debug op name (e.g., `DebugIdentity`).
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.dump_size_bytes` {#DebugTensorDatum.dump_size_bytes}
-
-Size of the dump file.
-
-Unit: byte.
-
-##### Returns:
-
- If the dump file exists, size of the dump file, in bytes.
- If the dump file does not exist, None.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.file_path` {#DebugTensorDatum.file_path}
-
-Path to the file which stores the value of the dumped tensor.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.get_tensor()` {#DebugTensorDatum.get_tensor}
-
-Get tensor from the dump (`Event`) file.
-
-##### Returns:
-
- The tensor loaded from the dump (`Event`) file.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.node_name` {#DebugTensorDatum.node_name}
-
-Name of the node from which the tensor value was dumped.
-
-##### Returns:
-
- (`str`) name of the node watched by the debug op.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.output_slot` {#DebugTensorDatum.output_slot}
-
-Output slot index from which the tensor value was dumped.
-
-##### Returns:
-
- (`int`) output slot index watched by the debug op.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.tensor_name` {#DebugTensorDatum.tensor_name}
-
-Name of the tensor watched by the debug op.
-
-##### Returns:
-
- (`str`) `Tensor` name, in the form of `node_name`:`output_slot`
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.timestamp` {#DebugTensorDatum.timestamp}
-
-Timestamp of when this tensor value was dumped.
-
-##### Returns:
-
- (`int`) The timestamp in microseconds.
-
-
-- - -
-
-#### `tf_debug.DebugTensorDatum.watch_key` {#DebugTensorDatum.watch_key}
-
-Watch key that identifies a debug watch on a tensor.
-
-##### Returns:
-
- (`str`) A watch key, in the form of `tensor_name`:`debug_op`.
-
-
-
-- - -
-
-### `class tf_debug.DebugDumpDir` {#DebugDumpDir}
-
-Data set from a debug-dump directory on filesystem.
-
-An instance of `DebugDumpDir` contains all `DebugTensorDatum` instances
-in a tfdbg dump root directory.
-- - -
-
-#### `tf_debug.DebugDumpDir.__init__(dump_root, partition_graphs=None, validate=True)` {#DebugDumpDir.__init__}
-
-`DebugDumpDir` constructor.
-
-##### Args:
-
-
-* <b>`dump_root`</b>: (`str`) path to the dump root directory.
-* <b>`partition_graphs`</b>: A repeated field of GraphDefs representing the
- partition graphs executed by the TensorFlow runtime.
-* <b>`validate`</b>: (`bool`) whether the dump files are to be validated against the
- partition graphs.
-
-##### Raises:
-
-
-* <b>`IOError`</b>: If dump_root does not exist as a directory.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.core_metadata` {#DebugDumpDir.core_metadata}
-
-Metadata about the `Session.run()` call from the core runtime.
-
-Of the three counters available in the return value, `global_step` is
-supplied by the caller of the debugged `Session.run()`, while
-`session_run_count` and `executor_step_count` are determined by the state
-of the core runtime, automatically. For the same fetch list, feed keys and
-debug tensor watch options, the same executor will be used and
-`executor_step_count` should increase by one at a time. However, runs with
-different fetch lists, feed keys and debug tensor watch options that all
-share the same `Session` object can lead to gaps in `session_run_count`.
-
-##### Returns:
-
- If core metadata are loaded, a `namedtuple` with the fields:
- `global_step`: A global step count supplied by the caller of
- `Session.run()`. It is optional to the caller. If the caller did not
- supply this parameter, its value will be -1.
- `session_run_count`: A counter for Run() calls to the underlying
- TensorFlow `Session` object.
- `executor_step_count`: A counter for invocations of a given runtime
- executor. The same executor is re-used for the same fetched tensors,
- target nodes, input feed keys and debug tensor watch options.
- `input_names`: Names of the input (feed) Tensors.
- `output_names`: Names of the output (fetched) Tensors.
- `target_nodes`: Names of the target nodes.
- If the core metadata have not been loaded, `None`.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.debug_watch_keys(node_name)` {#DebugDumpDir.debug_watch_keys}
-
-Get all tensor watch keys of the given node according to partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node.
-
-##### Returns:
-
- (`list` of `str`) all debug tensor watch keys. Returns an empty list if
- the node name does not correspond to any debug watch keys.
-
-##### Raises:
-
- `LookupError`: If debug watch information has not been loaded from
- partition graphs yet.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.devices()` {#DebugDumpDir.devices}
-
-Get the list of devices.
-
-##### Returns:
-
- (`list` of `str`) names of the devices.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.dumped_tensor_data` {#DebugDumpDir.dumped_tensor_data}
-
-
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.find(predicate, first_n=0)` {#DebugDumpDir.find}
-
-Find dumped tensor data by a certain predicate.
-
-##### Args:
-
-
-* <b>`predicate`</b>: A callable that takes two input arguments:
-
- ```python
- def predicate(debug_tensor_datum, tensor):
- # returns a bool
- ```
-
-  where `debug_tensor_datum` is an instance of `DebugTensorDatum`, which
-  carries the metadata, such as the `Tensor`'s node name, output slot,
-  timestamp, debug op name, etc.; and `tensor` is the dumped tensor value
-  as a `numpy.ndarray`.
-
-* <b>`first_n`</b>: (`int`) return only the first n `DebugTensorDatum` instances (in
-   time order) for which the predicate returns True. To return all the
-   `DebugTensorDatum` instances, let first_n be <= 0.
-
-##### Returns:
-
- A list of all `DebugTensorDatum` objects in this `DebugDumpDir` object
- for which predicate returns True, sorted in ascending order of the
- timestamp.
-
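-For example, a minimal sketch that scans a dump directory for bad numerical
-values (assuming a debugged run has already written dumps to the path shown):
-
-```python
-from tensorflow.python import debug as tf_debug
-
-dump = tf_debug.DebugDumpDir("/tmp/tfdbg_dump_1")
-bad_data = dump.find(tf_debug.has_inf_or_nan)
-for datum in bad_data:
-  print(datum.node_name, datum.output_slot, datum.debug_op)
-```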
-
-- - -
-
-#### `tf_debug.DebugDumpDir.get_dump_sizes_bytes(node_name, output_slot, debug_op)` {#DebugDumpDir.get_dump_sizes_bytes}
-
-Get the sizes of the dump files for a debug-dumped tensor.
-
-Unit of the file size: byte.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node that the tensor is produced by.
-* <b>`output_slot`</b>: (`int`) output slot index of tensor.
-* <b>`debug_op`</b>: (`str`) name of the debug op.
-
-##### Returns:
-
- (`list` of `int`): list of dump file sizes in bytes.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the tensor watch key does not exist in the debug dump data.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.get_rel_timestamps(node_name, output_slot, debug_op)` {#DebugDumpDir.get_rel_timestamps}
-
-Get the relative timestamps for a debug-dumped tensor.
-
-Relative timestamp means (absolute timestamp - `t0`), where `t0` is the
-absolute timestamp of the first dumped tensor in the dump root. The tensor
-may be dumped multiple times in the dump root directory, so a list of
-relative timestamps (`numpy.ndarray`) is returned.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node that the tensor is produced by.
-* <b>`output_slot`</b>: (`int`) output slot index of tensor.
-* <b>`debug_op`</b>: (`str`) name of the debug op.
-
-##### Returns:
-
- (`list` of `int`) list of relative timestamps.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the tensor watch key does not exist in the debug dump data.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.get_tensor_file_paths(node_name, output_slot, debug_op)` {#DebugDumpDir.get_tensor_file_paths}
-
-Get the file paths from a debug-dumped tensor.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node that the tensor is produced by.
-* <b>`output_slot`</b>: (`int`) output slot index of tensor.
-* <b>`debug_op`</b>: (`str`) name of the debug op.
-
-##### Returns:
-
- List of file path(s) loaded. This is a list because each debugged tensor
- may be dumped multiple times.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the tensor does not exist in the debug-dump data.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.get_tensors(node_name, output_slot, debug_op)` {#DebugDumpDir.get_tensors}
-
-Get the tensor values for a debug-dumped tensor.
-
-The tensor may be dumped multiple times in the dump root directory, so a
-list of tensors (`numpy.ndarray`) is returned.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node that the tensor is produced by.
-* <b>`output_slot`</b>: (`int`) output slot index of tensor.
-* <b>`debug_op`</b>: (`str`) name of the debug op.
-
-##### Returns:
-
- List of tensors (`numpy.ndarray`) loaded from the debug-dump file(s).
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the tensor does not exist in the debug-dump data.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.loaded_partition_graphs()` {#DebugDumpDir.loaded_partition_graphs}
-
-Test whether partition graphs have been loaded.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_attributes(node_name)` {#DebugDumpDir.node_attributes}
-
-Get the attributes of a node.
-
-##### Args:
-
-
-* <b>`node_name`</b>: Name of the node in question.
-
-##### Returns:
-
- Attributes of the node.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If no partition graphs have been loaded.
-* <b>`ValueError`</b>: If no node named node_name exists.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_device(node_name)` {#DebugDumpDir.node_device}
-
-Get the device of a node.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node.
-
-##### Returns:
-
- (`str`) name of the device on which the node is placed.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_exists(node_name)` {#DebugDumpDir.node_exists}
-
-Test if a node exists in the partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node to be checked.
-
-##### Returns:
-
- A boolean indicating whether the node exists.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If no partition graphs have been loaded yet.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_inputs(node_name, is_control=False)` {#DebugDumpDir.node_inputs}
-
-Get the inputs of the given node according to partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: Name of the node.
-* <b>`is_control`</b>: (`bool`) Whether control inputs, rather than non-control
- inputs, are to be returned.
-
-##### Returns:
-
- (`list` of `str`) inputs to the node, as a list of node names.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_op_type(node_name)` {#DebugDumpDir.node_op_type}
-
-Get the op type of the given node.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node.
-
-##### Returns:
-
- (`str`) op type of the node.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node op types have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_recipients(node_name, is_control=False)` {#DebugDumpDir.node_recipients}
-
-Get recipients of the given node's output according to partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: (`str`) name of the node.
-* <b>`is_control`</b>: (`bool`) whether control outputs, rather than non-control
- outputs, are to be returned.
-
-##### Returns:
-
-  (`list` of `str`) all recipients of the node's output, as a list of node
-  names.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.node_traceback(element_name)` {#DebugDumpDir.node_traceback}
-
-Try to retrieve the Python traceback of the node's construction.
-
-##### Args:
-
-
-* <b>`element_name`</b>: (`str`) Name of a graph element (node or tensor).
-
-##### Returns:
-
-  (list) The traceback list object, in the same format as that returned by
-  the `extract_stack()` function of Python's traceback module.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If Python graph is not available for traceback lookup.
-* <b>`KeyError`</b>: If the node cannot be found in the Python graph loaded.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.nodes()` {#DebugDumpDir.nodes}
-
-Get a list of all nodes from the partition graphs.
-
-##### Returns:
-
- All nodes' names, as a list of str.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If no partition graphs have been loaded.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.partition_graphs()` {#DebugDumpDir.partition_graphs}
-
-Get the partition graphs.
-
-##### Returns:
-
- Partition graphs as repeated fields of GraphDef.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If no partition graphs have been loaded.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.run_feed_keys_info` {#DebugDumpDir.run_feed_keys_info}
-
-Get a str representation of the feed_dict used in the Session.run() call.
-
-##### Returns:
-
- If the information is available, a `str` obtained from `repr(feed_dict)`.
- If the information is not available, `None`.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.run_fetches_info` {#DebugDumpDir.run_fetches_info}
-
-Get a str representation of the fetches used in the Session.run() call.
-
-##### Returns:
-
- If the information is available, a `str` obtained from `repr(fetches)`.
- If the information is not available, `None`.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.set_python_graph(python_graph)` {#DebugDumpDir.set_python_graph}
-
-Provide Python `Graph` object to the wrapper.
-
-Unlike the partition graphs, which are protobuf `GraphDef` objects, `Graph`
-is a Python object and carries additional information such as the traceback
-of the construction of the nodes in the graph.
-
-##### Args:
-
-
-* <b>`python_graph`</b>: (ops.Graph) The Python Graph object.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.size` {#DebugDumpDir.size}
-
-Total number of dumped tensors in the dump root directory.
-
-##### Returns:
-
- (`int`) total number of dumped tensors in the dump root directory.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.t0` {#DebugDumpDir.t0}
-
-Absolute timestamp of the first dumped tensor.
-
-##### Returns:
-
- (`int`) absolute timestamp of the first dumped tensor, in microseconds.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.transitive_inputs(node_name, include_control=True)` {#DebugDumpDir.transitive_inputs}
-
-Get the transitive inputs of the given node according to partition graphs.
-
-##### Args:
-
-
-* <b>`node_name`</b>: Name of the node
-* <b>`include_control`</b>: Include control inputs (True by default).
-
-##### Returns:
-
- (`list` of `str`) all transitive inputs to the node, as a list of node
- names.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: If node inputs and control inputs have not been loaded
- from partition graphs yet.
-* <b>`ValueError`</b>: If the node does not exist in partition graphs.
-
-
-- - -
-
-#### `tf_debug.DebugDumpDir.watch_key_to_data(debug_watch_key)` {#DebugDumpDir.watch_key_to_data}
-
-Get all `DebugTensorDatum` instances corresponding to a debug watch key.
-
-##### Args:
-
-
-* <b>`debug_watch_key`</b>: (`str`) debug watch key.
-
-##### Returns:
-
- A list of `DebugTensorDatum` instances that correspond to the debug watch
- key. If the watch key does not exist, returns an empty list.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the debug watch key does not exist.
-
-
-
-- - -
-
-### `tf_debug.load_tensor_from_event_file(event_file_path)` {#load_tensor_from_event_file}
-
-Load a tensor from an event file.
-
-Assumes that the event file contains an `Event` protobuf and that the
-`Event` protobuf contains a `Tensor` value.
-
-##### Args:
-
-
-* <b>`event_file_path`</b>: (`str`) path to the event file.
-
-##### Returns:
-
- The tensor value loaded from the event file, as a `numpy.ndarray`. For
- uninitialized Tensors, returns `None`. For Tensors of data types that
- cannot be converted to `numpy.ndarray` (e.g., `tf.resource`), return
- `None`.
-
-
-- - -
-
-### `tf_debug.has_inf_or_nan(datum, tensor)` {#has_inf_or_nan}
-
-A predicate for whether a tensor contains any bad numerical values.
-
-This predicate is common enough to merit definition in this module.
-Bad numerical values include `nan`s and `inf`s.
-The signature of this function follows the requirement of the method
-`DebugDumpDir.find()`.
-
-##### Args:
-
-
-* <b>`datum`</b>: (`DebugTensorDatum`) Datum metadata.
-* <b>`tensor`</b>: (`numpy.ndarray` or None) Value of the tensor. None represents
- an uninitialized tensor.
-
-##### Returns:
-
-  (`bool`) True if and only if tensor contains any nan or inf values.
-
-
-- - -
-
-### `class tf_debug.DumpingDebugHook` {#DumpingDebugHook}
-
-A debugger hook that dumps debug data to the filesystem.
-
-Can be used as a monitor/hook for `tf.train.MonitoredSession`s and
-`tf.contrib.learn`'s `Estimator`s and `Experiment`s.
-- - -
-
-#### `tf_debug.DumpingDebugHook.__enter__()` {#DumpingDebugHook.__enter__}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.__exit__(exec_type, exec_value, exec_tb)` {#DumpingDebugHook.__exit__}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.__init__(session_root, watch_fn=None, log_usage=True)` {#DumpingDebugHook.__init__}
-
-Create a hook that dumps debug data to the filesystem.
-
-##### Args:
-
-
-* <b>`session_root`</b>: See doc of
- `dumping_wrapper.DumpingDebugWrapperSession.__init__`.
-* <b>`watch_fn`</b>: See doc of
- `dumping_wrapper.DumpingDebugWrapperSession.__init__`.
-* <b>`log_usage`</b>: (bool) Whether usage is to be logged.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.after_create_session(session, coord)` {#DumpingDebugHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.after_run(run_context, run_values)` {#DumpingDebugHook.after_run}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.before_run(run_context)` {#DumpingDebugHook.before_run}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.begin()` {#DumpingDebugHook.begin}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.close()` {#DumpingDebugHook.close}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.end(session)` {#DumpingDebugHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.graph` {#DumpingDebugHook.graph}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.invoke_node_stepper(node_stepper, restore_variable_values_on_exit=True)` {#DumpingDebugHook.invoke_node_stepper}
-
-See doc of BaseDebugWrapperSession.invoke_node_stepper.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.on_run_end(request)` {#DumpingDebugHook.on_run_end}
-
-See doc of BaseDebugWrapperSession.on_run_end.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.on_run_start(request)` {#DumpingDebugHook.on_run_start}
-
-See doc of BaseDebugWrapperSession.on_run_start.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.on_session_init(request)` {#DumpingDebugHook.on_session_init}
-
-See doc of BaseDebugWrapperSession.on_session_init.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.partial_run(handle, fetches, feed_dict=None)` {#DumpingDebugHook.partial_run}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.partial_run_setup(fetches, feeds=None)` {#DumpingDebugHook.partial_run_setup}
-
-Sets up the feeds and fetches for partial runs in the session.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#DumpingDebugHook.run}
-
-Wrapper around Session.run() that inserts tensor watch options.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as the `fetches` arg to regular `Session.run()`.
-* <b>`feed_dict`</b>: Same as the `feed_dict` arg to regular `Session.run()`.
-* <b>`options`</b>: Same as the `options` arg to regular `Session.run()`.
-* <b>`run_metadata`</b>: Same as the `run_metadata` arg to regular `Session.run()`.
-
-##### Returns:
-
- Simply forwards the output of the wrapped `Session.run()` call.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: On invalid `OnRunStartAction` value.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.sess_str` {#DumpingDebugHook.sess_str}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugHook.session` {#DumpingDebugHook.session}
-
-
-
-
-
-- - -
-
-### `class tf_debug.DumpingDebugWrapperSession` {#DumpingDebugWrapperSession}
-
-Debug Session wrapper that dumps debug data to the filesystem.
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.__enter__()` {#DumpingDebugWrapperSession.__enter__}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.__exit__(exec_type, exec_value, exec_tb)` {#DumpingDebugWrapperSession.__exit__}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.__init__(sess, session_root, watch_fn=None, log_usage=True)` {#DumpingDebugWrapperSession.__init__}
-
-Constructor of DumpingDebugWrapperSession.
-
-##### Args:
-
-
-* <b>`sess`</b>: The TensorFlow `Session` object being wrapped.
-* <b>`session_root`</b>: (`str`) Path to the session root directory. Must be a
- directory that does not exist or an empty directory. If the directory
- does not exist, it will be created by the debugger core during debug
- [`Session.run()`](../../../g3doc/api_docs/python/client.md#session.run)
- calls.
- As the `run()` calls occur, subdirectories will be added to
-    `session_root`. The subdirectories' names have the following pattern:
- run_<epoch_time_stamp>_<uuid>
- E.g., run_1480734393835964_ad4c953a85444900ae79fc1b652fb324
-* <b>`watch_fn`</b>: (`Callable`) A Callable that can be used to define per-run
- debug ops and watched tensors. See the doc of
- `NonInteractiveDebugWrapperSession.__init__()` for details.
-* <b>`log_usage`</b>: (`bool`) whether the usage of this class is to be logged.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `session_root` is an existing and non-empty directory or
- if `session_root` is a file.
-
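-For example, a minimal sketch (the session root path is illustrative and
-must be a nonexistent or empty directory):
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-x = tf.constant([1.0, 2.0])
-sess = tf_debug.DumpingDebugWrapperSession(tf.Session(),
-                                           "/tmp/tfdbg_dumps_1")
-sess.run(x)  # writes dumps under a run_<timestamp>_<uuid> subdirectory
-```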
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.close()` {#DumpingDebugWrapperSession.close}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.graph` {#DumpingDebugWrapperSession.graph}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.invoke_node_stepper(node_stepper, restore_variable_values_on_exit=True)` {#DumpingDebugWrapperSession.invoke_node_stepper}
-
-See doc of BaseDebugWrapperSession.invoke_node_stepper.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.on_run_end(request)` {#DumpingDebugWrapperSession.on_run_end}
-
-See doc of BaseDebugWrapperSession.on_run_end.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.on_run_start(request)` {#DumpingDebugWrapperSession.on_run_start}
-
-See doc of BaseDebugWrapperSession.on_run_start.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.on_session_init(request)` {#DumpingDebugWrapperSession.on_session_init}
-
-See doc of BaseDebugWrapperSession.on_session_init.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.partial_run(handle, fetches, feed_dict=None)` {#DumpingDebugWrapperSession.partial_run}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.partial_run_setup(fetches, feeds=None)` {#DumpingDebugWrapperSession.partial_run_setup}
-
-Sets up the feeds and fetches for partial runs in the session.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#DumpingDebugWrapperSession.run}
-
-Wrapper around Session.run() that inserts tensor watch options.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as the `fetches` arg to regular `Session.run()`.
-* <b>`feed_dict`</b>: Same as the `feed_dict` arg to regular `Session.run()`.
-* <b>`options`</b>: Same as the `options` arg to regular `Session.run()`.
-* <b>`run_metadata`</b>: Same as the `run_metadata` arg to regular `Session.run()`.
-
-##### Returns:
-
- Simply forwards the output of the wrapped `Session.run()` call.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: On invalid `OnRunStartAction` value.
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.sess_str` {#DumpingDebugWrapperSession.sess_str}
-
-
-
-
-- - -
-
-#### `tf_debug.DumpingDebugWrapperSession.session` {#DumpingDebugWrapperSession.session}
-
-
-
-
-
-- - -
-
-### `class tf_debug.LocalCLIDebugHook` {#LocalCLIDebugHook}
-
-Command-line-interface debugger hook.
-
-Can be used as a monitor/hook for `tf.train.MonitoredSession`s and
-`tf.contrib.learn`'s `Estimator`s and `Experiment`s.
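-
-For example, a minimal sketch of attaching the hook to a
-`tf.train.MonitoredSession`:
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-x = tf.constant(1.0)
-hooks = [tf_debug.LocalCLIDebugHook()]
-with tf.train.MonitoredSession(hooks=hooks) as sess:
-  sess.run(x)  # the CLI takes over around each run() call
-```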
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.__enter__()` {#LocalCLIDebugHook.__enter__}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.__exit__(exec_type, exec_value, exec_tb)` {#LocalCLIDebugHook.__exit__}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.__init__(ui_type='curses')` {#LocalCLIDebugHook.__init__}
-
-Create a local debugger command-line interface (CLI) hook.
-
-##### Args:
-
-
-* <b>`ui_type`</b>: (str) user-interface type.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.add_tensor_filter(filter_name, tensor_filter)` {#LocalCLIDebugHook.add_tensor_filter}
-
-Add a tensor filter.
-
-See doc of `LocalCLIDebugWrapperSession.add_tensor_filter()` for details.
-Override default behavior to accommodate the possibility of this method being
-called prior to the initialization of the underlying
-`LocalCLIDebugWrapperSession` object.
-
-##### Args:
-
-
-* <b>`filter_name`</b>: See doc of `LocalCLIDebugWrapperSession.add_tensor_filter()`
- for details.
-* <b>`tensor_filter`</b>: See doc of
- `LocalCLIDebugWrapperSession.add_tensor_filter()` for details.
-
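-A minimal sketch, using the stock `tf_debug.has_inf_or_nan` filter (usable in
-the CLI as `run -f has_inf_or_nan`):
-
-```python
-from tensorflow.python import debug as tf_debug
-
-hook = tf_debug.LocalCLIDebugHook()
-hook.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)
-```
-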
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.after_create_session(session, coord)` {#LocalCLIDebugHook.after_create_session}
-
-Called when new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. It
-differs from the situation in which `begin` is called in two essential ways:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.after_run(run_context, run_values)` {#LocalCLIDebugHook.after_run}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.before_run(run_context)` {#LocalCLIDebugHook.before_run}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.begin()` {#LocalCLIDebugHook.begin}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.close()` {#LocalCLIDebugHook.close}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.end(session)` {#LocalCLIDebugHook.end}
-
-Called at the end of session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will be soon closed.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.graph` {#LocalCLIDebugHook.graph}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.invoke_node_stepper(node_stepper, restore_variable_values_on_exit=True)` {#LocalCLIDebugHook.invoke_node_stepper}
-
-Overrides method in base class to implement interactive node stepper.
-
-##### Args:
-
-
-* <b>`node_stepper`</b>: (`stepper.NodeStepper`) The underlying NodeStepper API
- object.
-* <b>`restore_variable_values_on_exit`</b>: (`bool`) Whether any variables whose
- values have been altered during this node-stepper invocation should be
- restored to their old values when this invocation ends.
-
-##### Returns:
-
- The same return values as the `Session.run()` call on the same fetches as
- the NodeStepper.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.on_run_end(request)` {#LocalCLIDebugHook.on_run_end}
-
-Overrides on-run-end callback.
-
-##### Actions taken:
-
- 1) Load the debug dump.
- 2) Bring up the Analyzer CLI.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnRunEndRequest`.
-
-##### Returns:
-
-  An instance of `OnRunEndResponse`.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.on_run_start(request)` {#LocalCLIDebugHook.on_run_start}
-
-Overrides on-run-start callback.
-
-Invokes the CLI to let the user choose what action to take: `run` /
-`invoke_stepper`.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnRunStartRequest`.
-
-##### Returns:
-
-  An instance of `OnRunStartResponse`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If user chooses to prematurely exit the debugger.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.on_session_init(request)` {#LocalCLIDebugHook.on_session_init}
-
-Overrides on-session-init callback.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnSessionInitRequest`.
-
-##### Returns:
-
- An instance of `OnSessionInitResponse`.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.partial_run(handle, fetches, feed_dict=None)` {#LocalCLIDebugHook.partial_run}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.partial_run_setup(fetches, feeds=None)` {#LocalCLIDebugHook.partial_run_setup}
-
-Sets up the feeds and fetches for partial runs in the session.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#LocalCLIDebugHook.run}
-
-Wrapper around Session.run() that inserts tensor watch options.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as the `fetches` arg to regular `Session.run()`.
-* <b>`feed_dict`</b>: Same as the `feed_dict` arg to regular `Session.run()`.
-* <b>`options`</b>: Same as the `options` arg to regular `Session.run()`.
-* <b>`run_metadata`</b>: Same as the `run_metadata` arg to regular `Session.run()`.
-
-##### Returns:
-
- Simply forwards the output of the wrapped `Session.run()` call.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: On invalid `OnRunStartAction` value.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.sess_str` {#LocalCLIDebugHook.sess_str}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugHook.session` {#LocalCLIDebugHook.session}
-
-
-
-
-
-- - -
-
-### `class tf_debug.LocalCLIDebugWrapperSession` {#LocalCLIDebugWrapperSession}
-
-Concrete subclass of BaseDebugWrapperSession implementing a local CLI.
-
-This class has all the methods that a `session.Session` object has, in order
-to support debugging with minimal code changes. Invoking its `run()` method
-will launch the command-line interface (CLI) of tfdbg.
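-
-A minimal usage sketch (`fetches` stands in for your own tensors or ops):
-
-```python
-import tensorflow as tf
-from tensorflow.python import debug as tf_debug
-
-sess = tf.Session()
-sess = tf_debug.LocalCLIDebugWrapperSession(sess)
-sess.run(fetches)  # launches the tfdbg CLI around the run
-```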
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.__enter__()` {#LocalCLIDebugWrapperSession.__enter__}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.__exit__(exec_type, exec_value, exec_tb)` {#LocalCLIDebugWrapperSession.__exit__}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.__init__(sess, dump_root=None, log_usage=True, ui_type='curses')` {#LocalCLIDebugWrapperSession.__init__}
-
-Constructor of LocalCLIDebugWrapperSession.
-
-##### Args:
-
-
-* <b>`sess`</b>: The TensorFlow `Session` object being wrapped.
-* <b>`dump_root`</b>: (`str`) optional path to the dump root directory. Must be a
- directory that does not exist or an empty directory. If the directory
- does not exist, it will be created by the debugger core during debug
- `run()` calls and removed afterwards.
-* <b>`log_usage`</b>: (`bool`) whether the usage of this class is to be logged.
-* <b>`ui_type`</b>: (`str`) requested UI type. Currently supported:
- (curses | readline)
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If dump_root is an existing and non-empty directory or if
- dump_root is a file.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.add_tensor_filter(filter_name, tensor_filter)` {#LocalCLIDebugWrapperSession.add_tensor_filter}
-
-Add a tensor filter.
-
-##### Args:
-
-
-* <b>`filter_name`</b>: (`str`) name of the filter.
-* <b>`tensor_filter`</b>: (`callable`) the filter callable. See the doc string of
- `DebugDumpDir.find()` for more details about its signature.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.close()` {#LocalCLIDebugWrapperSession.close}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.graph` {#LocalCLIDebugWrapperSession.graph}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.invoke_node_stepper(node_stepper, restore_variable_values_on_exit=True)` {#LocalCLIDebugWrapperSession.invoke_node_stepper}
-
-Overrides method in base class to implement interactive node stepper.
-
-##### Args:
-
-
-* <b>`node_stepper`</b>: (`stepper.NodeStepper`) The underlying NodeStepper API
- object.
-* <b>`restore_variable_values_on_exit`</b>: (`bool`) Whether any variables whose
- values have been altered during this node-stepper invocation should be
- restored to their old values when this invocation ends.
-
-##### Returns:
-
- The same return values as the `Session.run()` call on the same fetches as
- the NodeStepper.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.on_run_end(request)` {#LocalCLIDebugWrapperSession.on_run_end}
-
-Overrides on-run-end callback.
-
-##### Actions taken:
-
- 1) Load the debug dump.
- 2) Bring up the Analyzer CLI.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnRunEndRequest`.
-
-##### Returns:
-
-  An instance of `OnRunEndResponse`.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.on_run_start(request)` {#LocalCLIDebugWrapperSession.on_run_start}
-
-Overrides on-run-start callback.
-
-Invokes the CLI to let the user choose what action to take: `run` /
-`invoke_stepper`.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnRunStartRequest`.
-
-##### Returns:
-
-  An instance of `OnRunStartResponse`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If user chooses to prematurely exit the debugger.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.on_session_init(request)` {#LocalCLIDebugWrapperSession.on_session_init}
-
-Overrides on-session-init callback.
-
-##### Args:
-
-
-* <b>`request`</b>: An instance of `OnSessionInitRequest`.
-
-##### Returns:
-
- An instance of `OnSessionInitResponse`.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.partial_run(handle, fetches, feed_dict=None)` {#LocalCLIDebugWrapperSession.partial_run}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.partial_run_setup(fetches, feeds=None)` {#LocalCLIDebugWrapperSession.partial_run_setup}
-
-Sets up the feeds and fetches for partial runs in the session.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#LocalCLIDebugWrapperSession.run}
-
-Wrapper around Session.run() that inserts tensor watch options.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as the `fetches` arg to regular `Session.run()`.
-* <b>`feed_dict`</b>: Same as the `feed_dict` arg to regular `Session.run()`.
-* <b>`options`</b>: Same as the `options` arg to regular `Session.run()`.
-* <b>`run_metadata`</b>: Same as the `run_metadata` arg to regular `Session.run()`.
-
-##### Returns:
-
- Simply forwards the output of the wrapped `Session.run()` call.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: On invalid `OnRunStartAction` value.
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.sess_str` {#LocalCLIDebugWrapperSession.sess_str}
-
-
-
-
-- - -
-
-#### `tf_debug.LocalCLIDebugWrapperSession.session` {#LocalCLIDebugWrapperSession.session}
-
-
-
-
-
diff --git a/tensorflow/g3doc/api_docs/python/train.md b/tensorflow/g3doc/api_docs/python/train.md
deleted file mode 100644
index 06f0087ac2..0000000000
--- a/tensorflow/g3doc/api_docs/python/train.md
+++ /dev/null
@@ -1,6664 +0,0 @@
-<!-- This file is machine generated: DO NOT EDIT! -->
-
-# Training
-[TOC]
-
-Support for training models. See the @{$python/train} guide.
-
-- - -
-
-### `class tf.train.Optimizer` {#Optimizer}
-
-Base class for optimizers.
-
-This class defines the API to add Ops to train a model. You never use this
-class directly, but instead instantiate one of its subclasses such as
-`GradientDescentOptimizer`, `AdagradOptimizer`, or `MomentumOptimizer`.
-
-### Usage
-
-```python
-# Create an optimizer with the desired parameters.
-opt = GradientDescentOptimizer(learning_rate=0.1)
-# Add Ops to the graph to minimize a cost by updating a list of variables.
-# "cost" is a Tensor, and the list of variables contains tf.Variable
-# objects.
-opt_op = opt.minimize(cost, var_list=<list of variables>)
-```
-
-In the training program you will just have to run the returned Op.
-
-```python
-# Execute opt_op to do one step of training:
-opt_op.run()
-```
-
-### Processing gradients before applying them.
-
-Calling `minimize()` takes care of both computing the gradients and
-applying them to the variables. If you want to process the gradients
-before applying them you can instead use the optimizer in three steps:
-
-1. Compute the gradients with `compute_gradients()`.
-2. Process the gradients as you wish.
-3. Apply the processed gradients with `apply_gradients()`.
-
-Example:
-
-```python
-# Create an optimizer.
-opt = GradientDescentOptimizer(learning_rate=0.1)
-
-# Compute the gradients for a list of variables.
-grads_and_vars = opt.compute_gradients(loss, <list of variables>)
-
-# grads_and_vars is a list of tuples (gradient, variable). Do whatever you
-# need to the 'gradient' part, for example cap them, etc.
-capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]
-
-# Ask the optimizer to apply the capped gradients.
-opt.apply_gradients(capped_grads_and_vars)
-```
-
-- - -
-
-#### `tf.train.Optimizer.__init__(use_locking, name)` {#Optimizer.__init__}
-
-Create a new Optimizer.
-
-This must be called by the constructors of subclasses.
-
-##### Args:
-
-
-* <b>`use_locking`</b>: Bool. If True, use locks to prevent concurrent updates
-    to variables.
-* <b>`name`</b>: A non-empty string. The name to use for accumulators created
- for the optimizer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If name is malformed.
-
-
-
-- - -
-
-#### `tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#Optimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-- - -
-
-#### `tf.train.Optimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#Optimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
-    `loss`.  Defaults to the list of variables collected in the graph
-    under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.Optimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#Optimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Defaults to the
-    name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-
-### Gating Gradients
-
-Both `minimize()` and `compute_gradients()` accept a `gate_gradients`
-argument that controls the degree of parallelism during the application of
-the gradients.
-
-The possible values are: `GATE_NONE`, `GATE_OP`, and `GATE_GRAPH`.
-
-<b>`GATE_NONE`</b>: Compute and apply gradients in parallel. This provides
-the maximum parallelism in execution, at the cost of some non-reproducibility
-in the results. For example, the two gradients of `matmul` depend on the input
-values: with `GATE_NONE` one of the gradients could be applied to one of the
-inputs _before_ the other gradient is computed, resulting in non-reproducible
-results.
-
-<b>`GATE_OP`</b>: For each Op, make sure all gradients are computed before
-they are used. This prevents race conditions for Ops that generate gradients
-for multiple inputs where the gradients depend on the inputs.
-
-<b>`GATE_GRAPH`</b>: Make sure all gradients for all variables are computed
-before any one of them is used. This provides the least parallelism but can
-be useful if you want to process all gradients before applying any of them.
-
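-For example, to request the most conservative gating (a sketch; `loss` is
-illustrative):
-
-```python
-opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
-# GATE_GRAPH: compute all gradients before applying any of them.
-train_op = opt.minimize(loss, gate_gradients=tf.train.Optimizer.GATE_GRAPH)
-```
-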
-### Slots
-
-Some optimizer subclasses, such as `MomentumOptimizer` and `AdagradOptimizer`,
-allocate and manage additional variables associated with the variables to
-train. These are called <i>Slots</i>. Slots have names and you can ask the
-optimizer for the names of the slots that it uses. Once you have a slot name
-you can ask the optimizer for the variable it created to hold the slot value.
-
-This can be useful if you want to log or debug a training algorithm, report
-stats about the slots, etc.
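-
-A short sketch (assuming a `loss` tensor and a variable `some_var` from your
-own graph):
-
-```python
-opt = tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9)
-train_op = opt.minimize(loss)
-print(opt.get_slot_names())                        # e.g. ['momentum']
-momentum_var = opt.get_slot(some_var, 'momentum')  # accumulator for `some_var`
-```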
-
-- - -
-
-#### `tf.train.Optimizer.get_slot_names()` {#Optimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.Optimizer.get_slot(var, name)` {#Optimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.Optimizer.get_name()` {#Optimizer.get_name}
-
-
-
-
-
-- - -
-
-### `class tf.train.GradientDescentOptimizer` {#GradientDescentOptimizer}
-
-Optimizer that implements the gradient descent algorithm.
-
-- - -
-
-#### `tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent')` {#GradientDescentOptimizer.__init__}
-
-Construct a new gradient descent optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning
- rate to use.
-* <b>`use_locking`</b>: If True use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "GradientDescent".
-
-
-
-- - -
-
-### `class tf.train.AdadeltaOptimizer` {#AdadeltaOptimizer}
-
-Optimizer that implements the Adadelta algorithm.
-
-See [M. D. Zeiler](http://arxiv.org/abs/1212.5701)
-([pdf](http://arxiv.org/pdf/1212.5701v1.pdf))
-- - -
-
-#### `tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')` {#AdadeltaOptimizer.__init__}
-
-Construct a new Adadelta optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`rho`</b>: A `Tensor` or a floating point value. The decay rate.
-* <b>`epsilon`</b>: A `Tensor` or a floating point value. A constant epsilon used
-    to better condition the grad update.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "Adadelta".
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdadeltaOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Defaults to the
-    name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdadeltaOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
-    `loss`.  Defaults to the list of variables collected in the graph
-    under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.get_name()` {#AdadeltaOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.get_slot(var, name)` {#AdadeltaOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.get_slot_names()` {#AdadeltaOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.AdadeltaOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdadeltaOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-
-- - -
-
-### `class tf.train.AdagradOptimizer` {#AdagradOptimizer}
-
-Optimizer that implements the Adagrad algorithm.
-
-See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)
-or this
-[intro](http://cs.stanford.edu/~ppasupat/a9online/uploads/proximal_notes.pdf).
-- - -
-
-#### `tf.train.AdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, use_locking=False, name='Adagrad')` {#AdagradOptimizer.__init__}
-
-Construct a new Adagrad optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`initial_accumulator_value`</b>: A floating point value.
- Starting value for the accumulators, must be positive.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "Adagrad".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `initial_accumulator_value` is invalid.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdagradOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Defaults to the
-    name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdagradOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
-    `loss`.  Defaults to the list of variables collected in the graph
-    under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.get_name()` {#AdagradOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.get_slot(var, name)` {#AdagradOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.get_slot_names()` {#AdagradOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.AdagradOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdagradOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-
-- - -
-
-### `class tf.train.AdagradDAOptimizer` {#AdagradDAOptimizer}
-
-Adagrad Dual Averaging algorithm for sparse linear models.
-
-See this [paper](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf).
-
-This optimizer takes care of regularization of unseen features in a mini-batch
-by updating them when they are seen with a closed-form update rule that is
-equivalent to having updated them on every mini-batch.
-
-AdagradDA is typically used when there is a need for large sparsity in the
-trained model. This optimizer only guarantees sparsity for linear models. Be
-careful when using AdagradDA for deep networks as it will require careful
-initialization of the gradient accumulators for it to train.
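-
-A construction sketch (`loss` is illustrative; the optimizer needs the
-`global_step` tensor at construction time):
-
-```python
-global_step = tf.Variable(0, trainable=False, name="global_step")
-opt = tf.train.AdagradDAOptimizer(learning_rate=0.1, global_step=global_step,
-                                  l1_regularization_strength=0.01)
-train_op = opt.minimize(loss, global_step=global_step)
-```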
-- - -
-
-#### `tf.train.AdagradDAOptimizer.__init__(learning_rate, global_step, initial_gradient_squared_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='AdagradDA')` {#AdagradDAOptimizer.__init__}
-
-Construct a new AdagradDA optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`global_step`</b>: A `Tensor` containing the current training step number.
-* <b>`initial_gradient_squared_accumulator_value`</b>: A floating point value.
- Starting value for the accumulators, must be positive.
-* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "AdagradDA".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `initial_gradient_squared_accumulator_value` is
- invalid.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdagradDAOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Defaults to the
-    name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdagradDAOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
-    `loss`.  Defaults to the list of variables collected in the graph
-    under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.get_name()` {#AdagradDAOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.get_slot(var, name)` {#AdagradDAOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.get_slot_names()` {#AdagradDAOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.AdagradDAOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdagradDAOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-
-- - -
-
-### `class tf.train.MomentumOptimizer` {#MomentumOptimizer}
-
-Optimizer that implements the Momentum algorithm.
-
-- - -
-
-#### `tf.train.MomentumOptimizer.__init__(learning_rate, momentum, use_locking=False, name='Momentum', use_nesterov=False)` {#MomentumOptimizer.__init__}
-
-Construct a new Momentum optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`momentum`</b>: A `Tensor` or a floating point value. The momentum.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "Momentum".
-* <b>`use_nesterov`</b>: If `True` use Nesterov Momentum.
-    See [Sutskever et al., 2013](http://jmlr.org/proceedings/papers/v28/sutskever13.pdf).
-
-
-
-- - -
-
-### `class tf.train.AdamOptimizer` {#AdamOptimizer}
-
-Optimizer that implements the Adam algorithm.
-
-See [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)
-([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
-- - -
-
-#### `tf.train.AdamOptimizer.__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')` {#AdamOptimizer.__init__}
-
-Construct a new Adam optimizer.
-
-Initialization:
-
-```
-m_0 <- 0 (Initialize initial 1st moment vector)
-v_0 <- 0 (Initialize initial 2nd moment vector)
-t <- 0 (Initialize timestep)
-```
-
-The update rule for `variable` with gradient `g` uses an optimization
-described at the end of section 2 of the paper:
-
-```
-t <- t + 1
-lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
-
-m_t <- beta1 * m_{t-1} + (1 - beta1) * g
-v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g
-variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
-```
-
-The default value of 1e-8 for epsilon might not be a good default in
-general. For example, when training an Inception network on ImageNet, a
-current good choice is 1.0 or 0.1.
-
-Note that in the dense implementation of this algorithm, m_t, v_t and
-variable will be updated even if g is zero, but in the sparse implementation,
-m_t, v_t and variable will not be updated in iterations where g is zero.
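-
-A plain-Python sketch of one step of the dense update rule above (scalar
-state; names are illustrative):
-
-```python
-import math
-
-def adam_step(variable, g, m, v, t,
-              learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
-    t += 1
-    lr_t = learning_rate * math.sqrt(1 - beta2**t) / (1 - beta1**t)
-    m = beta1 * m + (1 - beta1) * g          # 1st moment estimate
-    v = beta2 * v + (1 - beta2) * g * g      # 2nd moment estimate
-    variable -= lr_t * m / (math.sqrt(v) + epsilon)
-    return variable, m, v, t
-```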
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning rate.
-* <b>`beta1`</b>: A float value or a constant float tensor.
- The exponential decay rate for the 1st moment estimates.
-* <b>`beta2`</b>: A float value or a constant float tensor.
- The exponential decay rate for the 2nd moment estimates.
-* <b>`epsilon`</b>: A small constant for numerical stability.
-* <b>`use_locking`</b>: If True use locks for update operations.
-* <b>`name`</b>: Optional name for the operations created when applying gradients.
- Defaults to "Adam".
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#AdamOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Defaults to the
-    name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#AdamOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
-    `loss`.  Defaults to the list of variables collected in the graph
-    under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.get_name()` {#AdamOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.get_slot(var, name)` {#AdamOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.get_slot_names()` {#AdamOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.AdamOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#AdamOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-
-- - -
-
-### `class tf.train.FtrlOptimizer` {#FtrlOptimizer}
-
-Optimizer that implements the FTRL algorithm.
-
-See this [paper](
-https://www.eecs.tufts.edu/~dsculley/papers/ad-click-prediction.pdf).
-- - -
-
-#### `tf.train.FtrlOptimizer.__init__(learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='Ftrl')` {#FtrlOptimizer.__init__}
-
-Construct a new FTRL optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A float value or a constant float `Tensor`.
-* <b>`learning_rate_power`</b>: A float value, must be less than or equal to zero.
-* <b>`initial_accumulator_value`</b>: The starting value for accumulators.
- Only positive values are allowed.
-* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "Ftrl".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#FtrlOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This is the second part of `minimize()`. It returns an `Operation` that
-applies gradients.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- `compute_gradients()`.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Defaults to the
-    name passed to the `Optimizer` constructor.
-
-##### Returns:
-
- An `Operation` that applies the specified gradients. If `global_step`
- was not None, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `grads_and_vars` is malformed.
-* <b>`ValueError`</b>: If none of the variables have gradients.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#FtrlOptimizer.compute_gradients}
-
-Compute gradients of `loss` for the variables in `var_list`.
-
-This is the first part of `minimize()`. It returns a list
-of (gradient, variable) pairs where "gradient" is the gradient
-for "variable". Note that "gradient" can be a `Tensor`, an
-`IndexedSlices`, or `None` if there is no gradient for the
-given variable.
-
-##### Args:
-
-
-* <b>`loss`</b>: A Tensor containing the value to minimize.
-* <b>`var_list`</b>: Optional list of `tf.Variable` to update to minimize
-    `loss`.  Defaults to the list of variables collected in the graph
-    under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- A list of (gradient, variable) pairs. Variable is always present, but
- gradient can be `None`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `var_list` contains anything other than `Variable` objects.
-* <b>`ValueError`</b>: If some arguments are invalid.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.get_name()` {#FtrlOptimizer.get_name}
-
-
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.get_slot(var, name)` {#FtrlOptimizer.get_slot}
-
-Return a slot named `name` created for `var` by the Optimizer.
-
-Some `Optimizer` subclasses use additional variables. For example
-`Momentum` and `Adagrad` use variables to accumulate updates. This method
-gives access to these `Variable` objects if for some reason you need them.
-
-Use `get_slot_names()` to get the list of slot names created by the
-`Optimizer`.
-
-##### Args:
-
-
-* <b>`var`</b>: A variable passed to `minimize()` or `apply_gradients()`.
-* <b>`name`</b>: A string.
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.get_slot_names()` {#FtrlOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-See `get_slot()`.
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.FtrlOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#FtrlOptimizer.minimize}
-
-Add operations to minimize `loss` by updating `var_list`.
-
-This method simply combines calls to `compute_gradients()` and
-`apply_gradients()`. If you want to process the gradients before applying
-them, call `compute_gradients()` and `apply_gradients()` explicitly instead
-of using this function.
-
-##### Args:
-
-
-* <b>`loss`</b>: A `Tensor` containing the value to minimize.
-* <b>`global_step`</b>: Optional `Variable` to increment by one after the
- variables have been updated.
-* <b>`var_list`</b>: Optional list of `Variable` objects to update to minimize
- `loss`. Defaults to the list of variables collected in the graph
- under the key `GraphKeys.TRAINABLE_VARIABLES`.
-* <b>`gate_gradients`</b>: How to gate the computation of gradients. Can be
- `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Valid values are defined in the class `AggregationMethod`.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`name`</b>: Optional name for the returned operation.
-* <b>`grad_loss`</b>: Optional. A `Tensor` holding the gradient computed for `loss`.
-
-##### Returns:
-
- An Operation that updates the variables in `var_list`. If `global_step`
- was not `None`, that operation also increments `global_step`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If some of the variables are not `Variable` objects.
-
-
-
-- - -
-
-### `class tf.train.ProximalGradientDescentOptimizer` {#ProximalGradientDescentOptimizer}
-
-Optimizer that implements the proximal gradient descent algorithm.
-
-See this [paper](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf).
-
-- - -
-
-#### `tf.train.ProximalGradientDescentOptimizer.__init__(learning_rate, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='ProximalGradientDescent')` {#ProximalGradientDescentOptimizer.__init__}
-
-Construct a new proximal gradient descent optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning
- rate to use.
-* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`use_locking`</b>: If True use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
-    gradients. Defaults to "ProximalGradientDescent".
-
-
-
-- - -
-
-### `class tf.train.ProximalAdagradOptimizer` {#ProximalAdagradOptimizer}
-
-Optimizer that implements the Proximal Adagrad algorithm.
-
-See this [paper](http://papers.nips.cc/paper/3793-efficient-learning-using-forward-backward-splitting.pdf).
-
-- - -
-
-#### `tf.train.ProximalAdagradOptimizer.__init__(learning_rate, initial_accumulator_value=0.1, l1_regularization_strength=0.0, l2_regularization_strength=0.0, use_locking=False, name='ProximalAdagrad')` {#ProximalAdagradOptimizer.__init__}
-
-Construct a new ProximalAdagrad optimizer.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A `Tensor` or a floating point value. The learning rate.
-* <b>`initial_accumulator_value`</b>: A floating point value.
- Starting value for the accumulators, must be positive.
-* <b>`l1_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`l2_regularization_strength`</b>: A float value, must be greater than or
- equal to zero.
-* <b>`use_locking`</b>: If `True` use locks for update operations.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
-    gradients. Defaults to "ProximalAdagrad".
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If the `initial_accumulator_value` is invalid.
-
-
-
-- - -
-
-### `class tf.train.RMSPropOptimizer` {#RMSPropOptimizer}
-
-Optimizer that implements the RMSProp algorithm.
-
-See the [paper](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf).
-
-- - -
-
-#### `tf.train.RMSPropOptimizer.__init__(learning_rate, decay=0.9, momentum=0.0, epsilon=1e-10, use_locking=False, centered=False, name='RMSProp')` {#RMSPropOptimizer.__init__}
-
-Construct a new RMSProp optimizer.
-
-Note that in the dense implementation of this algorithm, m_t and v_t will be
-updated even if g is zero, but in the sparse implementation, m_t and v_t
-will not be updated in iterations where g is zero.
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A Tensor or a floating point value. The learning rate.
-* <b>`decay`</b>: Discounting factor for the history/coming gradient.
-* <b>`momentum`</b>: A scalar tensor.
-* <b>`epsilon`</b>: Small value to avoid zero denominator.
-* <b>`use_locking`</b>: If True use locks for update operation.
-* <b>`centered`</b>: If True, gradients are normalized by the estimated variance of
- the gradient; if False, by the uncentered second moment. Setting this to
- True may help with training, but is slightly more expensive in terms of
- computation and memory. Defaults to False.
-* <b>`name`</b>: Optional name prefix for the operations created when applying
- gradients. Defaults to "RMSProp".
-
-
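-For illustration, a minimal usage sketch (the `loss` tensor is an assumption):
-
-```python
-# Hypothetical training step with RMSProp; `loss` is assumed to exist.
-optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001, decay=0.9,
-                                      momentum=0.9, centered=True)
-train_op = optimizer.minimize(loss)
-```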
-
-- - -
-
-### `tf.gradients(ys, xs, grad_ys=None, name='gradients', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#gradients}
-
-Constructs symbolic partial derivatives of the sum of `ys` w.r.t. each `x` in `xs`.
-
-`ys` and `xs` are each a `Tensor` or a list of tensors. `grad_ys`
-is a list of `Tensor`, holding the gradients received by the
-`ys`. The list must be the same length as `ys`.
-
-`gradients()` adds ops to the graph to output the partial
-derivatives of `ys` with respect to `xs`. It returns a list of
-`Tensor` of length `len(xs)` where each tensor is the `sum(dy/dx)`
-for y in `ys`.
-
-`grad_ys` is a list of tensors of the same length as `ys` that holds
-the initial gradients for each y in `ys`. When `grad_ys` is None,
-we fill in a tensor of '1's of the shape of y for each y in `ys`. A
-user can provide their own initial `grad_ys` to compute the
-derivatives using a different initial gradient for each y (e.g., if
-one wanted to weight the gradient differently for each value in
-each y).
-
-##### Args:
-
-
-* <b>`ys`</b>: A `Tensor` or list of tensors to be differentiated.
-* <b>`xs`</b>: A `Tensor` or list of tensors to be used for differentiation.
-* <b>`grad_ys`</b>: Optional. A `Tensor` or list of tensors the same size as
- `ys` and holding the gradients computed for each y in `ys`.
-* <b>`name`</b>: Optional name to use for grouping all the gradient ops together.
- Defaults to 'gradients'.
-* <b>`colocate_gradients_with_ops`</b>: If True, try colocating gradients with
- the corresponding op.
-* <b>`gate_gradients`</b>: If True, add a tuple around the gradients returned
- for an operation. This avoids some race conditions.
-* <b>`aggregation_method`</b>: Specifies the method used to combine gradient terms.
- Accepted values are constants defined in the class `AggregationMethod`.
-
-##### Returns:
-
- A list of `sum(dy/dx)` for each x in `xs`.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: if one of the operations between `x` and `y` does not
- have a registered gradient function.
-* <b>`ValueError`</b>: if the arguments are invalid.
-
-
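-As a quick, self-contained sketch (not from the original docstring):
-
-```python
-import tensorflow as tf
-
-x = tf.constant(3.0)
-y = x * x
-dy_dx = tf.gradients(y, [x])  # a list with one tensor: dy/dx
-
-with tf.Session() as sess:
-  print(sess.run(dy_dx))  # [6.0], since d(x^2)/dx = 2x at x = 3
-```
-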
-- - -
-
-### `class tf.AggregationMethod` {#AggregationMethod}
-
-A class listing aggregation methods used to combine gradients.
-
-Computing partial derivatives can require aggregating gradient
-contributions. This class lists the various methods that can
-be used to combine gradients in the graph:
-
-* `ADD_N`: All of the gradient terms are summed as part of one
- operation using the "AddN" op. It has the property that all
- gradients must be ready before any aggregation is performed.
-* `DEFAULT`: The system-chosen default aggregation method.
-
-- - -
-
-### `tf.stop_gradient(input, name=None)` {#stop_gradient}
-
-Stops gradient computation.
-
-When executed in a graph, this op outputs its input tensor as-is.
-
-When building ops to compute gradients, this op prevents the contribution of
-its inputs to be taken into account. Normally, the gradient generator adds ops
-to a graph to compute the derivatives of a specified 'loss' by recursively
-finding out inputs that contributed to its computation. If you insert this op
-in the graph, its inputs are masked from the gradient generator. They are not
-taken into account for computing gradients.
-
-This is useful any time you want to compute a value with TensorFlow but need
-to pretend that the value was a constant. Some examples include:
-
-* The *EM* algorithm where the *M-step* should not involve backpropagation
- through the output of the *E-step*.
-* Contrastive divergence training of Boltzmann machines where, when
- differentiating the energy function, the training must not backpropagate
- through the graph that generated the samples from the model.
-* Adversarial training, where no backprop should happen through the adversarial
- example generation process.
-
-##### Args:
-
-
-* <b>`input`</b>: A `Tensor`.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A `Tensor`. Has the same type as `input`.
-
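-A small sketch of the effect (not from the original docstring):
-
-```python
-import tensorflow as tf
-
-x = tf.constant(2.0)
-y = tf.stop_gradient(x * x)   # y is treated as a constant by the gradient generator
-z = x * y
-dz_dx = tf.gradients(z, [x])  # only the direct path through x contributes
-
-with tf.Session() as sess:
-  print(sess.run(dz_dx))  # [4.0]; without stop_gradient, d(x^3)/dx would be 12.0
-```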
-
-- - -
-
-### `tf.hessians(ys, xs, name='hessians', colocate_gradients_with_ops=False, gate_gradients=False, aggregation_method=None)` {#hessians}
-
-Constructs the Hessian of the sum of `ys` with respect to `x` in `xs`.
-
-`hessians()` adds ops to the graph to output the Hessian matrix of `ys`
-with respect to `xs`. It returns a list of `Tensor` of length `len(xs)`
-where each tensor is the Hessian of `sum(ys)`. This function currently
-only supports evaluating the Hessian with respect to (a list of) one-
-dimensional tensors.
-
-The Hessian is a matrix of second-order partial derivatives of a scalar
-tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details).
-
-##### Args:
-
-
-* <b>`ys`</b>: A `Tensor` or list of tensors to be differentiated.
-* <b>`xs`</b>: A `Tensor` or list of tensors to be used for differentiation.
-* <b>`name`</b>: Optional name to use for grouping all the gradient ops together.
- Defaults to 'hessians'.
-* <b>`colocate_gradients_with_ops`</b>: See `gradients()` documentation for details.
-* <b>`gate_gradients`</b>: See `gradients()` documentation for details.
-* <b>`aggregation_method`</b>: See `gradients()` documentation for details.
-
-##### Returns:
-
- A list of Hessian matrices of `sum(y)` for each `x` in `xs`.
-
-##### Raises:
-
-
-* <b>`LookupError`</b>: if one of the operations between `xs` and `ys` does not
- have a registered gradient function.
-* <b>`ValueError`</b>: if the arguments are invalid or not supported. Currently,
- this function only supports one-dimensional `x` in `xs`.
-
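-A minimal sketch for a one-dimensional `x` (not from the original docstring):
-
-```python
-import tensorflow as tf
-
-x = tf.constant([1.0, 2.0])
-y = tf.reduce_sum(x * x)     # a quadratic, so the Hessian is 2 * identity
-hess = tf.hessians(y, x)[0]
-
-with tf.Session() as sess:
-  print(sess.run(hess))  # [[2. 0.]
-                         #  [0. 2.]]
-```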
-
-- - -
-
-### `tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)` {#clip_by_value}
-
-Clips tensor values to a specified min and max.
-
-Given a tensor `t`, this operation returns a tensor of the same type and
-shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`.
-Any values less than `clip_value_min` are set to `clip_value_min`. Any values
-greater than `clip_value_max` are set to `clip_value_max`.
-
-##### Args:
-
-
-* <b>`t`</b>: A `Tensor`.
-* <b>`clip_value_min`</b>: A 0-D (scalar) `Tensor`. The minimum value to clip by.
-* <b>`clip_value_max`</b>: A 0-D (scalar) `Tensor`. The maximum value to clip by.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A clipped `Tensor`.
-
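-A minimal sketch (not from the original docstring):
-
-```python
-import tensorflow as tf
-
-t = tf.constant([-3.0, -0.5, 0.0, 2.0, 7.0])
-clipped = tf.clip_by_value(t, clip_value_min=-1.0, clip_value_max=1.0)
-
-with tf.Session() as sess:
-  print(sess.run(clipped))  # [-1., -0.5, 0., 1., 1.]
-```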
-
-- - -
-
-### `tf.clip_by_norm(t, clip_norm, axes=None, name=None)` {#clip_by_norm}
-
-Clips tensor values to a maximum L2-norm.
-
-Given a tensor `t`, and a maximum clip value `clip_norm`, this operation
-normalizes `t` so that its L2-norm is less than or equal to `clip_norm`,
-along the dimensions given in `axes`. Specifically, in the default case
-where all dimensions are used for calculation, if the L2-norm of `t` is
-already less than or equal to `clip_norm`, then `t` is not modified. If
-the L2-norm is greater than `clip_norm`, then this operation returns a
-tensor of the same type and shape as `t` with its values set to:
-
-`t * clip_norm / l2norm(t)`
-
-In this case, the L2-norm of the output tensor is `clip_norm`.
-
-As another example, if `t` is a matrix and `axes == [1]`, then each row
-of the output will have L2-norm equal to `clip_norm`. If `axes == [0]`
-instead, each column of the output will be clipped.
-
-This operation is typically used to clip gradients before applying them with
-an optimizer.
-
-##### Args:
-
-
-* <b>`t`</b>: A `Tensor`.
-* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
-* <b>`axes`</b>: A 1-D (vector) `Tensor` of type int32 containing the dimensions
- to use for computing the L2-norm. If `None` (the default), uses all
- dimensions.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A clipped `Tensor`.
-
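-A minimal sketch (not from the original docstring):
-
-```python
-import tensorflow as tf
-
-t = tf.constant([3.0, 4.0])            # L2-norm is 5.0
-clipped = tf.clip_by_norm(t, clip_norm=1.0)
-
-with tf.Session() as sess:
-  print(sess.run(clipped))  # ~[0.6, 0.8], rescaled so the L2-norm is 1.0
-```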
-
-- - -
-
-### `tf.clip_by_average_norm(t, clip_norm, name=None)` {#clip_by_average_norm}
-
-Clips tensor values to a maximum average L2-norm.
-
-Given a tensor `t`, and a maximum clip value `clip_norm`, this operation
-normalizes `t` so that its average L2-norm is less than or equal to
-`clip_norm`. Specifically, if the average L2-norm is already less than or
-equal to `clip_norm`, then `t` is not modified. If the average L2-norm is
-greater than `clip_norm`, then this operation returns a tensor of the same
-type and shape as `t` with its values set to:
-
-`t * clip_norm / l2norm_avg(t)`
-
-In this case, the average L2-norm of the output tensor is `clip_norm`.
-
-This operation is typically used to clip gradients before applying them with
-an optimizer.
-
-##### Args:
-
-
-* <b>`t`</b>: A `Tensor`.
-* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. A maximum clipping value.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A clipped `Tensor`.
-
-
-- - -
-
-### `tf.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None)` {#clip_by_global_norm}
-
-Clips values of multiple tensors by the ratio of the sum of their norms.
-
-Given a tuple or list of tensors `t_list`, and a clipping ratio `clip_norm`,
-this operation returns a list of clipped tensors `list_clipped`
-and the global norm (`global_norm`) of all tensors in `t_list`. Optionally,
-if you've already computed the global norm for `t_list`, you can specify
-the global norm with `use_norm`.
-
-To perform the clipping, the values `t_list[i]` are set to:
-
- t_list[i] * clip_norm / max(global_norm, clip_norm)
-
-where:
-
- global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))
-
-If `clip_norm > global_norm` then the entries in `t_list` remain as they are,
-otherwise they're all shrunk by the global ratio.
-
-Any entries of `t_list` that are `None` are ignored.
-
-This is the correct way to perform gradient clipping (for example, see
-[Pascanu et al., 2012](http://arxiv.org/abs/1211.5063)
-([pdf](http://arxiv.org/pdf/1211.5063.pdf))).
-
-However, it is slower than `clip_by_norm()` because all the parameters must be
-ready before the clipping operation can be performed.
-
-##### Args:
-
-
-* <b>`t_list`</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
-* <b>`clip_norm`</b>: A 0-D (scalar) `Tensor` > 0. The clipping ratio.
-* <b>`use_norm`</b>: A 0-D (scalar) `Tensor` of type `float` (optional). The global
- norm to use. If not provided, `global_norm()` is used to compute the norm.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
-
-* <b>`list_clipped`</b>: A list of `Tensors` of the same type as `t_list`.
-* <b>`global_norm`</b>: A 0-D (scalar) `Tensor` representing the global norm.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `t_list` is not a sequence.
-
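-A common gradient-clipping pattern, shown as a sketch; the `loss` tensor and
-the optimizer choice here are assumptions:
-
-```python
-# Hypothetical clipped training step; `loss` is assumed to be defined.
-optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
-grads_and_vars = optimizer.compute_gradients(loss)
-grads, variables = zip(*grads_and_vars)
-clipped_grads, global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)
-train_op = optimizer.apply_gradients(list(zip(clipped_grads, variables)))
-```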
-
-- - -
-
-### `tf.global_norm(t_list, name=None)` {#global_norm}
-
-Computes the global norm of multiple tensors.
-
-Given a tuple or list of tensors `t_list`, this operation returns the
-global norm of the elements in all tensors in `t_list`. The global norm is
-computed as:
-
-`global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))`
-
-Any entries in `t_list` that are `None` are ignored.
-
-##### Args:
-
-
-* <b>`t_list`</b>: A tuple or list of mixed `Tensors`, `IndexedSlices`, or None.
-* <b>`name`</b>: A name for the operation (optional).
-
-##### Returns:
-
- A 0-D (scalar) `Tensor` of type `float`.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `t_list` is not a sequence.
-
-
-- - -
-
-### `tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#exponential_decay}
-
-Applies exponential decay to the learning rate.
-
-When training a model, it is often recommended to lower the learning rate as
-the training progresses. This function applies an exponential decay function
-to a provided initial learning rate. It requires a `global_step` value to
-compute the decayed learning rate. You can just pass a TensorFlow variable
-that you increment at each training step.
-
-The function returns the decayed learning rate. It is computed as:
-
-```python
-decayed_learning_rate = learning_rate *
- decay_rate ^ (global_step / decay_steps)
-```
-
-If the argument `staircase` is `True`, then `global_step / decay_steps` is an
-integer division and the decayed learning rate follows a staircase function.
-
-Example: decay every 100000 steps with a base of 0.96:
-
-```python
-...
-global_step = tf.Variable(0, trainable=False)
-starter_learning_rate = 0.1
-learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
- 100000, 0.96, staircase=True)
-# Passing global_step to minimize() will increment it at each step.
-learning_step = (
- tf.train.GradientDescentOptimizer(learning_rate)
- .minimize(...my loss..., global_step=global_step)
-)
-```
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The initial learning rate.
-* <b>`global_step`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
- Global step to use for the decay computation. Must not be negative.
-* <b>`decay_steps`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
- Must be positive. See the decay computation above.
-* <b>`decay_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The decay rate.
-* <b>`staircase`</b>: Boolean. If `True`, decay the learning rate at discrete intervals.
-* <b>`name`</b>: String. Optional name of the operation. Defaults to
- 'ExponentialDecay'.
-
-##### Returns:
-
- A scalar `Tensor` of the same type as `learning_rate`. The decayed
- learning rate.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `global_step` is not supplied.
-
-
-- - -
-
-### `tf.train.inverse_time_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#inverse_time_decay}
-
-Applies inverse time decay to the initial learning rate.
-
-When training a model, it is often recommended to lower the learning rate as
-the training progresses. This function applies an inverse decay function
-to a provided initial learning rate. It requires a `global_step` value to
-compute the decayed learning rate. You can just pass a TensorFlow variable
-that you increment at each training step.
-
-The function returns the decayed learning rate. It is computed as:
-
-```python
-decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_steps)
-```
-
-Example: decay 1/t with a rate of 0.5:
-
-```python
-...
-global_step = tf.Variable(0, trainable=False)
-learning_rate = 0.1
-decay_steps = 1.0
-decay_rate = 0.5
-learning_rate = tf.train.inverse_time_decay(learning_rate, global_step,
-                                            decay_steps, decay_rate)
-
-# Passing global_step to minimize() will increment it at each step.
-learning_step = (
- tf.train.GradientDescentOptimizer(learning_rate)
- .minimize(...my loss..., global_step=global_step)
-)
-```
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The initial learning rate.
-* <b>`global_step`</b>: A Python number.
- Global step to use for the decay computation. Must not be negative.
-* <b>`decay_steps`</b>: How often to apply decay.
-* <b>`decay_rate`</b>: A Python number. The decay rate.
-* <b>`staircase`</b>: Whether to apply decay in a discrete staircase fashion,
- as opposed to a continuous one.
-* <b>`name`</b>: String. Optional name of the operation. Defaults to
- 'InverseTimeDecay'.
-
-##### Returns:
-
- A scalar `Tensor` of the same type as `learning_rate`. The decayed
- learning rate.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `global_step` is not supplied.
-
-
-- - -
-
-### `tf.train.natural_exp_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None)` {#natural_exp_decay}
-
-Applies natural exponential decay to the initial learning rate.
-
-When training a model, it is often recommended to lower the learning rate as
-the training progresses. This function applies an exponential decay function
-to a provided initial learning rate. It requires a `global_step` value to
-compute the decayed learning rate. You can just pass a TensorFlow variable
-that you increment at each training step.
-
-The function returns the decayed learning rate. It is computed as:
-
-```python
-decayed_learning_rate = learning_rate * exp(-decay_rate * global_step / decay_steps)
-```
-
-Example: decay exponentially with a rate of 0.5:
-
-```python
-...
-global_step = tf.Variable(0, trainable=False)
-learning_rate = 0.1
-decay_steps = 1
-k = 0.5
-learning_rate = tf.train.natural_exp_decay(learning_rate, global_step,
-                                           decay_steps, k)
-
-# Passing global_step to minimize() will increment it at each step.
-learning_step = (
- tf.train.GradientDescentOptimizer(learning_rate)
- .minimize(...my loss..., global_step=global_step)
-)
-```
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The initial learning rate.
-* <b>`global_step`</b>: A Python number.
- Global step to use for the decay computation. Must not be negative.
-* <b>`decay_steps`</b>: How often to apply decay.
-* <b>`decay_rate`</b>: A Python number. The decay rate.
-* <b>`staircase`</b>: Whether to apply decay in a discrete staircase fashion,
- as opposed to a continuous one.
-* <b>`name`</b>: String. Optional name of the operation. Defaults to
- 'ExponentialTimeDecay'.
-
-##### Returns:
-
- A scalar `Tensor` of the same type as `learning_rate`. The decayed
- learning rate.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `global_step` is not supplied.
-
-
-- - -
-
-### `tf.train.piecewise_constant(x, boundaries, values, name=None)` {#piecewise_constant}
-
-Piecewise constant from boundaries and interval values.
-
-Example: use a learning rate that's 1.0 for the first 100000 steps, 0.5
- for steps 100001 to 110000, and 0.1 for any additional steps.
-
-```python
-global_step = tf.Variable(0, trainable=False)
-boundaries = [100000, 110000]
-values = [1.0, 0.5, 0.1]
-learning_rate = tf.train.piecewise_constant(global_step, boundaries, values)
-
-# Later, whenever we perform an optimization step, we increment global_step.
-```
-
-##### Args:
-
-
-* <b>`x`</b>: A 0-D scalar `Tensor`. Must be one of the following types: `float32`,
- `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`.
-* <b>`boundaries`</b>: A list of `Tensor`s or `int`s or `float`s with strictly
- increasing entries, and with all elements having the same type as `x`.
-* <b>`values`</b>: A list of `Tensor`s or `float`s or `int`s that specifies the values
- for the intervals defined by `boundaries`. It should have one more element
- than `boundaries`, and all elements should have the same type.
-* <b>`name`</b>: A string. Optional name of the operation. Defaults to
- 'PiecewiseConstant'.
-
-##### Returns:
-
- A 0-D Tensor. Its value is `values[0]` when `x <= boundaries[0]`,
- `values[1]` when `x > boundaries[0]` and `x <= boundaries[1]`, ...,
- and `values[-1]` when `x > boundaries[-1]`.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if types of `x` and `boundaries` do not match, or types of all
- `values` do not match.
-
-
-- - -
-
-### `tf.train.polynomial_decay(learning_rate, global_step, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False, name=None)` {#polynomial_decay}
-
-Applies a polynomial decay to the learning rate.
-
-It is commonly observed that a monotonically decreasing learning rate, whose
-degree of change is carefully chosen, results in a better performing model.
-This function applies a polynomial decay function to a provided initial
-`learning_rate` to reach an `end_learning_rate` in the given `decay_steps`.
-
-It requires a `global_step` value to compute the decayed learning rate. You
-can just pass a TensorFlow variable that you increment at each training step.
-
-The function returns the decayed learning rate. It is computed as:
-
-```python
-global_step = min(global_step, decay_steps)
-decayed_learning_rate = (learning_rate - end_learning_rate) *
- (1 - global_step / decay_steps) ^ (power) +
- end_learning_rate
-
-```
-
-If `cycle` is True then a multiple of `decay_steps` is used, the first one
-that is bigger than `global_step`.
-
-```python
-decay_steps = decay_steps * ceil(global_step / decay_steps)
-decayed_learning_rate = (learning_rate - end_learning_rate) *
- (1 - global_step / decay_steps) ^ (power) +
- end_learning_rate
-
-```
-
-Example: decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5):
-
-```python
-...
-global_step = tf.Variable(0, trainable=False)
-starter_learning_rate = 0.1
-end_learning_rate = 0.01
-decay_steps = 10000
-learning_rate = tf.train.polynomial_decay(starter_learning_rate, global_step,
- decay_steps, end_learning_rate,
- power=0.5)
-# Passing global_step to minimize() will increment it at each step.
-learning_step = (
- tf.train.GradientDescentOptimizer(learning_rate)
- .minimize(...my loss..., global_step=global_step)
-)
-```
-
-##### Args:
-
-
-* <b>`learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The initial learning rate.
-* <b>`global_step`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
- Global step to use for the decay computation. Must not be negative.
-* <b>`decay_steps`</b>: A scalar `int32` or `int64` `Tensor` or a Python number.
- Must be positive. See the decay computation above.
-* <b>`end_learning_rate`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The minimal end learning rate.
-* <b>`power`</b>: A scalar `float32` or `float64` `Tensor` or a
- Python number. The power of the polynomial. Defaults to linear, i.e. 1.0.
-* <b>`cycle`</b>: A boolean, whether or not it should cycle beyond decay_steps.
-* <b>`name`</b>: String. Optional name of the operation. Defaults to
- 'PolynomialDecay'.
-
-##### Returns:
-
- A scalar `Tensor` of the same type as `learning_rate`. The decayed
- learning rate.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `global_step` is not supplied.
-
-
-- - -
-
-### `class tf.train.ExponentialMovingAverage` {#ExponentialMovingAverage}
-
-Maintains moving averages of variables by employing an exponential decay.
-
-When training a model, it is often beneficial to maintain moving averages of
-the trained parameters. Evaluations that use averaged parameters sometimes
-produce significantly better results than the final trained values.
-
-The `apply()` method adds shadow copies of trained variables and adds ops that
-maintain a moving average of the trained variables in their shadow copies.
-It is used when building the training model. The ops that maintain moving
-averages are typically run after each training step.
-The `average()` and `average_name()` methods give access to the shadow
-variables and their names. They are useful when building an evaluation
-model, or when restoring a model from a checkpoint file. They help use the
-moving averages in place of the last trained values for evaluations.
-
-The moving averages are computed using exponential decay. You specify the
-decay value when creating the `ExponentialMovingAverage` object. The shadow
-variables are initialized with the same initial values as the trained
-variables. When you run the ops to maintain the moving averages, each
-shadow variable is updated with the formula:
-
- `shadow_variable -= (1 - decay) * (shadow_variable - variable)`
-
-This is mathematically equivalent to the classic formula below, but the use
-of an `assign_sub` op (the `"-="` in the formula) allows concurrent lockless
-updates to the variables:
-
- `shadow_variable = decay * shadow_variable + (1 - decay) * variable`
-
-Reasonable values for `decay` are close to 1.0, typically in the
-multiple-nines range: 0.999, 0.9999, etc.
-
-Example usage when creating a training model:
-
-```python
-# Create variables.
-var0 = tf.Variable(...)
-var1 = tf.Variable(...)
-# ... use the variables to build a training model...
-...
-# Create an op that applies the optimizer. This is what we usually
-# would use as a training op.
-opt_op = opt.minimize(my_loss, var_list=[var0, var1])
-
-# Create an ExponentialMovingAverage object
-ema = tf.train.ExponentialMovingAverage(decay=0.9999)
-
-# Create the shadow variables, and add ops to maintain moving averages
-# of var0 and var1.
-maintain_averages_op = ema.apply([var0, var1])
-
-# Create an op that will update the moving averages after each training
-# step. This is what we will use in place of the usual training op.
-with tf.control_dependencies([opt_op]):
- training_op = tf.group(maintain_averages_op)
-
-...train the model by running training_op...
-```
-
-There are two ways to use the moving averages for evaluations:
-
-* Build a model that uses the shadow variables instead of the variables.
- For this, use the `average()` method which returns the shadow variable
- for a given variable.
-* Build a model normally but load the checkpoint files to evaluate by using
- the shadow variable names. For this use the `average_name()` method. See
- the [Saver class](../../api_docs/python/train.md#Saver) for more
- information on restoring saved variables.
-
-Example of restoring the shadow variable values:
-
-```python
-# Create a Saver that loads variables from their saved shadow values.
-shadow_var0_name = ema.average_name(var0)
-shadow_var1_name = ema.average_name(var1)
-saver = tf.train.Saver({shadow_var0_name: var0, shadow_var1_name: var1})
-saver.restore(...checkpoint filename...)
-# var0 and var1 now hold the moving average values
-```
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.__init__(decay, num_updates=None, zero_debias=False, name='ExponentialMovingAverage')` {#ExponentialMovingAverage.__init__}
-
-Creates a new ExponentialMovingAverage object.
-
-The `apply()` method has to be called to create shadow variables and add
-ops to maintain moving averages.
-
-The optional `num_updates` parameter allows one to tweak the decay rate
-dynamically. It is typical to pass the count of training steps, usually
-kept in a variable that is incremented at each step, in which case the
-decay rate is lower at the start of training. This makes moving averages
-move faster. If passed, the actual decay rate used is:
-
- `min(decay, (1 + num_updates) / (10 + num_updates))`
-
-##### Args:
-
-
-* <b>`decay`</b>: Float. The decay to use.
-* <b>`num_updates`</b>: Optional count of number of updates applied to variables.
-* <b>`zero_debias`</b>: If `True`, zero debias moving-averages that are initialized
- with tensors.
-* <b>`name`</b>: String. Optional prefix name to use for the name of ops added in
- `apply()`.
-
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.apply(var_list=None)` {#ExponentialMovingAverage.apply}
-
-Maintains moving averages of variables.
-
-`var_list` must be a list of `Variable` or `Tensor` objects. This method
-creates shadow variables for all elements of `var_list`. Shadow variables
-for `Variable` objects are initialized to the variable's initial value.
-They will be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection.
-For `Tensor` objects, the shadow variables are initialized to 0 and zero
-debiased (see docstring in `assign_moving_average` for more details).
-
-Shadow variables are created with `trainable=False` and added to the
-`GraphKeys.ALL_VARIABLES` collection. They will be returned by calls to
-`tf.global_variables()`.
-
-Returns an op that updates all shadow variables as described above.
-
-Note that `apply()` can be called multiple times with different lists of
-variables.
-
-##### Args:
-
-
-* <b>`var_list`</b>: A list of Variable or Tensor objects. The variables
- and Tensors must be of types float16, float32, or float64.
-
-##### Returns:
-
- An Operation that updates the moving averages.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the arguments are not all float16, float32, or float64.
-* <b>`ValueError`</b>: If the moving average of one of the variables is already
- being computed.
-
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.average_name(var)` {#ExponentialMovingAverage.average_name}
-
-Returns the name of the `Variable` holding the average for `var`.
-
-The typical scenario for `ExponentialMovingAverage` is to compute moving
-averages of variables during training, and restore the variables from the
-computed moving averages during evaluations.
-
-To restore variables, you have to know the name of the shadow variables.
-That name and the original variable can then be passed to a `Saver()` object
-to restore the variable from the moving average value with:
- `saver = tf.train.Saver({ema.average_name(var): var})`
-
-`average_name()` can be called whether or not `apply()` has been called.
-
-##### Args:
-
-
-* <b>`var`</b>: A `Variable` object.
-
-##### Returns:
-
- A string: The name of the variable that will be used or was used
- by the `ExponentialMovingAverage` class to hold the moving average of
- `var`.
-
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.average(var)` {#ExponentialMovingAverage.average}
-
-Returns the `Variable` holding the average of `var`.
-
-##### Args:
-
-
-* <b>`var`</b>: A `Variable` object.
-
-##### Returns:
-
- A `Variable` object or `None` if the moving average of `var`
- is not maintained.
-
-
-- - -
-
-#### `tf.train.ExponentialMovingAverage.variables_to_restore(moving_avg_variables=None)` {#ExponentialMovingAverage.variables_to_restore}
-
-Returns a map of names to `Variables` to restore.
-
-If a variable has a moving average, use the moving average variable name as
-the restore name; otherwise, use the variable name.
-
-For example,
-
-```python
- variables_to_restore = ema.variables_to_restore()
- saver = tf.train.Saver(variables_to_restore)
-```
-
-Below is an example of such mapping:
-
-```
- conv/batchnorm/gamma/ExponentialMovingAverage: conv/batchnorm/gamma,
- conv_4/conv2d_params/ExponentialMovingAverage: conv_4/conv2d_params,
- global_step: global_step
-```
-
-##### Args:
-
-
-* <b>`moving_avg_variables`</b>: a list of variables that should be restored
- using their moving-average names. If None, it defaults to
- variables.moving_average_variables() + variables.trainable_variables().
-
-##### Returns:
-
- A map from restore_names to variables. The restore_name can be the
- moving_average version of the variable name if it exists, or the original
- variable name.
-
-
-
-- - -
-
-### `class tf.train.Coordinator` {#Coordinator}
-
-A coordinator for threads.
-
-This class implements a simple mechanism to coordinate the termination of a
-set of threads.
-
-#### Usage:
-
-```python
-# Create a coordinator.
-coord = Coordinator()
-# Start a number of threads, passing the coordinator to each of them.
-...start thread 1...(coord, ...)
-...start thread N...(coord, ...)
-# Wait for all the threads to terminate.
-coord.join(threads)
-```
-
-Any of the threads can call `coord.request_stop()` to ask for all the threads
-to stop. To cooperate with the requests, each thread must check for
-`coord.should_stop()` on a regular basis. `coord.should_stop()` returns
-`True` as soon as `coord.request_stop()` has been called.
-
-A typical thread running with a coordinator will do something like:
-
-```python
-while not coord.should_stop():
- ...do some work...
-```
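-
-A small, self-contained sketch with plain Python threads (the worker logic
-here is hypothetical):
-
-```python
-import threading
-import tensorflow as tf
-
-def worker(coord, n_steps=100):
-  # A hypothetical worker: does a few steps of work, then requests a stop.
-  step = 0
-  while not coord.should_stop():
-    step += 1  # ...do some work...
-    if step >= n_steps:
-      coord.request_stop()
-
-coord = tf.train.Coordinator()
-threads = [threading.Thread(target=worker, args=(coord,)) for _ in range(4)]
-for t in threads:
-  t.start()
-coord.join(threads)
-```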
-
-#### Exception handling:
-
-A thread can report an exception to the coordinator as part of the
-`request_stop()` call. The exception will be re-raised from the
-`coord.join()` call.
-
-Thread code:
-
-```python
-try:
- while not coord.should_stop():
- ...do some work...
-except Exception as e:
- coord.request_stop(e)
-```
-
-Main code:
-
-```python
-try:
- ...
- coord = Coordinator()
- # Start a number of threads, passing the coordinator to each of them.
- ...start thread 1...(coord, ...)
- ...start thread N...(coord, ...)
- # Wait for all the threads to terminate.
- coord.join(threads)
-except Exception as e:
- ...exception that was passed to coord.request_stop()
-```
-
-To simplify the thread implementation, the Coordinator provides a
-context handler `stop_on_exception()` that automatically requests a stop if
-an exception is raised. Using the context handler the thread code above
-can be written as:
-
-```python
-with coord.stop_on_exception():
- while not coord.should_stop():
- ...do some work...
-```
-
-#### Grace period for stopping:
-
-After a thread has called `coord.request_stop()` the other threads have a
-fixed time to stop; this is called the 'stop grace period' and defaults to 2
-minutes. If any of the threads is still alive after the grace period expires,
-`coord.join()` raises a `RuntimeError` reporting the laggards.
-
-```python
-try:
- ...
- coord = Coordinator()
- # Start a number of threads, passing the coordinator to each of them.
- ...start thread 1...(coord, ...)
- ...start thread N...(coord, ...)
- # Wait for all the threads to terminate, give them 10s grace period
- coord.join(threads, stop_grace_period_secs=10)
-except RuntimeError:
- ...one of the threads took more than 10s to stop after request_stop()
- ...was called.
-except Exception:
- ...exception that was passed to coord.request_stop()
-```
-- - -
-
-#### `tf.train.Coordinator.__init__(clean_stop_exception_types=None)` {#Coordinator.__init__}
-
-Create a new Coordinator.
-
-##### Args:
-
-
-* <b>`clean_stop_exception_types`</b>: Optional tuple of Exception types that should
- cause a clean stop of the coordinator. If an exception of one of these
- types is reported to `request_stop(ex)` the coordinator will behave as
- if `request_stop(None)` was called. Defaults to
- `(tf.errors.OutOfRangeError,)` which is used by input queues to signal
- the end of input. When feeding training data from a Python iterator it
- is common to add `StopIteration` to this list.
-
-
-- - -
-
-#### `tf.train.Coordinator.clear_stop()` {#Coordinator.clear_stop}
-
-Clears the stop flag.
-
-After this is called, calls to `should_stop()` will return `False`.
-
-
-- - -
-
-#### `tf.train.Coordinator.join(threads=None, stop_grace_period_secs=120, ignore_live_threads=False)` {#Coordinator.join}
-
-Wait for threads to terminate.
-
-This call blocks until a set of threads have terminated. The set of threads
-is the union of the threads passed in the `threads` argument and the list
-of threads that registered with the coordinator by calling
-`Coordinator.register_thread()`.
-
-After the threads stop, if an `exc_info` was passed to `request_stop`, that
-exception is re-raised.
-
-Grace period handling: When `request_stop()` is called, threads are given
-'stop_grace_period_secs' seconds to terminate. If any of them is still
-alive after that period expires, a `RuntimeError` is raised. Note that if
-an `exc_info` was passed to `request_stop()` then it is raised instead of
-that `RuntimeError`.
-
-##### Args:
-
-
-* <b>`threads`</b>: List of `threading.Threads`. The started threads to join in
- addition to the registered threads.
-* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
- `request_stop()` has been called.
-* <b>`ignore_live_threads`</b>: If `False`, raises an error if any of the threads are
- still alive after `stop_grace_period_secs`.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If any thread is still alive after `request_stop()`
- is called and the grace period expires.
-
-
-- - -
-
-#### `tf.train.Coordinator.joined` {#Coordinator.joined}
-
-
-
-
-- - -
-
-#### `tf.train.Coordinator.raise_requested_exception()` {#Coordinator.raise_requested_exception}
-
-If an exception has been passed to `request_stop`, this raises it.
-
-
-- - -
-
-#### `tf.train.Coordinator.register_thread(thread)` {#Coordinator.register_thread}
-
-Register a thread to join.
-
-##### Args:
-
-
-* <b>`thread`</b>: A Python thread to join.
-
-
-- - -
-
-#### `tf.train.Coordinator.request_stop(ex=None)` {#Coordinator.request_stop}
-
-Request that the threads stop.
-
-After this is called, calls to `should_stop()` will return `True`.
-
-Note: If an exception is being passed in, it must be in the context of
-handling the exception (i.e. `try: ... except Exception as ex: ...`) and not
-a newly created one.
-
-##### Args:
-
-
-* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
- `sys.exc_info()`. If this is the first call to `request_stop()` the
- corresponding exception is recorded and re-raised from `join()`.
-
-
-- - -
-
-#### `tf.train.Coordinator.should_stop()` {#Coordinator.should_stop}
-
-Check if stop was requested.
-
-##### Returns:
-
- True if a stop was requested.
-
-
-- - -
-
-#### `tf.train.Coordinator.stop_on_exception()` {#Coordinator.stop_on_exception}
-
-Context manager to request stop when an Exception is raised.
-
-Code that uses a coordinator must catch exceptions and pass
-them to the `request_stop()` method to stop the other threads
-managed by the coordinator.
-
-This context handler simplifies the exception handling.
-Use it as follows:
-
-```python
-with coord.stop_on_exception():
- # Any exception raised in the body of the with
- # clause is reported to the coordinator before terminating
- # the execution of the body.
- ...body...
-```
-
-This is completely equivalent to the slightly longer code:
-
-```python
-try:
- ...body...
-except Exception as ex:
- coord.request_stop(ex)
-```
-
-##### Yields:
-
- nothing.
-
-
-- - -
-
-#### `tf.train.Coordinator.wait_for_stop(timeout=None)` {#Coordinator.wait_for_stop}
-
-Wait till the Coordinator is told to stop.
-
-##### Args:
-
-
-* <b>`timeout`</b>: Float. Sleep for up to that many seconds waiting for
- should_stop() to become True.
-
-##### Returns:
-
- True if the Coordinator is told to stop, False if the timeout expired.
-
-
-
-- - -
-
-### `class tf.train.QueueRunner` {#QueueRunner}
-
-Holds a list of enqueue operations for a queue, each to be run in a thread.
-
-Queues are a convenient TensorFlow mechanism to compute tensors
-asynchronously using multiple threads. For example in the canonical 'Input
-Reader' setup one set of threads generates filenames in a queue; a second set
-of threads reads records from the files, processes them, and enqueues tensors
-on a second queue; a third set of threads dequeues these input records to
-construct batches and runs them through training operations.
-
-There are several delicate issues when running multiple threads that way:
-closing the queues in sequence as the input is exhausted, correctly catching
-and reporting exceptions, etc.
-
-The `QueueRunner`, combined with the `Coordinator`, helps handle these issues.
-- - -
-
-#### `tf.train.QueueRunner.__init__(queue=None, enqueue_ops=None, close_op=None, cancel_op=None, queue_closed_exception_types=None, queue_runner_def=None, import_scope=None)` {#QueueRunner.__init__}
-
-Create a QueueRunner.
-
-On construction the `QueueRunner` adds an op to close the queue. That op
-will be run if the enqueue ops raise exceptions.
-
-When you later call the `create_threads()` method, the `QueueRunner` will
-create one thread for each op in `enqueue_ops`. Each thread will run its
-enqueue op in parallel with the other threads. The enqueue ops do not have
-to all be the same op, but it is expected that they all enqueue tensors in
-`queue`.
-
-##### Args:
-
-
-* <b>`queue`</b>: A `Queue`.
-* <b>`enqueue_ops`</b>: List of enqueue ops to run in threads later.
-* <b>`close_op`</b>: Op to close the queue. Pending enqueue ops are preserved.
-* <b>`cancel_op`</b>: Op to close the queue and cancel pending enqueue ops.
-* <b>`queue_closed_exception_types`</b>: Optional tuple of Exception types that
- indicate that the queue has been closed when raised during an enqueue
- operation. Defaults to `(tf.errors.OutOfRangeError,)`. Another common
- case includes `(tf.errors.OutOfRangeError, tf.errors.CancelledError)`,
- when some of the enqueue ops may dequeue from other Queues.
-* <b>`queue_runner_def`</b>: Optional `QueueRunnerDef` protocol buffer. If specified,
- recreates the QueueRunner from its contents. `queue_runner_def` and the
- other arguments are mutually exclusive.
-* <b>`import_scope`</b>: Optional `string`. Name scope to add. Only used when
- initializing from protocol buffer.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `queue_runner_def` and `queue` are both specified.
-* <b>`ValueError`</b>: If `queue` or `enqueue_ops` are not provided when not
- restoring from `queue_runner_def`.
-
-
-- - -
-
-#### `tf.train.QueueRunner.cancel_op` {#QueueRunner.cancel_op}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.close_op` {#QueueRunner.close_op}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.create_threads(sess, coord=None, daemon=False, start=False)` {#QueueRunner.create_threads}
-
-Create threads to run the enqueue ops for the given session.
-
-This method requires a session in which the graph was launched. It creates
-a list of threads, optionally starting them. There is one thread for each
-op passed in `enqueue_ops`.
-
-The `coord` argument is an optional coordinator that the threads will use
-to terminate together and report exceptions. If a coordinator is given,
-this method starts an additional thread to close the queue when the
-coordinator requests a stop.
-
-If previously created threads for the given session are still running, no
-new threads will be created.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session`.
-* <b>`coord`</b>: Optional `Coordinator` object for reporting errors and checking
- stop conditions.
-* <b>`daemon`</b>: Boolean. If `True` make the threads daemon threads.
-* <b>`start`</b>: Boolean. If `True` starts the threads. If `False` the
- caller must call the `start()` method of the returned threads.
-
-##### Returns:
-
- A list of threads.
-
-
-- - -
-
-#### `tf.train.QueueRunner.enqueue_ops` {#QueueRunner.enqueue_ops}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.exceptions_raised` {#QueueRunner.exceptions_raised}
-
-Exceptions raised but not handled by the `QueueRunner` threads.
-
-Exceptions raised in queue runner threads are handled in one of two ways
-depending on whether or not a `Coordinator` was passed to
-`create_threads()`:
-
-* With a `Coordinator`, exceptions are reported to the coordinator and
- forgotten by the `QueueRunner`.
-* Without a `Coordinator`, exceptions are captured by the `QueueRunner` and
- made available in this `exceptions_raised` property.
-
-##### Returns:
-
- A list of Python `Exception` objects. The list is empty if no exception
- was captured. (No exceptions are captured when using a Coordinator.)
-
-
-- - -
-
-#### `tf.train.QueueRunner.from_proto(queue_runner_def, import_scope=None)` {#QueueRunner.from_proto}
-
-Returns a `QueueRunner` object created from `queue_runner_def`.
-
-
-- - -
-
-#### `tf.train.QueueRunner.name` {#QueueRunner.name}
-
-The string name of the underlying Queue.
-
-
-- - -
-
-#### `tf.train.QueueRunner.queue` {#QueueRunner.queue}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.queue_closed_exception_types` {#QueueRunner.queue_closed_exception_types}
-
-
-
-
-- - -
-
-#### `tf.train.QueueRunner.to_proto(export_scope=None)` {#QueueRunner.to_proto}
-
-Converts this `QueueRunner` to a `QueueRunnerDef` protocol buffer.
-
-##### Args:
-
-
-* <b>`export_scope`</b>: Optional `string`. Name scope to remove.
-
-##### Returns:
-
- A `QueueRunnerDef` protocol buffer, or `None` if the `QueueRunner` is not in
- the specified name scope.
-
-
-
-- - -
-
-### `class tf.train.LooperThread` {#LooperThread}
-
-A thread that runs code repeatedly, optionally on a timer.
-
-This thread class is intended to be used with a `Coordinator`. It repeatedly
-runs code specified either as `target` and `args` or by the `run_loop()`
-method.
-
-Before each run the thread checks if the coordinator has requested stop. In
-that case the looper thread terminates immediately.
-
-If the code being run raises an exception, that exception is reported to the
-coordinator and the thread terminates. The coordinator will then request all
-the other threads it coordinates to stop.
-
-You typically pass looper threads to the supervisor `Join()` method.
-- - -
-
-#### `tf.train.LooperThread.__init__(coord, timer_interval_secs, target=None, args=None, kwargs=None)` {#LooperThread.__init__}
-
-Create a LooperThread.
-
-##### Args:
-
-
-* <b>`coord`</b>: A Coordinator.
-* <b>`timer_interval_secs`</b>: Time boundaries at which to call Run(), or None
- if it should be called back to back.
-* <b>`target`</b>: Optional callable object that will be executed in the thread.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
-
-- - -
-
-#### `tf.train.LooperThread.__repr__()` {#LooperThread.__repr__}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.daemon` {#LooperThread.daemon}
-
-A boolean value indicating whether this thread is a daemon thread (True) or not (False).
-
-This must be set before start() is called, otherwise RuntimeError is
-raised. Its initial value is inherited from the creating thread; the
-main thread is not a daemon thread and therefore all threads created in
-the main thread default to daemon = False.
-
-The entire Python program exits when no alive non-daemon threads are
-left.
-
-
-- - -
-
-#### `tf.train.LooperThread.getName()` {#LooperThread.getName}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.ident` {#LooperThread.ident}
-
-Thread identifier of this thread or None if it has not been started.
-
-This is a nonzero integer. See the thread.get_ident() function. Thread
-identifiers may be recycled when a thread exits and another thread is
-created. The identifier is available even after the thread has exited.
-
-
-- - -
-
-#### `tf.train.LooperThread.isAlive()` {#LooperThread.isAlive}
-
-Return whether the thread is alive.
-
-This method returns True just before the run() method starts until just
-after the run() method terminates. The module function enumerate()
-returns a list of all alive threads.
-
-
-- - -
-
-#### `tf.train.LooperThread.isDaemon()` {#LooperThread.isDaemon}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.is_alive()` {#LooperThread.is_alive}
-
-Return whether the thread is alive.
-
-This method returns True just before the run() method starts until just
-after the run() method terminates. The module function enumerate()
-returns a list of all alive threads.
-
-
-- - -
-
-#### `tf.train.LooperThread.join(timeout=None)` {#LooperThread.join}
-
-Wait until the thread terminates.
-
-This blocks the calling thread until the thread whose join() method is
-called terminates -- either normally or through an unhandled exception
-or until the optional timeout occurs.
-
-When the timeout argument is present and not None, it should be a
-floating point number specifying a timeout for the operation in seconds
-(or fractions thereof). As join() always returns None, you must call
-isAlive() after join() to decide whether a timeout happened -- if the
-thread is still alive, the join() call timed out.
-
-When the timeout argument is not present or None, the operation will
-block until the thread terminates.
-
-A thread can be join()ed many times.
-
-join() raises a RuntimeError if an attempt is made to join the current
-thread as that would cause a deadlock. It is also an error to join() a
-thread before it has been started and attempts to do so raises the same
-exception.
-
-
-- - -
-
-#### `tf.train.LooperThread.loop(coord, timer_interval_secs, target, args=None, kwargs=None)` {#LooperThread.loop}
-
-Start a LooperThread that calls a function periodically.
-
-If `timer_interval_secs` is None the thread calls `target(args)`
-repeatedly. Otherwise `target(args)` is called every `timer_interval_secs`
-seconds. The thread terminates when a stop of the coordinator is
-requested.
-
-##### Args:
-
-
-* <b>`coord`</b>: A Coordinator.
-* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
-* <b>`target`</b>: A callable object.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Returns:
-
- The started thread.
-
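-For illustration, a sketch (`report_progress` is a hypothetical callable
-defined elsewhere):
-
-```python
-coord = tf.train.Coordinator()
-looper = tf.train.LooperThread.loop(coord, 60, target=report_progress)
-...
-coord.request_stop()
-coord.join([looper])
-```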
-
-- - -
-
-#### `tf.train.LooperThread.name` {#LooperThread.name}
-
-A string used for identification purposes only.
-
-It has no semantics. Multiple threads may be given the same name. The
-initial name is set by the constructor.
-
-
-- - -
-
-#### `tf.train.LooperThread.run()` {#LooperThread.run}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.run_loop()` {#LooperThread.run_loop}
-
-Called at 'timer_interval_secs' boundaries.
-
-
-- - -
-
-#### `tf.train.LooperThread.setDaemon(daemonic)` {#LooperThread.setDaemon}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.setName(name)` {#LooperThread.setName}
-
-
-
-
-- - -
-
-#### `tf.train.LooperThread.start()` {#LooperThread.start}
-
-Start the thread's activity.
-
-It must be called at most once per thread object. It arranges for the
-object's run() method to be invoked in a separate thread of control.
-
-This method will raise a RuntimeError if called more than once on the
-same thread object.
-
-
-- - -
-
-#### `tf.train.LooperThread.start_loop()` {#LooperThread.start_loop}
-
-Called when the thread starts.
-
-
-- - -
-
-#### `tf.train.LooperThread.stop_loop()` {#LooperThread.stop_loop}
-
-Called when the thread stops.
-
-
-
-- - -
-
-### `tf.train.add_queue_runner(qr, collection='queue_runners')` {#add_queue_runner}
-
-Adds a `QueueRunner` to a collection in the graph.
-
-When building a complex model that uses many queues it is often difficult to
-gather all the queue runners that need to be run. This convenience function
-allows you to add a queue runner to a well known collection in the graph.
-
-The companion method `start_queue_runners()` can be used to start threads for
-all the collected queue runners.
-
-##### Args:
-
-
-* <b>`qr`</b>: A `QueueRunner`.
-* <b>`collection`</b>: A `GraphKey` specifying the graph collection to add
- the queue runner to. Defaults to `GraphKeys.QUEUE_RUNNERS`.
-
-
-- - -
-
-### `tf.train.start_queue_runners(sess=None, coord=None, daemon=True, start=True, collection='queue_runners')` {#start_queue_runners}
-
-Starts all queue runners collected in the graph.
-
-This is a companion method to `add_queue_runner()`. It just starts
-threads for all queue runners collected in the graph. It returns
-the list of all threads.
-
-##### Args:
-
-
-* <b>`sess`</b>: `Session` used to run the queue ops. Defaults to the
- default session.
-* <b>`coord`</b>: Optional `Coordinator` for coordinating the started threads.
-* <b>`daemon`</b>: Whether the threads should be marked as `daemons`, meaning
- they don't block program exit.
-* <b>`start`</b>: Set to `False` to only create the threads, not start them.
-* <b>`collection`</b>: A `GraphKey` specifying the graph collection to
- get the queue runners from. Defaults to `GraphKeys.QUEUE_RUNNERS`.
-
-##### Returns:
-
- A list of threads.
-
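-Putting the two companion calls together, as a sketch; the `queue` and
-`enqueue_op` names here are assumptions:
-
-```python
-# Hypothetical setup: `queue` and `enqueue_op` are assumed to be defined.
-qr = tf.train.QueueRunner(queue, [enqueue_op] * 4)
-tf.train.add_queue_runner(qr)
-
-with tf.Session() as sess:
-  coord = tf.train.Coordinator()
-  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
-  # ...run training ops that dequeue from `queue`...
-  coord.request_stop()
-  coord.join(threads)
-```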
-
-- - -
-
-### `class tf.train.Server` {#Server}
-
-An in-process TensorFlow server, for use in distributed training.
-
-A `tf.train.Server` instance encapsulates a set of devices and a
-[`tf.Session`](../../api_docs/python/client.md#Session) target that
-can participate in distributed training. A server belongs to a
-cluster (specified by a [`tf.train.ClusterSpec`](#ClusterSpec)), and
-corresponds to a particular task in a named job. The server can
-communicate with any other server in the same cluster.
-
-- - -
-
-#### `tf.train.Server.__init__(server_or_cluster_def, job_name=None, task_index=None, protocol=None, config=None, start=True)` {#Server.__init__}
-
-Creates a new server with the given definition.
-
-The `job_name`, `task_index`, and `protocol` arguments are optional, and
-override any information provided in `server_or_cluster_def`.
-
-##### Args:
-
-
-* <b>`server_or_cluster_def`</b>: A `tf.train.ServerDef` or
- `tf.train.ClusterDef` protocol buffer, or a
- `tf.train.ClusterSpec` object, describing the server to be
- created and/or the cluster of which it is a member.
-* <b>`job_name`</b>: (Optional.) Specifies the name of the job of which the server
- is a member. Defaults to the value in `server_or_cluster_def`, if
- specified.
-* <b>`task_index`</b>: (Optional.) Specifies the task index of the server in its
- job. Defaults to the value in `server_or_cluster_def`, if specified.
- Otherwise defaults to 0 if the server's job has only one task.
-* <b>`protocol`</b>: (Optional.) Specifies the protocol to be used by the server.
- Acceptable values include `"grpc"`. Defaults to the value in
- `server_or_cluster_def`, if specified. Otherwise defaults to `"grpc"`.
-* <b>`config`</b>: (Optional.) A `tf.ConfigProto` that specifies default
- configuration options for all sessions that run on this server.
-* <b>`start`</b>: (Optional.) Boolean, indicating whether to start the server
- after creating it. Defaults to `True`.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- creating the TensorFlow server.
-
-
-- - -
-
-#### `tf.train.Server.create_local_server(config=None, start=True)` {#Server.create_local_server}
-
-Creates a new single-process cluster running on the local host.
-
-This method is a convenience wrapper for creating a
-`tf.train.Server` with a `tf.train.ServerDef` that specifies a
-single-process cluster containing a single task in a job called
-`"local"`.
-
-##### Args:
-
-
-* <b>`config`</b>: (Optional.) A `tf.ConfigProto` that specifies default
- configuration options for all sessions that run on this server.
-* <b>`start`</b>: (Optional.) Boolean, indicating whether to start the server after
- creating it. Defaults to `True`.
-
-##### Returns:
-
- A local `tf.train.Server`.
-
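-A minimal sketch (not from the original docstring):
-
-```python
-import tensorflow as tf
-
-server = tf.train.Server.create_local_server()
-with tf.Session(server.target) as sess:
-  print(sess.run(tf.constant(42)))  # 42, executed by the in-process server
-```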
-
-- - -
-
-#### `tf.train.Server.target` {#Server.target}
-
-Returns the target for a `tf.Session` to connect to this server.
-
-To create a
-[`tf.Session`](../../api_docs/python/client.md#Session) that
-connects to this server, use the following snippet:
-
-```python
-server = tf.train.Server(...)
-with tf.Session(server.target):
- # ...
-```
-
-##### Returns:
-
- A string containing a session target for this server.
-
-
-- - -
-
-#### `tf.train.Server.server_def` {#Server.server_def}
-
-Returns the `tf.train.ServerDef` for this server.
-
-##### Returns:
-
- A `tf.train.ServerDef` protocol buffer that describes the configuration
- of this server.
-
-
-
-- - -
-
-#### `tf.train.Server.start()` {#Server.start}
-
-Starts this server.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- starting the TensorFlow server.
-
-
-- - -
-
-#### `tf.train.Server.join()` {#Server.join}
-
-Blocks until the server has shut down.
-
-This method currently blocks forever.
-
-##### Raises:
-
- tf.errors.OpError: Or one of its subclasses if an error occurs while
- joining the TensorFlow server.
-
-
-
-- - -
-
-### `class tf.train.Supervisor` {#Supervisor}
-
-A training helper that checkpoints models and computes summaries.
-
-The Supervisor is a small wrapper around a `Coordinator`, a `Saver`,
-and a `SessionManager` that takes care of common needs of TensorFlow
-training programs.
-
-#### Use for a single program
-
-```python
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a Supervisor that will checkpoint the model in '/tmp/mydir'.
- sv = Supervisor(logdir='/tmp/mydir')
- # Get a TensorFlow session managed by the supervisor.
- with sv.managed_session(FLAGS.master) as sess:
- # Use the session to train the graph.
- while not sv.should_stop():
- sess.run(<my_train_op>)
-```
-
-Within the `with sv.managed_session()` block all variables in the graph have
-been initialized. In addition, a few services have been started to
-checkpoint the model and add summaries to the event log.
-
-If the program crashes and is restarted, the managed session automatically
-reinitializes variables from the most recent checkpoint.
-
-The supervisor is notified of any exception raised by one of the services.
-After an exception is raised, `should_stop()` returns `True`. In that case
-the training loop should also stop. This is why the training loop has to
-check for `sv.should_stop()`.
-
-Exceptions that indicate that the training inputs have been exhausted,
-`tf.errors.OutOfRangeError`, also cause `sv.should_stop()` to return `True`
-but are not re-raised from the `with` block: they indicate a normal
-termination.
-
-#### Use for multiple replicas
-
-To train with replicas you deploy the same program in a `Cluster`.
-One of the tasks must be identified as the *chief*: the task that handles
-initialization, checkpoints, summaries, and recovery. The other tasks
-depend on the *chief* for these services.
-
-The only change you have to make to the single-program code is to indicate
-whether the program is running as the *chief*.
-
-```python
-# Choose a task as the chief. This could be based on server_def.task_index,
-# or job_def.name, or job_def.tasks. It's entirely up to the end user.
-# But there can be only one *chief*.
-is_chief = (server_def.task_index == 0)
-server = tf.train.Server(server_def)
-
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a Supervisor that uses log directory on a shared file system.
- # Indicate if you are the 'chief'
- sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)
- # Get a Session in a TensorFlow server on the cluster.
- with sv.managed_session(server.target) as sess:
- # Use the session to train the graph.
- while not sv.should_stop():
- sess.run(<my_train_op>)
-```
-
-In the *chief* task, the `Supervisor` works exactly as in the first example
-above. In the other tasks `sv.managed_session()` waits for the model to have
-been initialized before returning a session to the training code. The
-non-chief tasks depend on the chief task for initializing the model.
-
-If one of the tasks crashes and restarts, `managed_session()`
-checks if the model is initialized. If yes, it just creates a session and
-returns it to the training code that proceeds normally. If the model needs
-to be initialized, the chief task takes care of reinitializing it; the other
-tasks just wait for the model to have been initialized.
-
-NOTE: This modified program still works fine as a single program.
-The single program marks itself as the chief.
-
-#### What `master` string to use
-
-Whether you are running on your own machine or in a cluster, you can use the
-following values for the `--master` flag:
-
-* Specifying `''` requests an in-process session that does not use RPC.
-
-* Specifying `'local'` requests a session that uses the RPC-based
- "Master interface" to run TensorFlow programs. See
- [`tf.train.Server.create_local_server()`](#Server.create_local_server) for
- details.
-
-* Specifying `'grpc://hostname:port'` requests a session that uses
- the RPC interface to a specific host, and also allows the in-process
-  master to access remote TensorFlow workers. Often, it is
-  appropriate to pass `server.target` (for some `tf.train.Server`
-  named `server`).
-
-#### Advanced use
-
-##### Launching additional services
-
-`managed_session()` launches the Checkpoint and Summary services (threads).
-If you need more services to run, you can simply launch them in the block
-controlled by `managed_session()`.
-
-Example: Start a thread to print losses. We want this thread to run
-every 60 seconds, so we launch it with `sv.loop()`.
-
- ```python
- ...
- sv = Supervisor(logdir='/tmp/mydir')
- with sv.managed_session(FLAGS.master) as sess:
- sv.loop(60, print_loss, (sess, ))
- while not sv.should_stop():
- sess.run(my_train_op)
- ```
-
-##### Launching fewer services
-
-`managed_session()` launches the "summary" and "checkpoint" threads, which use
-either the optional `summary_op` and `saver` passed to the constructor, or
-default ones created automatically by the supervisor. If you want to run
-your own summary and checkpointing logic, disable these services by passing
-`None` to the `summary_op` and `saver` parameters.
-
-Example: Create summaries manually every 100 steps in the chief.
-
- ```python
- # Create a Supervisor with no automatic summaries.
- sv = Supervisor(logdir='/tmp/mydir', is_chief=is_chief, summary_op=None)
- # As summary_op was None, managed_session() does not start the
- # summary thread.
- with sv.managed_session(FLAGS.master) as sess:
- for step in xrange(1000000):
- if sv.should_stop():
- break
- if is_chief and step % 100 == 0:
- # Create the summary every 100 chief steps.
- sv.summary_computed(sess, sess.run(my_summary_op))
- else:
- # Train normally
- sess.run(my_train_op)
- ```
-
-##### Custom model initialization
-
-`managed_session()` only supports initializing the model by running an
-`init_op` or restoring from the latest checkpoint. If you have special
-initialization needs, see how to specify a `local_init_op` when creating the
-supervisor. You can also use the `SessionManager` directly to create a
-session and check if it could be initialized automatically.
-
-- - -
-
-#### `tf.train.Supervisor.__init__(graph=None, ready_op=0, ready_for_local_init_op=0, is_chief=True, init_op=0, init_feed_dict=None, local_init_op=0, logdir=None, summary_op=0, saver=0, global_step=0, save_summaries_secs=120, save_model_secs=600, recovery_wait_secs=30, stop_grace_secs=120, checkpoint_basename='model.ckpt', session_manager=None, summary_writer=0, init_fn=None)` {#Supervisor.__init__}
-
-Create a `Supervisor`.
-
-##### Args:
-
-
-* <b>`graph`</b>: A `Graph`. The graph that the model will use. Defaults to the
- default `Graph`. The supervisor may add operations to the graph before
- creating a session, but the graph should not be modified by the caller
- after passing it to the supervisor.
-* <b>`ready_op`</b>: 1-D string `Tensor`. This tensor is evaluated by supervisors in
- `prepare_or_wait_for_session()` to check if the model is ready to use.
- The model is considered ready if it returns an empty array. Defaults to
-  the tensor returned from `tf.report_uninitialized_variables()`. If
- `None`, the model is not checked for readiness.
-* <b>`ready_for_local_init_op`</b>: 1-D string `Tensor`. This tensor is evaluated by
- supervisors in `prepare_or_wait_for_session()` to check if the model is
- ready to run the local_init_op.
- The model is considered ready if it returns an empty array. Defaults to
- the tensor returned from
- `tf.report_uninitialized_variables(tf.global_variables())`. If `None`,
- the model is not checked for readiness before running local_init_op.
-* <b>`is_chief`</b>: If True, create a chief supervisor in charge of initializing
- and restoring the model. If False, create a supervisor that relies
- on a chief supervisor for inits and restore.
-* <b>`init_op`</b>: `Operation`. Used by chief supervisors to initialize the model
- when it can not be recovered. Defaults to an `Operation` that
- initializes all variables. If `None`, no initialization is done
- automatically unless you pass a value for `init_fn`, see below.
-* <b>`init_feed_dict`</b>: A dictionary that maps `Tensor` objects to feed values.
- This feed dictionary will be used when `init_op` is evaluated.
-* <b>`local_init_op`</b>: `Operation`. Used by all supervisors to run initializations
- that should run for every new supervisor instance. By default these
- are table initializers and initializers for local variables.
- If `None`, no further per supervisor-instance initialization is
- done automatically.
-* <b>`logdir`</b>: A string. Optional path to a directory where to checkpoint the
- model and log events for the visualizer. Used by chief supervisors.
- The directory will be created if it does not exist.
-* <b>`summary_op`</b>: An `Operation` that returns a Summary for the event logs.
- Used by chief supervisors if a `logdir` was specified. Defaults to the
-  operation returned from `summary.merge_all()`. If `None`, summaries are
- not computed automatically.
-* <b>`saver`</b>: A Saver object. Used by chief supervisors if a `logdir` was
-  specified. Defaults to the `Saver` returned by `Saver()`.
- If `None`, the model is not saved automatically.
-* <b>`global_step`</b>: An integer Tensor of size 1 that counts steps. The value
- from 'global_step' is used in summaries and checkpoint filenames.
-  Defaults to the op named 'global_step' in the graph if it exists, is of
- rank 1, size 1, and of type tf.int32 or tf.int64. If `None` the global
- step is not recorded in summaries and checkpoint files. Used by chief
- supervisors if a `logdir` was specified.
-* <b>`save_summaries_secs`</b>: Number of seconds between the computation of
- summaries for the event log. Defaults to 120 seconds. Pass 0 to
- disable summaries.
-* <b>`save_model_secs`</b>: Number of seconds between the creation of model
- checkpoints. Defaults to 600 seconds. Pass 0 to disable checkpoints.
-* <b>`recovery_wait_secs`</b>: Number of seconds between checks that the model
- is ready. Used by supervisors when waiting for a chief supervisor
- to initialize or restore the model. Defaults to 30 seconds.
-* <b>`stop_grace_secs`</b>: Grace period, in seconds, given to running threads to
- stop when `stop()` is called. Defaults to 120 seconds.
-* <b>`checkpoint_basename`</b>: The basename for checkpoint saving.
-* <b>`session_manager`</b>: `SessionManager`, which manages Session creation and
- recovery. If it is `None`, a default `SessionManager` will be created
- with the set of arguments passed in for backwards compatibility.
-* <b>`summary_writer`</b>: `SummaryWriter` to use or `USE_DEFAULT`. Can be `None`
- to indicate that no summaries should be written.
-* <b>`init_fn`</b>: Optional callable used to initialize the model. Called
- after the optional `init_op` is called. The callable must accept one
- argument, the session being initialized.
-
-##### Returns:
-
- A `Supervisor`.
-
-
-- - -
-
-#### `tf.train.Supervisor.managed_session(master='', config=None, start_standard_services=True, close_summary_writer=True)` {#Supervisor.managed_session}
-
-Returns a context manager for a managed session.
-
-This context manager creates and automatically recovers a session. It
-optionally starts the standard services that handle checkpoints and
-summaries. It monitors exceptions raised from the `with` block or from the
-services and stops the supervisor as needed.
-
-The context manager is typically used as follows:
-
-```python
-def train():
- sv = tf.train.Supervisor(...)
- with sv.managed_session(<master>) as sess:
- for step in xrange(..):
- if sv.should_stop():
- break
- sess.run(<my training op>)
- ...do other things needed at each training step...
-```
-
-An exception raised from the `with` block or one of the service threads is
-raised again when the block exits. This is done after stopping all threads
-and closing the session. For example, an `AbortedError` exception, raised
-in case of preemption of one of the workers in a distributed model, is
-raised again when the block exits.
-
-If you want to retry the training loop in case of preemption you can do it
-as follows:
-
-```python
-def main(...):
-  while True:
- try:
- train()
-    except tf.errors.AbortedError:
- pass
-```
-
-As a special case, exceptions used for control flow, such as
-`OutOfRangeError` which reports that input queues are exhausted, are not
-raised again from the `with` block: they indicate a clean termination of
-the training loop and are considered normal termination.
-
-##### Args:
-
-
-* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
- constructor for how this is interpreted.
-* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
- Passed as-is to create the session.
-* <b>`start_standard_services`</b>: Whether to start the standard services,
- such as checkpoint, summary and step counter.
-* <b>`close_summary_writer`</b>: Whether to close the summary writer when
- closing the session. Defaults to True.
-
-##### Returns:
-
- A context manager that yields a `Session` restored from the latest
-  checkpoint or initialized from scratch if no checkpoint exists. The
- session is closed when the `with` block exits.
-
-
-- - -
-
-#### `tf.train.Supervisor.prepare_or_wait_for_session(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.prepare_or_wait_for_session}
-
-Make sure the model is ready to be used.
-
-Create a session on 'master', recovering or initializing the model as
-needed, or wait for a session to be ready. If running as the chief
-and `start_standard_services` is set to `True`, also call the session
-manager to start the standard services.
-
-##### Args:
-
-
-* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
- constructor for how this is interpreted.
-* <b>`config`</b>: Optional ConfigProto proto used to configure the session,
- which is passed as-is to create the session.
-* <b>`wait_for_checkpoint`</b>: Whether we should wait for the availability of a
- checkpoint before creating Session. Defaults to False.
-* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
-* <b>`start_standard_services`</b>: Whether to start the standard services and the
- queue runners.
-
-##### Returns:
-
- A Session object that can be used to drive the model.
-
-
-- - -
-
-#### `tf.train.Supervisor.start_standard_services(sess)` {#Supervisor.start_standard_services}
-
-Start the standard services for 'sess'.
-
-This starts services in the background. The services started depend
-on the parameters to the constructor and may include:
-
- - A Summary thread computing summaries every save_summaries_secs.
- - A Checkpoint thread saving the model every save_model_secs.
-  - A StepCounter thread that measures step time.
-
-##### Args:
-
-
-* <b>`sess`</b>: A Session.
-
-##### Returns:
-
- A list of threads that are running the standard services. You can use
- the Supervisor's Coordinator to join these threads with:
-    sv.coord.join(<list of threads>)
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If called with a non-chief Supervisor.
-* <b>`ValueError`</b>: If no `logdir` was passed to the constructor as the
- services need a log directory.
-
-
-- - -
-
-#### `tf.train.Supervisor.start_queue_runners(sess, queue_runners=None)` {#Supervisor.start_queue_runners}
-
-Start threads for `QueueRunners`.
-
-Note that the queue runners collected in the graph key `QUEUE_RUNNERS`
-are already started automatically when you create a session with the
-supervisor, so unless you have non-collected queue runners to start
-you do not need to call this explicitly.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session`.
-* <b>`queue_runners`</b>: A list of `QueueRunners`. If not specified, we'll use the
- list of queue runners gathered in the graph under the key
- `GraphKeys.QUEUE_RUNNERS`.
-
-##### Returns:
-
- The list of threads started for the `QueueRunners`.
-
-
-- - -
-
-#### `tf.train.Supervisor.summary_computed(sess, summary, global_step=None)` {#Supervisor.summary_computed}
-
-Indicate that a summary was computed.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session` object.
-* <b>`summary`</b>: A Summary proto, or a string holding a serialized summary proto.
-* <b>`global_step`</b>: Int. global step this summary is associated with. If `None`,
- it will try to fetch the current step.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if 'summary' is not a Summary proto or a string.
-* <b>`RuntimeError`</b>: if the Supervisor was created without a `logdir`.
-
-
-
-- - -
-
-#### `tf.train.Supervisor.stop(threads=None, close_summary_writer=True)` {#Supervisor.stop}
-
-Stop the services and the coordinator.
-
-This does not close the session.
-
-##### Args:
-
-
-* <b>`threads`</b>: Optional list of threads to join with the coordinator. If
- `None`, defaults to the threads running the standard services, the
- threads started for `QueueRunners`, and the threads started by the
- `loop()` method. To wait on additional threads, pass the
- list in this parameter.
-* <b>`close_summary_writer`</b>: Whether to close the `summary_writer`. Defaults to
- `True` if the summary writer was created by the supervisor, `False`
- otherwise.
-
-
-- - -
-
-#### `tf.train.Supervisor.request_stop(ex=None)` {#Supervisor.request_stop}
-
-Request that the coordinator stop the threads.
-
-See `Coordinator.request_stop()`.
-
-##### Args:
-
-
-* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
- `sys.exc_info()`. If this is the first call to `request_stop()` the
- corresponding exception is recorded and re-raised from `join()`.
-
-
-- - -
-
-#### `tf.train.Supervisor.should_stop()` {#Supervisor.should_stop}
-
-Check if the coordinator was told to stop.
-
-See `Coordinator.should_stop()`.
-
-##### Returns:
-
- True if the coordinator was told to stop, False otherwise.
-
-
-- - -
-
-#### `tf.train.Supervisor.stop_on_exception()` {#Supervisor.stop_on_exception}
-
-Context handler to stop the supervisor when an exception is raised.
-
-See `Coordinator.stop_on_exception()`.
-
-##### Returns:
-
- A context handler.
-
-
-- - -
-
-#### `tf.train.Supervisor.wait_for_stop()` {#Supervisor.wait_for_stop}
-
-Block waiting for the coordinator to stop.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.Supervisor.Loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.Loop}
-
-Start a LooperThread that calls a function periodically.
-
-If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)`
-repeatedly. Otherwise it calls it every `timer_interval_secs`
-seconds. The thread terminates when a stop is requested.
-
-The started thread is added to the list of threads managed by the supervisor
-so it does not need to be passed to the `stop()` method.
-
-##### Args:
-
-
-* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
-* <b>`target`</b>: A callable object.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Returns:
-
- The started thread.
-
-
-- - -
-
-#### `tf.train.Supervisor.PrepareSession(master='', config=None, wait_for_checkpoint=False, max_wait_secs=7200, start_standard_services=True)` {#Supervisor.PrepareSession}
-
-Make sure the model is ready to be used.
-
-Create a session on 'master', recovering or initializing the model as
-needed, or wait for a session to be ready. If running as the chief
-and `start_standard_services` is set to `True`, also call the session
-manager to start the standard services.
-
-##### Args:
-
-
-* <b>`master`</b>: name of the TensorFlow master to use. See the `tf.Session`
- constructor for how this is interpreted.
-* <b>`config`</b>: Optional ConfigProto proto used to configure the session,
- which is passed as-is to create the session.
-* <b>`wait_for_checkpoint`</b>: Whether we should wait for the availability of a
- checkpoint before creating Session. Defaults to False.
-* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
-* <b>`start_standard_services`</b>: Whether to start the standard services and the
- queue runners.
-
-##### Returns:
-
- A Session object that can be used to drive the model.
-
-
-- - -
-
-#### `tf.train.Supervisor.RequestStop(ex=None)` {#Supervisor.RequestStop}
-
-Request that the coordinator stop the threads.
-
-See `Coordinator.request_stop()`.
-
-##### Args:
-
-
-* <b>`ex`</b>: Optional `Exception`, or Python `exc_info` tuple as returned by
- `sys.exc_info()`. If this is the first call to `request_stop()` the
- corresponding exception is recorded and re-raised from `join()`.
-
-
-- - -
-
-#### `tf.train.Supervisor.ShouldStop()` {#Supervisor.ShouldStop}
-
-Check if the coordinator was told to stop.
-
-See `Coordinator.should_stop()`.
-
-##### Returns:
-
- True if the coordinator was told to stop, False otherwise.
-
-
-- - -
-
-#### `tf.train.Supervisor.StartQueueRunners(sess, queue_runners=None)` {#Supervisor.StartQueueRunners}
-
-Start threads for `QueueRunners`.
-
-Note that the queue runners collected in the graph key `QUEUE_RUNNERS`
-are already started automatically when you create a session with the
-supervisor, so unless you have non-collected queue runners to start
-you do not need to call this explicitly.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session`.
-* <b>`queue_runners`</b>: A list of `QueueRunners`. If not specified, we'll use the
- list of queue runners gathered in the graph under the key
- `GraphKeys.QUEUE_RUNNERS`.
-
-##### Returns:
-
- The list of threads started for the `QueueRunners`.
-
-
-- - -
-
-#### `tf.train.Supervisor.StartStandardServices(sess)` {#Supervisor.StartStandardServices}
-
-Start the standard services for 'sess'.
-
-This starts services in the background. The services started depend
-on the parameters to the constructor and may include:
-
- - A Summary thread computing summaries every save_summaries_secs.
- - A Checkpoint thread saving the model every save_model_secs.
-  - A StepCounter thread that measures step time.
-
-##### Args:
-
-
-* <b>`sess`</b>: A Session.
-
-##### Returns:
-
- A list of threads that are running the standard services. You can use
- the Supervisor's Coordinator to join these threads with:
-    sv.coord.join(<list of threads>)
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If called with a non-chief Supervisor.
-* <b>`ValueError`</b>: If no `logdir` was passed to the constructor as the
- services need a log directory.
-
-
-- - -
-
-#### `tf.train.Supervisor.Stop(threads=None, close_summary_writer=True)` {#Supervisor.Stop}
-
-Stop the services and the coordinator.
-
-This does not close the session.
-
-##### Args:
-
-
-* <b>`threads`</b>: Optional list of threads to join with the coordinator. If
- `None`, defaults to the threads running the standard services, the
- threads started for `QueueRunners`, and the threads started by the
- `loop()` method. To wait on additional threads, pass the
- list in this parameter.
-* <b>`close_summary_writer`</b>: Whether to close the `summary_writer`. Defaults to
- `True` if the summary writer was created by the supervisor, `False`
- otherwise.
-
-
-- - -
-
-#### `tf.train.Supervisor.StopOnException()` {#Supervisor.StopOnException}
-
-Context handler to stop the supervisor when an exception is raised.
-
-See `Coordinator.stop_on_exception()`.
-
-##### Returns:
-
- A context handler.
-
-
-- - -
-
-#### `tf.train.Supervisor.SummaryComputed(sess, summary, global_step=None)` {#Supervisor.SummaryComputed}
-
-Indicate that a summary was computed.
-
-##### Args:
-
-
-* <b>`sess`</b>: A `Session` object.
-* <b>`summary`</b>: A Summary proto, or a string holding a serialized summary proto.
-* <b>`global_step`</b>: Int. global step this summary is associated with. If `None`,
- it will try to fetch the current step.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: if 'summary' is not a Summary proto or a string.
-* <b>`RuntimeError`</b>: if the Supervisor was created without a `logdir`.
-
-
-- - -
-
-#### `tf.train.Supervisor.WaitForStop()` {#Supervisor.WaitForStop}
-
-Block waiting for the coordinator to stop.
-
-
-- - -
-
-#### `tf.train.Supervisor.coord` {#Supervisor.coord}
-
-Return the Coordinator used by the Supervisor.
-
-The Coordinator can be useful if you want to run multiple threads
-during your training.
-
-##### Returns:
-
- A Coordinator object.
-
-
-- - -
-
-#### `tf.train.Supervisor.global_step` {#Supervisor.global_step}
-
-Return the global_step Tensor used by the supervisor.
-
-##### Returns:
-
- An integer Tensor for the global_step.
-
-
-- - -
-
-#### `tf.train.Supervisor.init_feed_dict` {#Supervisor.init_feed_dict}
-
-Return the feed dictionary used when evaluating the `init_op`.
-
-##### Returns:
-
- A feed dictionary or `None`.
-
-
-- - -
-
-#### `tf.train.Supervisor.init_op` {#Supervisor.init_op}
-
-Return the Init Op used by the supervisor.
-
-##### Returns:
-
- An Op or `None`.
-
-
-- - -
-
-#### `tf.train.Supervisor.is_chief` {#Supervisor.is_chief}
-
-Return True if this is a chief supervisor.
-
-##### Returns:
-
- A bool.
-
-
-- - -
-
-#### `tf.train.Supervisor.loop(timer_interval_secs, target, args=None, kwargs=None)` {#Supervisor.loop}
-
-Start a LooperThread that calls a function periodically.
-
-If `timer_interval_secs` is None the thread calls `target(*args, **kwargs)`
-repeatedly. Otherwise it calls it every `timer_interval_secs`
-seconds. The thread terminates when a stop is requested.
-
-The started thread is added to the list of threads managed by the supervisor
-so it does not need to be passed to the `stop()` method.
-
-##### Args:
-
-
-* <b>`timer_interval_secs`</b>: Number. Time boundaries at which to call `target`.
-* <b>`target`</b>: A callable object.
-* <b>`args`</b>: Optional arguments to pass to `target` when calling it.
-* <b>`kwargs`</b>: Optional keyword arguments to pass to `target` when calling it.
-
-##### Returns:
-
- The started thread.
-
-
-- - -
-
-#### `tf.train.Supervisor.ready_for_local_init_op` {#Supervisor.ready_for_local_init_op}
-
-
-
-
-- - -
-
-#### `tf.train.Supervisor.ready_op` {#Supervisor.ready_op}
-
-Return the Ready Op used by the supervisor.
-
-##### Returns:
-
- An Op or `None`.
-
-
-- - -
-
-#### `tf.train.Supervisor.save_model_secs` {#Supervisor.save_model_secs}
-
-Return the delay between checkpoints.
-
-##### Returns:
-
-  A number of seconds.
-
-
-- - -
-
-#### `tf.train.Supervisor.save_path` {#Supervisor.save_path}
-
-Return the save path used by the supervisor.
-
-##### Returns:
-
- A string.
-
-
-- - -
-
-#### `tf.train.Supervisor.save_summaries_secs` {#Supervisor.save_summaries_secs}
-
-Return the delay between summary computations.
-
-##### Returns:
-
-  A number of seconds.
-
-
-- - -
-
-#### `tf.train.Supervisor.saver` {#Supervisor.saver}
-
-Return the Saver used by the supervisor.
-
-##### Returns:
-
- A Saver object.
-
-
-- - -
-
-#### `tf.train.Supervisor.session_manager` {#Supervisor.session_manager}
-
-Return the SessionManager used by the Supervisor.
-
-##### Returns:
-
- A SessionManager object.
-
-
-- - -
-
-#### `tf.train.Supervisor.summary_op` {#Supervisor.summary_op}
-
-Return the Summary Tensor used by the chief supervisor.
-
-##### Returns:
-
- A string Tensor for the summary or `None`.
-
-
-- - -
-
-#### `tf.train.Supervisor.summary_writer` {#Supervisor.summary_writer}
-
-Return the SummaryWriter used by the chief supervisor.
-
-##### Returns:
-
- A SummaryWriter.
-
-
-
-- - -
-
-### `class tf.train.SessionManager` {#SessionManager}
-
-Training helper that restores from checkpoint and creates session.
-
-This class is a small wrapper that takes care of session creation and
-checkpoint recovery. It also provides functions to facilitate
-coordination among multiple training threads or processes.
-
-* Checkpointing trained variables as the training progresses.
-* Initializing variables on startup, restoring them from the most recent
-  checkpoint after a crash, or waiting for checkpoints to become available.
-
-### Usage:
-
-```python
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a SessionManager that will checkpoint the model in '/tmp/mydir'.
- sm = SessionManager()
- sess = sm.prepare_session(master, init_op, saver, checkpoint_dir)
- # Use the session to train the graph.
- while True:
- sess.run(<my_train_op>)
-```
-
-`prepare_session()` initializes or restores a model. It requires `init_op`
-and `saver` as arguments.
-
-A second process could wait for the model to be ready by doing the following:
-
-```python
-with tf.Graph().as_default():
- ...add operations to the graph...
- # Create a SessionManager that will wait for the model to become ready.
- sm = SessionManager()
- sess = sm.wait_for_session(master)
- # Use the session to train the graph.
- while True:
- sess.run(<my_train_op>)
-```
-
-`wait_for_session()` waits for a model to be initialized by other processes.
-- - -
-
-#### `tf.train.SessionManager.__init__(local_init_op=None, ready_op=None, ready_for_local_init_op=None, graph=None, recovery_wait_secs=30)` {#SessionManager.__init__}
-
-Creates a SessionManager.
-
-The `local_init_op` is an `Operation` that is always run after a new session
-is created. If `None`, this step is skipped.
-
-The `ready_op` is an `Operation` used to check if the model is ready. The
-model is considered ready if that operation returns an empty 1D string
-tensor. If the operation returns a non-empty 1D string tensor, the elements
-are concatenated and used to indicate to the user why the model is not
-ready.
-
-The `ready_for_local_init_op` is an `Operation` used to check if the model
-is ready to run local_init_op. The model is considered ready if that
-operation returns an empty 1D string tensor. If the operation returns a
-non-empty 1D string tensor, the elements are concatenated and used to indicate
-to the user why the model is not ready.
-
-If `ready_op` is `None`, the model is not checked for readiness.
-
-`recovery_wait_secs` is the number of seconds between checks that
-the model is ready. It is used by processes to wait for a model to
-be initialized or restored. Defaults to 30 seconds.
-
-##### Args:
-
-
-* <b>`local_init_op`</b>: An `Operation` run immediately after session creation.
- Usually used to initialize tables and local variables.
-* <b>`ready_op`</b>: An `Operation` to check if the model is initialized.
-* <b>`ready_for_local_init_op`</b>: An `Operation` to check if the model is ready
- to run local_init_op.
-* <b>`graph`</b>: The `Graph` that the model will use.
-* <b>`recovery_wait_secs`</b>: Seconds between checks for the model to be ready.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `ready_for_local_init_op` is not `None` but
-  `local_init_op` is `None`.
-
-
-- - -
-
-#### `tf.train.SessionManager.prepare_session(master, init_op=None, saver=None, checkpoint_dir=None, checkpoint_filename_with_path=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None, init_feed_dict=None, init_fn=None)` {#SessionManager.prepare_session}
-
-Creates a `Session`. Makes sure the model is ready to be used.
-
-Creates a `Session` on 'master'. If a `saver` object is passed in, and
-`checkpoint_dir` points to a directory containing valid checkpoint
-files, then it will try to recover the model from checkpoint. If
-no checkpoint files are available, and `wait_for_checkpoint` is
-`True`, then the process will check every `recovery_wait_secs`,
-up to `max_wait_secs`, for recovery to succeed.
-
-If the model cannot be recovered successfully then it is initialized by
-either running the provided `init_op`, or calling the provided `init_fn`.
-The `local_init_op` is also run after `init_op` and `init_fn`, regardless of
-whether the model was recovered successfully, but only if
-`ready_for_local_init_op` passes.
-
-It is an error if the model cannot be recovered and no `init_op`
-or `init_fn` or `local_init_op` are passed.
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`init_op`</b>: Optional `Operation` used to initialize the model.
-* <b>`saver`</b>: A `Saver` object used to restore a model.
-* <b>`checkpoint_dir`</b>: Path to the checkpoint files. The latest checkpoint in the
- dir will be used to restore.
-* <b>`checkpoint_filename_with_path`</b>: Full file name path to the checkpoint file.
-* <b>`wait_for_checkpoint`</b>: Whether to wait for checkpoint to become available.
-* <b>`max_wait_secs`</b>: Maximum time to wait for checkpoints to become available.
-* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
-* <b>`init_feed_dict`</b>: Optional dictionary that maps `Tensor` objects to feed
- values. This feed dictionary is passed to the session `run()` call when
- running the init op.
-* <b>`init_fn`</b>: Optional callable used to initialize the model. Called after the
- optional `init_op` is called. The callable must accept one argument,
- the session being initialized.
-
-##### Returns:
-
- A `Session` object that can be used to drive the model.
-
-##### Raises:
-
-
-* <b>`RuntimeError`</b>: If the model cannot be initialized or recovered.
-* <b>`ValueError`</b>: If both `checkpoint_dir` and
-  `checkpoint_filename_with_path` are set.
-
-
-- - -
-
-#### `tf.train.SessionManager.recover_session(master, saver=None, checkpoint_dir=None, checkpoint_filename_with_path=None, wait_for_checkpoint=False, max_wait_secs=7200, config=None)` {#SessionManager.recover_session}
-
-Creates a `Session`, recovering if possible.
-
-Creates a new session on 'master'. If the session is not initialized
-and can be recovered from a checkpoint, recover it.
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`saver`</b>: A `Saver` object used to restore a model.
-* <b>`checkpoint_dir`</b>: Path to the checkpoint files. The latest checkpoint in the
- dir will be used to restore.
-* <b>`checkpoint_filename_with_path`</b>: Full file name path to the checkpoint file.
-* <b>`wait_for_checkpoint`</b>: Whether to wait for checkpoint to become available.
-* <b>`max_wait_secs`</b>: Maximum time to wait for checkpoints to become available.
-* <b>`config`</b>: Optional `ConfigProto` proto used to configure the session.
-
-##### Returns:
-
- A pair (sess, initialized) where 'initialized' is `True` if
- the session could be recovered and initialized, `False` otherwise.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If both `checkpoint_dir` and
-  `checkpoint_filename_with_path` are set.
-
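-A common pattern, sketched here under the assumption that `master`, `saver`,
-and `init_op` were built with the graph, is to fall back to initialization
-when recovery fails:
-
-```python
-sm = tf.train.SessionManager()
-sess, initialized = sm.recover_session(master, saver=saver,
-                                       checkpoint_dir='/tmp/mydir')
-if not initialized:
-  # No usable checkpoint was found; initialize the model from scratch.
-  sess.run(init_op)
-```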
-
-- - -
-
-#### `tf.train.SessionManager.wait_for_session(master, config=None, max_wait_secs=inf)` {#SessionManager.wait_for_session}
-
-Creates a new `Session` and waits for model to be ready.
-
-Creates a new `Session` on 'master'. Waits for the model to be
-initialized or recovered from a checkpoint. It's expected that
-another thread or process will make the model ready, and that this
-is intended to be used by threads/processes that participate in a
-distributed training configuration where a different thread/process
-is responsible for initializing or recovering the model being trained.
-
-NB: The amount of time this method waits for the session is bounded
-by `max_wait_secs`. By default, this function will wait indefinitely.
-
-##### Args:
-
-
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: Optional ConfigProto proto used to configure the session.
-* <b>`max_wait_secs`</b>: Maximum time to wait for the session to become available.
-
-##### Returns:
-
-  A `Session`. May be `None` if the operation exceeds the timeout
-  specified by `config.operation_timeout_in_ms`.
-
-##### Raises:
-
-  tf.errors.DeadlineExceededError: if the session is not available after
-  `max_wait_secs`.
-
-
-
-- - -
-
-### `class tf.train.ClusterSpec` {#ClusterSpec}
-
-Represents a cluster as a set of "tasks", organized into "jobs".
-
-A `tf.train.ClusterSpec` represents the set of processes that
-participate in a distributed TensorFlow computation. Every
-[`tf.train.Server`](#Server) is constructed in a particular cluster.
-
-To create a cluster with two jobs and five tasks, you specify the
-mapping from job names to lists of network addresses (typically
-hostname-port pairs).
-
-```python
-cluster = tf.train.ClusterSpec({"worker": ["worker0.example.com:2222",
- "worker1.example.com:2222",
- "worker2.example.com:2222"],
- "ps": ["ps0.example.com:2222",
- "ps1.example.com:2222"]})
-```
-
-Each job may also be specified as a sparse mapping from task indices
-to network addresses. This enables a server to be configured without
-needing to know the identity of (for example) all other worker
-tasks:
-
-```python
-cluster = tf.train.ClusterSpec({"worker": {1: "worker1.example.com:2222"},
- "ps": ["ps0.example.com:2222",
- "ps1.example.com:2222"]})
-```
-
-- - -
-
-#### `tf.train.ClusterSpec.as_cluster_def()` {#ClusterSpec.as_cluster_def}
-
-Returns a `tf.train.ClusterDef` protocol buffer based on this cluster.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.as_dict()` {#ClusterSpec.as_dict}
-
-Returns a dictionary from job names to their tasks.
-
-For each job, if the task index space is dense, the corresponding
-value will be a list of network addresses; otherwise it will be a
-dictionary mapping (sparse) task indices to the corresponding
-addresses.
-
-##### Returns:
-
- A dictionary mapping job names to lists or dictionaries
- describing the tasks in those jobs.
-
-
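-A short sketch of the dense/sparse distinction (addresses are placeholders):
-
-```python
-dense = tf.train.ClusterSpec({"worker": ["w0:2222", "w1:2222"]})
-sparse = tf.train.ClusterSpec({"worker": {1: "w1:2222"}})
-print(dense.as_dict())   # {'worker': ['w0:2222', 'w1:2222']}
-print(sparse.as_dict())  # {'worker': {1: 'w1:2222'}}
-```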
-
-#### Other Methods
-- - -
-
-#### `tf.train.ClusterSpec.__bool__()` {#ClusterSpec.__bool__}
-
-
-
-
-- - -
-
-#### `tf.train.ClusterSpec.__eq__(other)` {#ClusterSpec.__eq__}
-
-
-
-
-- - -
-
-#### `tf.train.ClusterSpec.__init__(cluster)` {#ClusterSpec.__init__}
-
-Creates a `ClusterSpec`.
-
-##### Args:
-
-
-* <b>`cluster`</b>: A dictionary mapping one or more job names to (i) a
- list of network addresses, or (ii) a dictionary mapping integer
- task indices to network addresses; or a `tf.train.ClusterDef`
- protocol buffer.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If `cluster` is not a dictionary mapping strings to lists
- of strings, and not a `tf.train.ClusterDef` protobuf.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.__ne__(other)` {#ClusterSpec.__ne__}
-
-
-
-
-- - -
-
-#### `tf.train.ClusterSpec.__nonzero__()` {#ClusterSpec.__nonzero__}
-
-
-
-
-- - -
-
-#### `tf.train.ClusterSpec.job_tasks(job_name)` {#ClusterSpec.job_tasks}
-
-Returns a mapping from task ID to address in the given job.
-
-NOTE: For backwards compatibility, this method returns a list. If
-the given job was defined with a sparse set of task indices, the
-length of this list may not reflect the number of tasks defined in
-this job. Use the [`num_tasks()`](#ClusterSpec.num_tasks) method
-to find the number of tasks defined in a particular job.
-
-##### Args:
-
-
-* <b>`job_name`</b>: The string name of a job in this cluster.
-
-##### Returns:
-
- A list of task addresses, where the index in the list
- corresponds to the task index of each task. The list may contain
- `None` if the job was defined with a sparse set of task indices.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.jobs` {#ClusterSpec.jobs}
-
-Returns a list of job names in this cluster.
-
-##### Returns:
-
- A list of strings, corresponding to the names of jobs in this cluster.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.num_tasks(job_name)` {#ClusterSpec.num_tasks}
-
-Returns the number of tasks defined in the given job.
-
-##### Args:
-
-
-* <b>`job_name`</b>: The string name of a job in this cluster.
-
-##### Returns:
-
- The number of tasks defined in the given job.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.task_address(job_name, task_index)` {#ClusterSpec.task_address}
-
-Returns the address of the given task in the given job.
-
-##### Args:
-
-
-* <b>`job_name`</b>: The string name of a job in this cluster.
-* <b>`task_index`</b>: A non-negative integer.
-
-##### Returns:
-
- The address of the given task in the given job.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster,
- or no task with index `task_index` is defined in that job.
-
-
-- - -
-
-#### `tf.train.ClusterSpec.task_indices(job_name)` {#ClusterSpec.task_indices}
-
-Returns a list of valid task indices in the given job.
-
-##### Args:
-
-
-* <b>`job_name`</b>: The string name of a job in this cluster.
-
-##### Returns:
-
- A list of valid task indices in the given job.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `job_name` does not name a job in this cluster.
-
-
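-A short sketch that exercises the accessors above on the two-job cluster from
-the class overview:
-
-```python
-cluster = tf.train.ClusterSpec({"worker": {1: "worker1.example.com:2222"},
-                                "ps": ["ps0.example.com:2222",
-                                       "ps1.example.com:2222"]})
-cluster.jobs                       # ['ps', 'worker'] (order may vary)
-cluster.num_tasks("ps")            # 2
-cluster.task_indices("worker")     # [1]
-cluster.task_address("worker", 1)  # 'worker1.example.com:2222'
-```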
-
-- - -
-
-### `tf.train.replica_device_setter(ps_tasks=0, ps_device='/job:ps', worker_device='/job:worker', merge_devices=True, cluster=None, ps_ops=None, ps_strategy=None)` {#replica_device_setter}
-
-Return a `device function` to use when building a Graph for replicas.
-
-Device functions are used in a `with tf.device(device_function):` statement to
-automatically assign devices to `Operation` objects as they are constructed.
-Device constraints are added from the innermost context first, working
-outwards; the merging behavior adds constraints only to fields that are not
-yet set by an inner context. Currently the fields are (job, task, cpu/gpu).
-
-If `cluster` is `None` and `ps_tasks` is 0, the returned function is a no-op.
-Otherwise, the value of `ps_tasks` is derived from `cluster`.
-
-By default, only Variable ops are placed on ps tasks, and the placement
-strategy is round-robin over all ps tasks. A custom `ps_strategy` may be used
-to do more intelligent placement, such as
-`tf.contrib.training.GreedyLoadBalancingStrategy`.
-
-For example,
-
-```python
-# To build a cluster with two ps jobs on hosts ps0 and ps1, and 3 worker
-# jobs on hosts worker0, worker1 and worker2.
-cluster_spec = {
- "ps": ["ps0:2222", "ps1:2222"],
- "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
-with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)):
- # Build your graph
- v1 = tf.Variable(...) # assigned to /job:ps/task:0
- v2 = tf.Variable(...) # assigned to /job:ps/task:1
- v3 = tf.Variable(...) # assigned to /job:ps/task:0
-# Run compute
-```
-
-##### Args:
-
-
-* <b>`ps_tasks`</b>: Number of tasks in the `ps` job. Ignored if `cluster` is
- provided.
-* <b>`ps_device`</b>: String. Device of the `ps` job. If empty, no `ps` job is
-  used. Defaults to `/job:ps`.
-* <b>`worker_device`</b>: String. Device of the `worker` job. If empty, no
-  `worker` job is used.
-* <b>`merge_devices`</b>: `Boolean`. If `True`, merges device specifications
-  rather than overriding them: a device field is set only if it is not already
-  set by an inner context.
-* <b>`cluster`</b>: `ClusterDef` proto or `ClusterSpec`.
-* <b>`ps_ops`</b>: List of strings representing `Operation` types that need to be
- placed on `ps` devices. If `None`, defaults to `["Variable"]`.
-* <b>`ps_strategy`</b>: A callable invoked for every ps `Operation` (i.e. matched by
- `ps_ops`), that takes the `Operation` and returns the ps task index to
- use. If `None`, defaults to a round-robin strategy across all `ps`
- devices.
-
-##### Returns:
-
- A function to pass to `tf.device()`.
-
-##### Raises:
-
-* <b>`TypeError`</b>: If `cluster` is not a dictionary or `ClusterDef` protocol
-  buffer, or if `ps_strategy` is provided but not a callable.
-
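-As a hedged sketch of a custom `ps_strategy` (the class name here is
-illustrative, not part of the API; it reuses `cluster_spec` from the example
-above), any callable mapping an `Operation` to a ps task index works. This
-one simply round-robins by hand:
-
-```python
-class RoundRobinStrategy(object):
-  """Assigns ps ops to tasks 0..num_tasks-1 in rotation."""
-
-  def __init__(self, num_tasks):
-    self._num_tasks = num_tasks
-    self._next_task = 0
-
-  def __call__(self, op):
-    task = self._next_task
-    self._next_task = (self._next_task + 1) % self._num_tasks
-    return task
-
-setter = tf.train.replica_device_setter(cluster=cluster_spec,
-                                        ps_strategy=RoundRobinStrategy(2))
-```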
-
-- - -
-
-### `tf.train.MonitoredTrainingSession(master='', is_chief=True, checkpoint_dir=None, scaffold=None, hooks=None, chief_only_hooks=None, save_checkpoint_secs=600, save_summaries_steps=100, save_summaries_secs=None, config=None, stop_grace_period_secs=120)` {#MonitoredTrainingSession}
-
-Creates a `MonitoredSession` for training.
-
-For a chief, this utility sets a proper session initializer/restorer. It also
-creates hooks related to checkpoint and summary saving. For workers, this
-utility sets a proper session creator which waits for the chief to
-initialize or restore the session.
-
-
-##### Args:
-
-
-* <b>`master`</b>: `String` the TensorFlow master to use.
-* <b>`is_chief`</b>: If `True`, it will take care of initialization and recovery the
- underlying TensorFlow session. If `False`, it will wait on a chief to
- initialize or recover the TensorFlow session.
-* <b>`checkpoint_dir`</b>: A string. Optional path to a directory where to restore
- variables.
-* <b>`scaffold`</b>: A `Scaffold` used for gathering or building supportive ops. If
- not specified, a default one is created. It's used to finalize the graph.
-* <b>`hooks`</b>: Optional list of `SessionRunHook` objects.
-* <b>`chief_only_hooks`</b>: list of `SessionRunHook` objects. These hooks are
-  activated if `is_chief==True` and ignored otherwise.
-* <b>`save_checkpoint_secs`</b>: The frequency, in seconds, that a checkpoint is saved
- using a default checkpoint saver. If `save_checkpoint_secs` is set to
- `None`, then the default checkpoint saver isn't used.
-* <b>`save_summaries_steps`</b>: The frequency, in number of global steps, that the
- summaries are written to disk using a default summary saver. If both
- `save_summaries_steps` and `save_summaries_secs` are set to `None`, then
- the default summary saver isn't used.
-* <b>`save_summaries_secs`</b>: The frequency, in secs, that the summaries are written
- to disk using a default summary saver. If both `save_summaries_steps` and
- `save_summaries_secs` are set to `None`, then the default summary saver
- isn't used.
-* <b>`config`</b>: an instance of `tf.ConfigProto` used to configure the
-  session. It's the `config` argument of the constructor of `tf.Session`.
-* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
- `close()` has been called.
-
-##### Returns:
-
- A `MonitoredSession` object.
-
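-A minimal sketch, assuming a `train_op` has already been added to the graph:
-
-```python
-with tf.train.MonitoredTrainingSession(
-    is_chief=True, checkpoint_dir='/tmp/mydir') as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```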
-
-- - -
-
-### `class tf.train.MonitoredSession` {#MonitoredSession}
-
-Session-like object that handles initialization, recovery and hooks.
-
-Example usage:
-
-```python
-saver_hook = CheckpointSaverHook(...)
-summary_hook = SummarySaverHook(...)
-with MonitoredSession(session_creator=ChiefSessionCreator(...),
- hooks=[saver_hook, summary_hook]) as sess:
- while not sess.should_stop():
- sess.run(train_op)
-```
-
-Initialization: At creation time the monitored session does the following
-things in the given order:
-
-* calls `hook.begin()` for each given hook
-* finalizes the graph via `scaffold.finalize()`
-* creates the session
-* initializes the model via initialization ops provided by `Scaffold`
-* restores variables if a checkpoint exists
-* launches queue runners
-
-Run: When `run()` is called, the monitored session does the following things:
-
-* calls `hook.before_run()`
-* calls TensorFlow `session.run()` with merged fetches and feed_dict
-* calls `hook.after_run()`
-* returns the result of `session.run()` asked for by the user
-* if `AbortedError` occurs, it recovers or reinitializes the session before
- executing the run() call again
-
-
-Exit: When `close()` is called, the monitored session does the following things in order:
-
-* calls `hook.end()`
-* closes the queue runners and the session
-* suppresses the `OutOfRange` error, which indicates that all inputs have
-  been processed, if the monitored session is used as a context
-
-How to set `tf.Session` arguments:
-
-* In most cases you can set session arguments as follows:
-
-```python
-MonitoredSession(
- session_creator=ChiefSessionCreator(master=..., config=...))
-```
-
-* In a distributed setting, for a non-chief worker, you can use the following:
-
-```python
-MonitoredSession(
- session_creator=WorkerSessionCreator(master=..., config=...))
-```
-
-See `MonitoredTrainingSession` for an example usage based on chief or worker.
-
-Args:
-  session_creator: A factory object to create a session. Typically a
-    `ChiefSessionCreator`, which is the default one.
-  hooks: An iterable of `SessionRunHook` objects.
-
-Returns:
- A MonitoredSession object.
-- - -
-
-#### `tf.train.MonitoredSession.__enter__()` {#MonitoredSession.__enter__}
-
-
-
-
-- - -
-
-#### `tf.train.MonitoredSession.__exit__(exception_type, exception_value, traceback)` {#MonitoredSession.__exit__}
-
-
-
-
-- - -
-
-#### `tf.train.MonitoredSession.__init__(session_creator=None, hooks=None, stop_grace_period_secs=120)` {#MonitoredSession.__init__}
-
-
-
-
-- - -
-
-#### `tf.train.MonitoredSession.close()` {#MonitoredSession.close}
-
-
-
-
-- - -
-
-#### `tf.train.MonitoredSession.graph` {#MonitoredSession.graph}
-
-The graph that was launched in this session.
-
-
-- - -
-
-#### `tf.train.MonitoredSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#MonitoredSession.run}
-
-Run ops in the monitored session.
-
-This method is completely compatible with the `tf.Session.run()` method.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as `tf.Session.run()`.
-* <b>`feed_dict`</b>: Same as `tf.Session.run()`.
-* <b>`options`</b>: Same as `tf.Session.run()`.
-* <b>`run_metadata`</b>: Same as `tf.Session.run()`.
-
-##### Returns:
-
- Same as `tf.Session.run()`.
-
-
-- - -
-
-#### `tf.train.MonitoredSession.should_stop()` {#MonitoredSession.should_stop}
-
-
-
-
-
-- - -
-
-### `class tf.train.SingularMonitoredSession` {#SingularMonitoredSession}
-
-Session-like object that handles initialization, restoring, and hooks.
-
-Please note that this utility is not recommended for distributed settings.
-For distributed settings, please use `tf.train.MonitoredSession`. The
-differences between `MonitoredSession` and `SingularMonitoredSession` are:
-* `MonitoredSession` handles `AbortedError` for distributed settings,
- but `SingularMonitoredSession` does not.
-* `MonitoredSession` can be created in `chief` or `worker` modes.
- `SingularMonitoredSession` is always created as `chief`.
-* You can access the raw `tf.Session` object used by
-  `SingularMonitoredSession`, whereas in `MonitoredSession` the raw session is
-  private. This can be used:
- - To `run` without hooks.
- - To save and restore.
-* All other functionality is identical.
-
-Example usage:
-
-```python
-saver_hook = CheckpointSaverHook(...)
-summary_hook = SummarySaverHook(...)
-with SingularMonitoredSession(hooks=[saver_hook, summary_hook]) as sess:
- while not sess.should_stop():
- sess.run(train_op)
-```
-
-Initialization: At creation time the hooked session does the following things
-in the given order:
-
-* calls `hook.begin()` for each given hook
-* finalizes the graph via `scaffold.finalize()`
-* creates the session
-* initializes the model via initialization ops provided by `Scaffold`
-* restores variables if a checkpoint exists
-* launches queue runners
-
-Run: When `run()` is called, the hooked session does the following things:
-
-* calls `hook.before_run()`
-* calls TensorFlow `session.run()` with merged fetches and feed_dict
-* calls `hook.after_run()`
-* returns the result of `session.run()` asked for by the user
-
-Exit: When `close()` is called, the hooked session does the following things in order:
-
-* calls `hook.end()`
-* closes the queue runners and the session
-* suppresses the `OutOfRange` error, which indicates that all inputs have
-  been processed, if the `SingularMonitoredSession` is used as a context.
-- - -
-
-#### `tf.train.SingularMonitoredSession.__enter__()` {#SingularMonitoredSession.__enter__}
-
-
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.__exit__(exception_type, exception_value, traceback)` {#SingularMonitoredSession.__exit__}
-
-
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.__init__(hooks=None, scaffold=None, master='', config=None, checkpoint_dir=None, stop_grace_period_secs=120)` {#SingularMonitoredSession.__init__}
-
-Creates a SingularMonitoredSession.
-
-##### Args:
-
-
-* <b>`hooks`</b>: An iterable of `SessionRunHook` objects.
-* <b>`scaffold`</b>: A `Scaffold` used for gathering or building supportive ops. If
- not specified a default one is created. It's used to finalize the graph.
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: `ConfigProto` proto used to configure the session.
-* <b>`checkpoint_dir`</b>: A string. Optional path to a directory where to restore
- variables.
-* <b>`stop_grace_period_secs`</b>: Number of seconds given to threads to stop after
- `close()` has been called.
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.close()` {#SingularMonitoredSession.close}
-
-
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.graph` {#SingularMonitoredSession.graph}
-
-The graph that was launched in this session.
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.raw_session()` {#SingularMonitoredSession.raw_session}
-
-Returns the underlying `tf.Session` object.
-
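-A hedged sketch of where this is useful (`train_op` and `debug_op` are
-assumed to exist in the graph): runs on the raw session bypass the hooks.
-
-```python
-with tf.train.SingularMonitoredSession() as sess:
-  sess.run(train_op)                # goes through the hooks
-  sess.raw_session().run(debug_op)  # bypasses the hooks
-```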
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.run(fetches, feed_dict=None, options=None, run_metadata=None)` {#SingularMonitoredSession.run}
-
-Run ops in the monitored session.
-
-This method is completely compatible with the `tf.Session.run()` method.
-
-##### Args:
-
-
-* <b>`fetches`</b>: Same as `tf.Session.run()`.
-* <b>`feed_dict`</b>: Same as `tf.Session.run()`.
-* <b>`options`</b>: Same as `tf.Session.run()`.
-* <b>`run_metadata`</b>: Same as `tf.Session.run()`.
-
-##### Returns:
-
- Same as `tf.Session.run()`.
-
-
-- - -
-
-#### `tf.train.SingularMonitoredSession.should_stop()` {#SingularMonitoredSession.should_stop}
-
-
-
-
-
-- - -
-
-### `class tf.train.Scaffold` {#Scaffold}
-
-Structure to create or gather pieces commonly needed to train a model.
-
-When you build a model for training you usually need ops to initialize
-variables, a `Saver` to checkpoint them, an op to collect summaries for
-the visualizer, and so on.
-
-Various libraries built on top of the core TensorFlow library take care of
-creating some or all of these pieces and storing them in well known
-collections in the graph. The `Scaffold` class helps pick these pieces from
-the graph collections, creating and adding them to the collections if needed.
-
-If you call the scaffold constructor without any arguments, it will pick
-pieces from the collections, creating default ones if needed when
-`scaffold.finalize()` is called. You can pass arguments to the constructor to
-provide your own pieces. Pieces that you pass to the constructor are not
-added to the graph collections.
-
-The following pieces are directly accessible as attributes of the `Scaffold`
-object:
-
-* `saver`: A `tf.Saver` object taking care of saving the variables. Picked
- from and stored into the `SAVERS` collection in the graph by default.
-* `init_op`: An op to run to initialize the variables. Picked from and
- stored into the `INIT_OP` collection in the graph by default.
-* `ready_op`: An op to verify that the variables are initialized. Picked
- from and stored into the `READY_OP` collection in the graph by default.
-* `ready_for_local_init_op`: An op to verify that global state has been
- initialized and it is alright to run `local_init_op`. Picked from and
- stored into the `READY_FOR_LOCAL_INIT_OP` collection in the graph by
- default. This is needed when the initialization of local variables depends
- on the values of global variables.
-* `local_init_op`: An op to initialize the local variables. Picked
- from and stored into the `LOCAL_INIT_OP` collection in the graph by default.
-* `summary_op`: An op to run and merge the summaries in the graph. Picked
- from and stored into the `SUMMARY_OP` collection in the graph by default.
-* `global_step`: A tensor containing the global step counter. Picked
- from and stored into the `GLOBAL_STEP` collection in the graph by default.
-
-You can also pass the following additional pieces to the constructor:
-
-* `init_feed_dict`: A session feed dictionary that should be used when
- running the init op.
-* `init_fn`: A callable to run after the init op to perform additional
- initializations. The callable will be called as
- `init_fn(scaffold, session)`.
-- - -
-
-#### `tf.train.Scaffold.__init__(init_op=None, init_feed_dict=None, init_fn=None, ready_op=None, ready_for_local_init_op=None, local_init_op=None, summary_op=None, saver=None)` {#Scaffold.__init__}
-
-Create a scaffold.
-
-##### Args:
-
-
-* <b>`init_op`</b>: Optional op for initializing variables.
-* <b>`init_feed_dict`</b>: Optional session feed dictionary to use when running the
- init_op.
-* <b>`init_fn`</b>: Optional function to use to initialize the model after running
- the init_op. Will be called as `init_fn(scaffold, session)`.
-* <b>`ready_op`</b>: Optional op to verify that the variables are initialized. Must
- return an empty 1D string tensor when the variables are initialized, or
- a non-empty 1D string tensor listing the names of the non-initialized
- variables.
-* <b>`ready_for_local_init_op`</b>: Optional op to verify that the global variables
- are initialized and `local_init_op` can be run. Must return an empty
- 1D string tensor when the global variables are initialized, or a
- non-empty 1D string tensor listing the names of the non-initialized
- global variables.
-* <b>`local_init_op`</b>: Optional op to initialize local variables.
-* <b>`summary_op`</b>: Optional op to gather all summaries. Must return a scalar
- string tensor containing a serialized `Summary` proto.
-* <b>`saver`</b>: Optional `tf.Saver` object to use to save and restore variables.
-
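-A minimal sketch, assuming `assign_op` and `train_op` were built with the
-graph; every piece not passed explicitly is picked from the collections or
-created by default:
-
-```python
-scaffold = tf.train.Scaffold(
-    init_fn=lambda scaffold, sess: sess.run(assign_op))
-with tf.train.MonitoredTrainingSession(scaffold=scaffold) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```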
-
-- - -
-
-#### `tf.train.Scaffold.finalize()` {#Scaffold.finalize}
-
-Creates operations if needed and finalizes the graph.
-
-
-- - -
-
-#### `tf.train.Scaffold.get_or_default(arg_name, collection_key, default_constructor)` {#Scaffold.get_or_default}
-
-Get from cache or create a default operation.
-
-
-- - -
-
-#### `tf.train.Scaffold.init_feed_dict` {#Scaffold.init_feed_dict}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.init_fn` {#Scaffold.init_fn}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.init_op` {#Scaffold.init_op}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.local_init_op` {#Scaffold.local_init_op}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.ready_for_local_init_op` {#Scaffold.ready_for_local_init_op}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.ready_op` {#Scaffold.ready_op}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.saver` {#Scaffold.saver}
-
-
-
-
-- - -
-
-#### `tf.train.Scaffold.summary_op` {#Scaffold.summary_op}
-
-
-
-
-
-- - -
-
-### `class tf.train.SessionCreator` {#SessionCreator}
-
-A factory for `tf.Session`.
-- - -
-
-#### `tf.train.SessionCreator.create_session()` {#SessionCreator.create_session}
-
-
-
-
-
-- - -
-
-### `class tf.train.ChiefSessionCreator` {#ChiefSessionCreator}
-
-Creates a tf.Session for a chief.
-- - -
-
-#### `tf.train.ChiefSessionCreator.__init__(scaffold=None, master='', config=None, checkpoint_dir=None, checkpoint_filename_with_path=None)` {#ChiefSessionCreator.__init__}
-
-Initializes a chief session creator.
-
-##### Args:
-
-
-* <b>`scaffold`</b>: A `Scaffold` used for gathering or building supportive ops. If
- not specified, a default one is created. It's used to finalize the graph.
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: `ConfigProto` proto used to configure the session.
-* <b>`checkpoint_dir`</b>: A string. Optional path to a directory where to restore
- variables.
-* <b>`checkpoint_filename_with_path`</b>: Full file name path to the checkpoint file.
-
-
-- - -
-
-#### `tf.train.ChiefSessionCreator.create_session()` {#ChiefSessionCreator.create_session}
-
-
-
-
-
-- - -
-
-### `class tf.train.WorkerSessionCreator` {#WorkerSessionCreator}
-
-Creates a tf.Session for a worker.
-- - -
-
-#### `tf.train.WorkerSessionCreator.__init__(scaffold=None, master='', config=None)` {#WorkerSessionCreator.__init__}
-
-Initializes a worker session creator.
-
-##### Args:
-
-
-* <b>`scaffold`</b>: A `Scaffold` used for gathering or building supportive ops. If
- not specified, a default one is created. It's used to finalize the graph.
-* <b>`master`</b>: `String` representation of the TensorFlow master to use.
-* <b>`config`</b>: `ConfigProto` proto used to configure the session.
-
-
-- - -
-
-#### `tf.train.WorkerSessionCreator.create_session()` {#WorkerSessionCreator.create_session}
-
-
-
-
-
-- - -
-
-### `tf.train.summary_iterator(path)` {#summary_iterator}
-
-An iterator for reading `Event` protocol buffers from an event file.
-
-You can use this function to read events written to an event file. It returns
-a Python iterator that yields `Event` protocol buffers.
-
-Example: Print the contents of an events file.
-
-```python
-for e in tf.train.summary_iterator(path_to_events_file):
- print(e)
-```
-
-Example: Print selected summary values.
-
-```python
-# This example supposes that the events file contains summaries with a
-# summary value tag 'loss'. These could have been added by calling
-# `add_summary()`, passing the output of a scalar summary op created
-# with: `tf.summary.scalar('loss', loss_tensor)`.
-for e in tf.train.summary_iterator(path_to_events_file):
- for v in e.summary.value:
- if v.tag == 'loss':
- print(v.simple_value)
-```
-
-See the protocol buffer definitions of
-[Event](https://www.tensorflow.org/code/tensorflow/core/util/event.proto)
-and
-[Summary](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto)
-for more information about their attributes.
-
-##### Args:
-
-
-* <b>`path`</b>: The path to an event file created by a `SummaryWriter`.
-
-##### Yields:
-
- `Event` protocol buffers.
-
-
-- - -
-
-### `class tf.train.SessionRunHook` {#SessionRunHook}
-
-Hook to extend calls to MonitoredSession.run().
-- - -
-
-#### `tf.train.SessionRunHook.after_create_session(session, coord)` {#SessionRunHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.SessionRunHook.after_run(run_context, run_values)` {#SessionRunHook.after_run}
-
-Called after each call to run().
-
-The `run_values` argument contains the results of the ops/tensors requested
-by `before_run()`.
-
-The `run_context` argument is the same one sent to the `before_run` call.
-`run_context.request_stop()` can be called to stop the iteration.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-* <b>`run_values`</b>: A SessionRunValues object.
-
-
-- - -
-
-#### `tf.train.SessionRunHook.before_run(run_context)` {#SessionRunHook.before_run}
-
-Called before each call to run().
-
-You can return from this call a `SessionRunArgs` object indicating ops or
-tensors to add to the upcoming `run()` call. These ops/tensors will be run
-together with the ops/tensors passed to the original `run()` call. The run
-args you return can also contain feeds to be added to the `run()` call.
-
-The `run_context` argument is a `SessionRunContext` that provides
-information about the upcoming `run()` call: the originally requested
-op/tensors, the TensorFlow Session.
-
-At this point the graph is finalized and you cannot add ops.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-
-##### Returns:
-
- None or a `SessionRunArgs` object.
-
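-As a hedged illustration of the `before_run`/`after_run` contract, a custom
-hook might fetch an extra tensor and act on its value. The `LossThresholdHook`
-name and its `loss_tensor`/`max_loss` parameters are hypothetical:
-
-```python
-import tensorflow as tf
-
-class LossThresholdHook(tf.train.SessionRunHook):
-  """Requests a stop once the fetched loss exceeds a threshold."""
-
-  def __init__(self, loss_tensor, max_loss=1e3):
-    self._loss_tensor = loss_tensor
-    self._max_loss = max_loss
-
-  def before_run(self, run_context):
-    # Ask the upcoming run() call to also fetch the loss tensor.
-    return tf.train.SessionRunArgs(self._loss_tensor)
-
-  def after_run(self, run_context, run_values):
-    # run_values.results has the same shape as the requested fetches.
-    if run_values.results > self._max_loss:
-      run_context.request_stop()
-```
-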
-
-- - -
-
-#### `tf.train.SessionRunHook.begin()` {#SessionRunHook.begin}
-
-Called once before using the session.
-
-When called, the default graph is the one that will be launched in the
-session. The hook can modify the graph by adding new operations to it.
-After the `begin()` call the graph will be finalized and the other callbacks
-can no longer modify the graph. A second call of `begin()` on the same
-graph should not change the graph.
-
-
-- - -
-
-#### `tf.train.SessionRunHook.end(session)` {#SessionRunHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
-
-- - -
-
-### `class tf.train.SessionRunArgs` {#SessionRunArgs}
-
-Represents arguments to be added to a `Session.run()` call.
-
-Args:
- fetches: Exactly like the `fetches` argument to `Session.run()`.
- Can be a single tensor or op, a list of fetches, or a dictionary
- of fetches. For example:
- fetches = global_step_tensor
- fetches = [train_op, summary_op, global_step_tensor]
- fetches = {'step': global_step_tensor, 'summ': summary_op}
- Note that this can recurse as expected:
- fetches = {'step': global_step_tensor,
- 'ops': [train_op, check_nan_op]}
- feed_dict: Exactly like the `feed_dict` argument to `Session.run()`.
- options: Exactly like the `options` argument to `Session.run()`, i.e., a
- config_pb2.RunOptions proto.
-- - -
-
-#### `tf.train.SessionRunArgs.__getnewargs__()` {#SessionRunArgs.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.__getstate__()` {#SessionRunArgs.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.__new__(cls, fetches, feed_dict=None, options=None)` {#SessionRunArgs.__new__}
-
-
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.__repr__()` {#SessionRunArgs.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.feed_dict` {#SessionRunArgs.feed_dict}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.fetches` {#SessionRunArgs.fetches}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.train.SessionRunArgs.options` {#SessionRunArgs.options}
-
-Alias for field number 2
-
-
-
-- - -
-
-### `class tf.train.SessionRunContext` {#SessionRunContext}
-
-Provides information about the `session.run()` call being made.
-
-Provides information about the original request to the `Session.run()`
-function. `SessionRunHook` objects can stop the loop by calling
-`request_stop()` on the `run_context`. In the future we may use this object
-to add more information about the run without changing the Hook API.
-- - -
-
-#### `tf.train.SessionRunContext.__init__(original_args, session)` {#SessionRunContext.__init__}
-
-Initializes SessionRunContext.
-
-
-- - -
-
-#### `tf.train.SessionRunContext.original_args` {#SessionRunContext.original_args}
-
-A `SessionRunArgs` object holding the original arguments of `run()`.
-
-If the user called `MonitoredSession.run(fetches=a, feed_dict=b)`, then this
-field is equal to `SessionRunArgs(a, b)`.
-
-##### Returns:
-
- A `SessionRunArgs` object
-
-
-- - -
-
-#### `tf.train.SessionRunContext.request_stop()` {#SessionRunContext.request_stop}
-
-Sets the stop-requested field.
-
-Hooks can use this function to request that iteration stop.
-`MonitoredSession` checks whether this has been called.
-
-
-- - -
-
-#### `tf.train.SessionRunContext.session` {#SessionRunContext.session}
-
-A TensorFlow session object which will execute the `run`.
-
-
-- - -
-
-#### `tf.train.SessionRunContext.stop_requested` {#SessionRunContext.stop_requested}
-
-Returns whether a stop is requested or not.
-
-If true, `MonitoredSession` stops iterations.
-
-##### Returns:
-
- A `bool`
-
-
-
-- - -
-
-### `class tf.train.SessionRunValues` {#SessionRunValues}
-
-Contains the results of `Session.run()`.
-
-In the future we may use this object to add more information about the result
-of a run without changing the Hook API.
-
-Args:
- results: The return values from `Session.run()` corresponding to the fetches
- attribute of the returned `SessionRunArgs`. Note that this has the same
- shape as the `SessionRunArgs` fetches. For example:
- fetches = global_step_tensor
- => results = nparray(int)
- fetches = [train_op, summary_op, global_step_tensor]
- => results = [None, nparray(string), nparray(int)]
- fetches = {'step': global_step_tensor, 'summ': summary_op}
- => results = {'step': nparray(int), 'summ': nparray(string)}
- options: `RunOptions` from the `Session.run()` call.
- run_metadata: `RunMetadata` from the `Session.run()` call.
-- - -
-
-#### `tf.train.SessionRunValues.__getnewargs__()` {#SessionRunValues.__getnewargs__}
-
-Return self as a plain tuple. Used by copy and pickle.
-
-
-- - -
-
-#### `tf.train.SessionRunValues.__getstate__()` {#SessionRunValues.__getstate__}
-
-Exclude the OrderedDict from pickling
-
-
-- - -
-
-#### `tf.train.SessionRunValues.__new__(_cls, results, options, run_metadata)` {#SessionRunValues.__new__}
-
-Create new instance of SessionRunValues(results, options, run_metadata)
-
-
-- - -
-
-#### `tf.train.SessionRunValues.__repr__()` {#SessionRunValues.__repr__}
-
-Return a nicely formatted representation string
-
-
-- - -
-
-#### `tf.train.SessionRunValues.options` {#SessionRunValues.options}
-
-Alias for field number 1
-
-
-- - -
-
-#### `tf.train.SessionRunValues.results` {#SessionRunValues.results}
-
-Alias for field number 0
-
-
-- - -
-
-#### `tf.train.SessionRunValues.run_metadata` {#SessionRunValues.run_metadata}
-
-Alias for field number 2
-
-
-
-- - -
-
-### `class tf.train.LoggingTensorHook` {#LoggingTensorHook}
-
-Prints the given tensors once every N local steps or once every N seconds.
-
-The tensors will be printed to the log, with `INFO` severity.
-- - -
-
-#### `tf.train.LoggingTensorHook.__init__(tensors, every_n_iter=None, every_n_secs=None, formatter=None)` {#LoggingTensorHook.__init__}
-
-Initializes a LoggingTensorHook monitor.
-
-##### Args:
-
-
-* <b>`tensors`</b>: `dict` that maps string-valued tags to tensors/tensor names,
- or `iterable` of tensors/tensor names.
-* <b>`every_n_iter`</b>: `int`, print the values of `tensors` once every N local
- steps taken on the current worker.
-* <b>`every_n_secs`</b>: `int` or `float`, print the values of `tensors` once every N
- seconds. Exactly one of `every_n_iter` and `every_n_secs` should be
- provided.
-* <b>`formatter`</b>: function, takes a dict of `tag`->`Tensor` and returns a string.
- If `None`, uses a default formatter that prints all tensors.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: if `every_n_iter` is non-positive.
-
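-A hedged usage sketch; the `loss`, `global_step`, and `train_op` names are
-assumed to exist in the caller's graph:
-
-```python
-logging_hook = tf.train.LoggingTensorHook(
-    tensors={'loss': loss, 'step': global_step},
-    every_n_iter=100)
-
-with tf.train.MonitoredTrainingSession(hooks=[logging_hook]) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```
-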
-
-- - -
-
-#### `tf.train.LoggingTensorHook.after_create_session(session, coord)` {#LoggingTensorHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.after_run(run_context, run_values)` {#LoggingTensorHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.before_run(run_context)` {#LoggingTensorHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.begin()` {#LoggingTensorHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.LoggingTensorHook.end(session)` {#LoggingTensorHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
-
-- - -
-
-### `class tf.train.StopAtStepHook` {#StopAtStepHook}
-
-Monitor to request stop at a specified step.
-- - -
-
-#### `tf.train.StopAtStepHook.__init__(num_steps=None, last_step=None)` {#StopAtStepHook.__init__}
-
-Create a StopAtStepHook.
-
-This hook requests stop after either a number of steps have been
-executed or a last step has been reached. Only one of the two options can be
-specified.
-
-If `num_steps` is specified, it indicates the number of steps to execute
-after `begin()` is called. If instead `last_step` is specified, it
-indicates the last step we want to execute, as passed to the `after_run()`
-call.
-
-##### Args:
-
-
-* <b>`num_steps`</b>: Number of steps to execute.
-* <b>`last_step`</b>: Step after which to stop.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If one of the arguments is invalid.
-
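-For example, a sketch of stopping once the global step reaches 10000. This
-assumes a global step variable already exists in the graph (see
-`tf.train.get_global_step`) and that the hypothetical `train_op` increments it:
-
-```python
-stop_hook = tf.train.StopAtStepHook(last_step=10000)
-
-with tf.train.MonitoredTrainingSession(hooks=[stop_hook]) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```
-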
-
-- - -
-
-#### `tf.train.StopAtStepHook.after_create_session(session, coord)` {#StopAtStepHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.after_run(run_context, run_values)` {#StopAtStepHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.before_run(run_context)` {#StopAtStepHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.begin()` {#StopAtStepHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.StopAtStepHook.end(session)` {#StopAtStepHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
-
-- - -
-
-### `class tf.train.CheckpointSaverHook` {#CheckpointSaverHook}
-
-Saves checkpoints every N steps or seconds.
-- - -
-
-#### `tf.train.CheckpointSaverHook.__init__(checkpoint_dir, save_secs=None, save_steps=None, saver=None, checkpoint_basename='model.ckpt', scaffold=None, listeners=None)` {#CheckpointSaverHook.__init__}
-
-Initializes a CheckpointSaverHook monitor.
-
-##### Args:
-
-
-* <b>`checkpoint_dir`</b>: `str`, base directory for the checkpoint files.
-* <b>`save_secs`</b>: `int`, save every N secs.
-* <b>`save_steps`</b>: `int`, save every N steps.
-* <b>`saver`</b>: `Saver` object, used for saving.
-* <b>`checkpoint_basename`</b>: `str`, base name for the checkpoint files.
-* <b>`scaffold`</b>: `Scaffold`, use to get saver object.
-* <b>`listeners`</b>: List of `CheckpointSaverListener` subclass instances.
- Used for callbacks that run immediately after the corresponding
- CheckpointSaverHook callbacks, only in steps where the
- CheckpointSaverHook was triggered.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If neither or both of `save_steps` and `save_secs` are set.
-* <b>`ValueError`</b>: If neither or both of `saver` and `scaffold` are set.
-
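-A hedged sketch of saving every 600 seconds; the directory path and `train_op`
-are illustrative, and a global step variable is assumed to exist in the graph.
-Exactly one of `saver` and `scaffold` is supplied, per the constraint above:
-
-```python
-scaffold = tf.train.Scaffold()
-saver_hook = tf.train.CheckpointSaverHook(
-    checkpoint_dir='/tmp/my-model', save_secs=600, scaffold=scaffold)
-
-with tf.train.MonitoredTrainingSession(scaffold=scaffold,
-                                       hooks=[saver_hook]) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```
-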
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.after_create_session(session, coord)` {#CheckpointSaverHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.after_run(run_context, run_values)` {#CheckpointSaverHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.before_run(run_context)` {#CheckpointSaverHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.begin()` {#CheckpointSaverHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.CheckpointSaverHook.end(session)` {#CheckpointSaverHook.end}
-
-
-
-
-
-- - -
-
-### `tf.train.NewCheckpointReader(filepattern)` {#NewCheckpointReader}
-
-
-
-
-- - -
-
-### `class tf.train.StepCounterHook` {#StepCounterHook}
-
-Steps per second monitor.
-- - -
-
-#### `tf.train.StepCounterHook.__init__(every_n_steps=100, every_n_secs=None, output_dir=None, summary_writer=None)` {#StepCounterHook.__init__}
-
-
-
-
-- - -
-
-#### `tf.train.StepCounterHook.after_create_session(session, coord)` {#StepCounterHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.StepCounterHook.after_run(run_context, run_values)` {#StepCounterHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.StepCounterHook.before_run(run_context)` {#StepCounterHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.StepCounterHook.begin()` {#StepCounterHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.StepCounterHook.end(session)` {#StepCounterHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
-
-- - -
-
-### `class tf.train.NanLossDuringTrainingError` {#NanLossDuringTrainingError}
-
-
-- - -
-
-#### `tf.train.NanLossDuringTrainingError.__str__()` {#NanLossDuringTrainingError.__str__}
-
-
-
-
-
-- - -
-
-### `class tf.train.NanTensorHook` {#NanTensorHook}
-
-NaN loss monitor.
-
-Monitors the loss and stops training if the loss is NaN.
-Can either fail with an exception or just stop training.
-- - -
-
-#### `tf.train.NanTensorHook.__init__(loss_tensor, fail_on_nan_loss=True)` {#NanTensorHook.__init__}
-
-Initializes a NanTensorHook monitor.
-
-##### Args:
-
-
-* <b>`loss_tensor`</b>: `Tensor`, the loss tensor.
-* <b>`fail_on_nan_loss`</b>: `bool`, whether to raise exception when loss is NaN.
-
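-A hedged usage sketch; `loss` and `train_op` are assumed tensors/ops in the
-caller's graph:
-
-```python
-nan_hook = tf.train.NanTensorHook(loss, fail_on_nan_loss=False)
-
-with tf.train.MonitoredTrainingSession(hooks=[nan_hook]) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```
-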
-
-- - -
-
-#### `tf.train.NanTensorHook.after_create_session(session, coord)` {#NanTensorHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.NanTensorHook.after_run(run_context, run_values)` {#NanTensorHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.NanTensorHook.before_run(run_context)` {#NanTensorHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.NanTensorHook.begin()` {#NanTensorHook.begin}
-
-Called once before using the session.
-
-When called, the default graph is the one that will be launched in the
-session. The hook can modify the graph by adding new operations to it.
-After the `begin()` call the graph will be finalized and the other callbacks
-can no longer modify the graph. A second call of `begin()` on the same
-graph should not change the graph.
-
-
-- - -
-
-#### `tf.train.NanTensorHook.end(session)` {#NanTensorHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
-
-- - -
-
-### `class tf.train.SummarySaverHook` {#SummarySaverHook}
-
-Saves summaries every N steps.
-- - -
-
-#### `tf.train.SummarySaverHook.__init__(save_steps=None, save_secs=None, output_dir=None, summary_writer=None, scaffold=None, summary_op=None)` {#SummarySaverHook.__init__}
-
-Initializes a `SummarySaverHook` monitor.
-
-##### Args:
-
-
-* <b>`save_steps`</b>: `int`, save summaries every N steps. Exactly one of
- `save_secs` and `save_steps` should be set.
-* <b>`save_secs`</b>: `int`, save summaries every N seconds.
-* <b>`output_dir`</b>: `string`, the directory to save the summaries to. Only used
- if no `summary_writer` is supplied.
-* <b>`summary_writer`</b>: `SummaryWriter`. If `None` and an `output_dir` was passed,
- one will be created accordingly.
-* <b>`scaffold`</b>: `Scaffold` to get summary_op if it's not provided.
-* <b>`summary_op`</b>: `Tensor` of type `string` containing the serialized `Summary`
- protocol buffer, or a list of such `Tensor`s. These are most likely
- outputs of TF summary methods like `tf.summary.scalar` or
- `tf.summary.merge_all`. A single tensor can be passed in directly; more
- than one must be passed in as a list.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If neither or both of `scaffold` and `summary_op` are set.
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.after_create_session(session, coord)` {#SummarySaverHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.after_run(run_context, run_values)` {#SummarySaverHook.after_run}
-
-
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.before_run(run_context)` {#SummarySaverHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.begin()` {#SummarySaverHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.SummarySaverHook.end(session=None)` {#SummarySaverHook.end}
-
-
-
-
-
-- - -
-
-### `class tf.train.GlobalStepWaiterHook` {#GlobalStepWaiterHook}
-
-Delays execution until the global step reaches `wait_until_step`.
-
-This hook delays execution until the global step reaches `wait_until_step`.
-It is used to gradually start workers in distributed settings. One example
-usage would be setting `wait_until_step=int(K*log(task_id+1))`, assuming
-that task_id=0 is the chief.
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.__init__(wait_until_step)` {#GlobalStepWaiterHook.__init__}
-
-Creates a GlobalStepWaiterHook.
-
-##### Args:
-
-
-* <b>`wait_until_step`</b>: an `int`, the global step to wait for before proceeding.
-
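-Following the staggered-start formula from the class description, a sketch in
-which `K` and `task_id` are assumed to come from the cluster configuration:
-
-```python
-import math
-
-wait_hook = tf.train.GlobalStepWaiterHook(
-    wait_until_step=int(K * math.log(task_id + 1)))
-```
-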
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.after_create_session(session, coord)` {#GlobalStepWaiterHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.after_run(run_context, run_values)` {#GlobalStepWaiterHook.after_run}
-
-Called after each call to run().
-
-The `run_values` argument contains the results of the ops/tensors requested
-by `before_run()`.
-
-The `run_context` argument is the same one sent to the `before_run` call.
-`run_context.request_stop()` can be called to stop the iteration.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-* <b>`run_values`</b>: A SessionRunValues object.
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.before_run(run_context)` {#GlobalStepWaiterHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.begin()` {#GlobalStepWaiterHook.begin}
-
-
-
-
-- - -
-
-#### `tf.train.GlobalStepWaiterHook.end(session)` {#GlobalStepWaiterHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
-
-- - -
-
-### `class tf.train.FinalOpsHook` {#FinalOpsHook}
-
-A run hook which evaluates `Tensors` at the end of a session.
-- - -
-
-#### `tf.train.FinalOpsHook.__init__(final_ops, final_ops_feed_dict=None)` {#FinalOpsHook.__init__}
-
-Constructs the FinalOpsHook with ops to run at the end of the session.
-
-##### Args:
-
-
-* <b>`final_ops`</b>: A single `Tensor`, a list of `Tensors` or a dictionary of
- names to `Tensors`.
-* <b>`final_ops_feed_dict`</b>: A feed dictionary to use when running
- `final_ops`.
-
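-A hedged sketch of collecting a final metric when the session ends; the
-`accuracy_op` and `eval_op` names are assumptions:
-
-```python
-final_hook = tf.train.FinalOpsHook(final_ops={'accuracy': accuracy_op})
-
-with tf.train.MonitoredTrainingSession(hooks=[final_hook]) as sess:
-  while not sess.should_stop():
-    sess.run(eval_op)
-
-print(final_hook.final_ops_values)  # e.g. {'accuracy': 0.93}
-```
-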
-
-- - -
-
-#### `tf.train.FinalOpsHook.after_create_session(session, coord)` {#FinalOpsHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.after_run(run_context, run_values)` {#FinalOpsHook.after_run}
-
-Called after each call to run().
-
-The `run_values` argument contains the results of the ops/tensors requested
-by `before_run()`.
-
-The `run_context` argument is the same one sent to the `before_run` call.
-`run_context.request_stop()` can be called to stop the iteration.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-* <b>`run_values`</b>: A SessionRunValues object.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.before_run(run_context)` {#FinalOpsHook.before_run}
-
-Called before each call to run().
-
-You can return from this call a `SessionRunArgs` object indicating ops or
-tensors to add to the upcoming `run()` call. These ops/tensors will be run
-together with the ops/tensors passed to the original `run()` call. The run
-args you return can also contain feeds to be added to the `run()` call.
-
-The `run_context` argument is a `SessionRunContext` that provides
-information about the upcoming `run()` call: the originally requested
-op/tensors, the TensorFlow Session.
-
-At this point the graph is finalized and you cannot add ops.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-
-##### Returns:
-
- None or a `SessionRunArgs` object.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.begin()` {#FinalOpsHook.begin}
-
-Called once before using the session.
-
-When called, the default graph is the one that will be launched in the
-session. The hook can modify the graph by adding new operations to it.
-After the `begin()` call the graph will be finalized and the other callbacks
-can no longer modify the graph. A second call of `begin()` on the same
-graph should not change the graph.
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.end(session)` {#FinalOpsHook.end}
-
-
-
-
-- - -
-
-#### `tf.train.FinalOpsHook.final_ops_values` {#FinalOpsHook.final_ops_values}
-
-
-
-
-
-- - -
-
-### `class tf.train.FeedFnHook` {#FeedFnHook}
-
-Runs `feed_fn` and sets the `feed_dict` accordingly.
-- - -
-
-#### `tf.train.FeedFnHook.__init__(feed_fn)` {#FeedFnHook.__init__}
-
-Constructs the FeedFnHook with the given `feed_fn`.
-
-##### Args:
-
-
-* <b>`feed_fn`</b>: function that takes no arguments and returns a `dict` to feed.
-
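-A hedged sketch of feeding batches from an in-memory source; the `x`
-placeholder, `next_batch` function, and `train_op` are assumptions:
-
-```python
-def feed_fn():
-  # Returns the feed dict for the next run() call.
-  return {x: next_batch()}
-
-feed_hook = tf.train.FeedFnHook(feed_fn)
-
-with tf.train.MonitoredTrainingSession(hooks=[feed_hook]) as sess:
-  while not sess.should_stop():
-    sess.run(train_op)
-```
-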
-
-- - -
-
-#### `tf.train.FeedFnHook.after_create_session(session, coord)` {#FeedFnHook.after_create_session}
-
-Called when a new TensorFlow session is created.
-
-This is called to signal the hooks that a new session has been created. This
-has two essential differences from the situation in which `begin` is called:
-
-* When this is called, the graph is finalized and ops can no longer be added
- to the graph.
-* This method will also be called as a result of recovering a wrapped
- session, not only at the beginning of the overall session.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that has been created.
-* <b>`coord`</b>: A Coordinator object which keeps track of all threads.
-
-
-- - -
-
-#### `tf.train.FeedFnHook.after_run(run_context, run_values)` {#FeedFnHook.after_run}
-
-Called after each call to run().
-
-The `run_values` argument contains the results of the ops/tensors requested
-by `before_run()`.
-
-The `run_context` argument is the same one sent to the `before_run` call.
-`run_context.request_stop()` can be called to stop the iteration.
-
-##### Args:
-
-
-* <b>`run_context`</b>: A `SessionRunContext` object.
-* <b>`run_values`</b>: A SessionRunValues object.
-
-
-- - -
-
-#### `tf.train.FeedFnHook.before_run(run_context)` {#FeedFnHook.before_run}
-
-
-
-
-- - -
-
-#### `tf.train.FeedFnHook.begin()` {#FeedFnHook.begin}
-
-Called once before using the session.
-
-When called, the default graph is the one that will be launched in the
-session. The hook can modify the graph by adding new operations to it.
-After the `begin()` call the graph will be finalized and the other callbacks
-can no longer modify the graph. A second call of `begin()` on the same
-graph should not change the graph.
-
-
-- - -
-
-#### `tf.train.FeedFnHook.end(session)` {#FeedFnHook.end}
-
-Called at the end of the session.
-
-The `session` argument can be used in case the hook wants to run final ops,
-such as saving a last checkpoint.
-
-##### Args:
-
-
-* <b>`session`</b>: A TensorFlow Session that will soon be closed.
-
-
-
-- - -
-
-### `tf.train.global_step(sess, global_step_tensor)` {#global_step}
-
-Small helper to get the global step.
-
-```python
-# Creates a variable to hold the global_step.
-global_step_tensor = tf.Variable(10, trainable=False, name='global_step')
-# Creates a session.
-sess = tf.Session()
-# Initializes the variable.
-sess.run(global_step_tensor.initializer)
-print('global_step: %s' % tf.train.global_step(sess, global_step_tensor))
-
-# Prints: global_step: 10
-```
-
-##### Args:
-
-
-* <b>`sess`</b>: A TensorFlow `Session` object.
-* <b>`global_step_tensor`</b>: `Tensor` or the `name` of the operation that contains
- the global step.
-
-##### Returns:
-
- The global step value.
-
-
-- - -
-
-### `tf.train.basic_train_loop(supervisor, train_step_fn, args=None, kwargs=None, master='')` {#basic_train_loop}
-
-Basic loop to train a model.
-
-Calls `train_step_fn` in a loop to train a model. The function is called as:
-
-```python
-train_step_fn(session, *args, **kwargs)
-```
-
-It is passed a `tf.Session` in addition to `args` and `kwargs`. The function
-typically runs one training step in the session.
-
-##### Args:
-
-
-* <b>`supervisor`</b>: `tf.Supervisor` to run the training services.
-* <b>`train_step_fn`</b>: Callable to execute one training step. Called
- repeatedly as `train_step_fn(session, *args, **kwargs)`.
-* <b>`args`</b>: Optional positional arguments passed to `train_step_fn`.
-* <b>`kwargs`</b>: Optional keyword arguments passed to `train_step_fn`.
-* <b>`master`</b>: Master to use to create the training session. Defaults to
- `""` which causes the session to be created in the local process.
-
-
-- - -
-
-### `tf.train.get_global_step(graph=None)` {#get_global_step}
-
-Get the global step tensor.
-
-The global step tensor must be an integer variable. We first try to find it
-in the collection `GLOBAL_STEP`, or by name `global_step:0`.
-
-##### Args:
-
-
-* <b>`graph`</b>: The graph to find the global step in. If missing, use default graph.
-
-##### Returns:
-
- The global step variable, or `None` if none was found.
-
-##### Raises:
-
-
-* <b>`TypeError`</b>: If the global step tensor has a non-integer type, or if it is not
- a `Variable`.
-
-
-- - -
-
-### `tf.train.assert_global_step(global_step_tensor)` {#assert_global_step}
-
-Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.
-
-##### Args:
-
-
-* <b>`global_step_tensor`</b>: `Tensor` to test.
-
-
-- - -
-
-### `tf.train.write_graph(graph_or_graph_def, logdir, name, as_text=True)` {#write_graph}
-
-Writes a graph proto to a file.
-
-The graph is written as a binary proto unless `as_text` is `True`.
-
-```python
-v = tf.Variable(0, name='my_variable')
-sess = tf.Session()
-tf.train.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')
-```
-
-or
-
-```python
-v = tf.Variable(0, name='my_variable')
-sess = tf.Session()
-tf.train.write_graph(sess.graph, '/tmp/my-model', 'train.pbtxt')
-```
-
-##### Args:
-
-
-* <b>`graph_or_graph_def`</b>: A `Graph` or a `GraphDef` protocol buffer.
-* <b>`logdir`</b>: Directory where to write the graph. This can refer to remote
- filesystems, such as Google Cloud Storage (GCS).
-* <b>`name`</b>: Filename for the graph.
-* <b>`as_text`</b>: If `True`, writes the graph as an ASCII proto.
-
-##### Returns:
-
- The path of the output proto file.
-
-
-
-## Other Functions and Classes
-- - -
-
-### `class tf.train.SyncReplicasOptimizer` {#SyncReplicasOptimizer}
-
-Class to synchronize, aggregate gradients and pass them to the optimizer.
-
-In a typical asynchronous training environment, it's common to have some
-stale gradients. For example, with N-replica asynchronous training,
-gradients will be applied to the variables N times independently. Depending
-on each replica's training speed, some gradients might be calculated from
-copies of the variable from several steps back (N-1 steps on average). This
-optimizer avoids stale gradients by collecting gradients from all replicas,
-averaging them, then applying them to the variables in one shot, after
-which replicas can fetch the new variables and continue.
-
-The following accumulators/queue are created:
-
-* N `gradient accumulators`, one per variable to train. Gradients are pushed
- to them and the chief worker will wait until enough gradients are collected
- and then average them before applying to variables. The accumulator will
- drop all stale gradients (more details in the accumulator op).
-* 1 `token` queue where the optimizer pushes the new global_step value after
- all variables are updated.
-
-The following local variable is created:
-* `sync_rep_local_step`, one per replica. Compared against the global_step in
- each accumulator to check for staleness of the gradients.
-
-The optimizer adds nodes to the graph to collect gradients and pause the
-trainers until variables are updated.
-For the Parameter Server job:
-
-1. An accumulator is created for each variable, and each replica pushes the
- gradients into the accumulators instead of directly applying them to the
- variables.
-2. Each accumulator averages once enough gradients (replicas_to_aggregate)
- have been accumulated.
-3. Apply the averaged gradients to the variables.
-4. Only after all variables have been updated, increment the global step.
-5. Only after step 4, pushes `global_step` in the `token_queue`, once for
- each worker replica. The workers can now fetch the global step, use it to
- update their local_step variable, and start the next batch.
-
-For the replicas:
-
-1. Start a step: fetch variables and compute gradients.
-2. Once the gradients have been computed, push them into the gradient
- accumulators. Each accumulator will check the staleness and drop the
- stale ones.
-3. After pushing all the gradients, dequeue an updated value of global_step
- from the token queue and record that step to its local_step variable. Note
- that this is effectively a barrier.
-4. Start the next batch.
-
-### Usage
-
-```python
-# Create any optimizer to update the variables, say a simple SGD:
-opt = GradientDescentOptimizer(learning_rate=0.1)
-
-# Wrap the optimizer with sync_replicas_optimizer with 50 replicas: at each
-# step the optimizer collects 50 gradients before applying to variables.
-# Note that if you want to have 2 backup replicas, you can change
-# total_num_replicas=52 and make sure this number matches how many physical
-# replicas you started in your job.
-opt = tf.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=50,
- total_num_replicas=50)
-
-# Some models have startup_delays to help stabilize the model but when using
-# sync_replicas training, set it to 0.
-
-# Now you can call `minimize()` or `compute_gradients()` and
-# `apply_gradients()` normally
-training_op = opt.minimize(total_loss, global_step=self.global_step)
-
-
-# You can create the hook which handles initialization and queues.
-sync_replicas_hook = opt.make_session_run_hook(is_chief)
-```
-
-In the training program, every worker will run the training_op as if not
-synchronized.
-
-```python
-with tf.train.MonitoredTrainingSession(
- master=workers[worker_id].target, is_chief=is_chief,
- hooks=[sync_replicas_hook]) as mon_sess:
- while not mon_sess.should_stop():
- mon_sess.run(training_op)
-```
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.__init__(opt, replicas_to_aggregate, total_num_replicas=None, variable_averages=None, variables_to_average=None, use_locking=False, name='sync_replicas')` {#SyncReplicasOptimizer.__init__}
-
-Construct a sync_replicas optimizer.
-
-##### Args:
-
-
-* <b>`opt`</b>: The actual optimizer that will be used to compute and apply the
- gradients. Must be one of the Optimizer classes.
-* <b>`replicas_to_aggregate`</b>: number of replicas to aggregate for each variable
- update.
-* <b>`total_num_replicas`</b>: Total number of tasks/workers/replicas, which can be
- different from `replicas_to_aggregate`.
- If `total_num_replicas > replicas_to_aggregate`: the extra replicas are
- backups (total_num_replicas = replicas_to_aggregate + backup_replicas).
- If `total_num_replicas < replicas_to_aggregate`: replicas compute
- multiple batches per variable update.
-* <b>`variable_averages`</b>: Optional `ExponentialMovingAverage` object, used to
- maintain moving averages for the variables passed in
- `variables_to_average`.
-* <b>`variables_to_average`</b>: a list of variables that need to be averaged. Only
- needed if variable_averages is passed in.
-* <b>`use_locking`</b>: If True use locks for update operation.
-* <b>`name`</b>: string. Optional name of the returned operation.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.compute_gradients(*args, **kwargs)` {#SyncReplicasOptimizer.compute_gradients}
-
-Compute gradients of "loss" for the variables in "var_list".
-
-This simply wraps the compute_gradients() from the real optimizer. The
-gradients will be aggregated in apply_gradients() so that the user can
-modify the gradients, for example clipping with a per-replica global norm,
-if needed. Clipping with the global norm of the aggregated gradients can be
-bad, as one replica's huge gradients can hurt the gradients from the other
-replicas.
-
-##### Args:
-
-
-* <b>`*args`</b>: Arguments for compute_gradients().
-* <b>`**kwargs`</b>: Keyword arguments for compute_gradients().
-
-##### Returns:
-
- A list of (gradient, variable) pairs.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#SyncReplicasOptimizer.apply_gradients}
-
-Apply gradients to variables.
-
-This contains most of the synchronization implementation and also wraps the
-apply_gradients() from the real optimizer.
-
-##### Args:
-
-
-* <b>`grads_and_vars`</b>: List of (gradient, variable) pairs as returned by
- compute_gradients().
-* <b>`global_step`</b>: Optional Variable to increment by one after the
- variables have been updated.
-* <b>`name`</b>: Optional name for the returned operation. Default to the
- name passed to the Optimizer constructor.
-
-##### Returns:
-
-
-* <b>`train_op`</b>: The op to dequeue a token so the replicas can exit this batch
- and start the next one. This is executed by each replica.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If `grads_and_vars` is empty.
-* <b>`ValueError`</b>: If `global_step` is not provided, since the staleness cannot
- be checked without it.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.get_chief_queue_runner()` {#SyncReplicasOptimizer.get_chief_queue_runner}
-
-Returns the QueueRunner for the chief to execute.
-
-This includes the operations to synchronize replicas: aggregate gradients,
-apply to variables, increment global step, insert tokens to token queue.
-
-Note that this can only be called after calling apply_gradients(), which
-actually generates this queue runner.
-
-##### Returns:
-
- A `QueueRunner` for chief to execute.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If this is called before apply_gradients().
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.get_init_tokens_op(num_tokens=-1)` {#SyncReplicasOptimizer.get_init_tokens_op}
-
-Returns the op to fill the sync_token_queue with the tokens.
-
-This is supposed to be executed at the beginning of the chief/sync thread
-so that even if `total_num_replicas` is less than `replicas_to_aggregate`,
-the model can still proceed as the replicas can compute multiple steps per
-variable update. Make sure:
-`num_tokens >= replicas_to_aggregate - total_num_replicas`.
-
-##### Args:
-
-
-* <b>`num_tokens`</b>: Number of tokens to add to the queue.
-
-##### Returns:
-
- An op for the chief/sync replica to fill the token queue.
-
-##### Raises:
-
-
-* <b>`ValueError`</b>: If this is called before apply_gradients().
-* <b>`ValueError`</b>: If `num_tokens` is smaller than
- `replicas_to_aggregate - total_num_replicas`.
-
-
-
-#### Other Methods
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.get_slot(*args, **kwargs)` {#SyncReplicasOptimizer.get_slot}
-
-Return a slot named "name" created for "var" by the Optimizer.
-
-This simply wraps the get_slot() from the actual optimizer.
-
-##### Args:
-
-
-* <b>`*args`</b>: Arguments for get_slot().
-* <b>`**kwargs`</b>: Keyword arguments for get_slot().
-
-##### Returns:
-
- The `Variable` for the slot if it was created, `None` otherwise.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.get_slot_names(*args, **kwargs)` {#SyncReplicasOptimizer.get_slot_names}
-
-Return a list of the names of slots created by the `Optimizer`.
-
-This simply wraps the get_slot_names() from the actual optimizer.
-
-##### Args:
-
-
-* <b>`*args`</b>: Arguments for get_slot_names().
-* <b>`**kwargs`</b>: Keyword arguments for get_slot_names().
-
-##### Returns:
-
- A list of strings.
-
-
-- - -
-
-#### `tf.train.SyncReplicasOptimizer.make_session_run_hook(is_chief, num_tokens=-1)` {#SyncReplicasOptimizer.make_session_run_hook}
-
-Creates a hook to handle SyncReplicasOptimizer ops such as initialization.
-
-
-
-- - -
-
-### `tf.train.checkpoint_exists(checkpoint_prefix)` {#checkpoint_exists}
-
-Checks whether a V1 or V2 checkpoint exists with the specified prefix.
-
-This is the recommended way to check if a checkpoint exists, since it takes
-into account the naming difference between V1 and V2 formats.
-
-##### Args:
-
-
-* <b>`checkpoint_prefix`</b>: the prefix of a V1 or V2 checkpoint, with V2 taking
- priority. Typically the result of `Saver.save()` or that of
- `tf.train.latest_checkpoint()`, regardless of sharded/non-sharded or
- V1/V2.
-
-##### Returns:
-
- A bool, true iff a checkpoint referred to by `checkpoint_prefix` exists.
-
-
-- - -
-
-### `tf.train.do_quantize_training_on_graphdef(input_graph, num_bits)` {#do_quantize_training_on_graphdef}
-
-
-
-
-- - -
-
-### `tf.train.generate_checkpoint_state_proto(save_dir, model_checkpoint_path, all_model_checkpoint_paths=None)` {#generate_checkpoint_state_proto}
-
-Generates a checkpoint state proto.
-
-##### Args:
-
-
-* <b>`save_dir`</b>: Directory where the model was saved.
-* <b>`model_checkpoint_path`</b>: The checkpoint file.
-* <b>`all_model_checkpoint_paths`</b>: List of strings. Paths to all not-yet-deleted
- checkpoints, sorted from oldest to newest. If this is a non-empty list,
- the last element must be equal to model_checkpoint_path. These paths
- are also saved in the CheckpointState proto.
-
-##### Returns:
-
- CheckpointState proto with model_checkpoint_path and
- all_model_checkpoint_paths updated to either absolute paths or
- relative paths to the current save_dir.
-
-
-- - -
-
-### `tf.train.get_checkpoint_mtimes(checkpoint_prefixes)` {#get_checkpoint_mtimes}
-
-Returns the mtimes (modification timestamps) of the checkpoints.
-
-Globs for the checkpoints pointed to by `checkpoint_prefixes`. If the files
-exist, collect their mtime. Both V2 and V1 checkpoints are considered, in
-that priority.
-
-This is the recommended way to get the mtimes, since it takes into account
-the naming difference between V1 and V2 formats.
-
-##### Args:
-
-
-* <b>`checkpoint_prefixes`</b>: a list of checkpoint paths, typically the results of
- `Saver.save()` or those of `tf.train.latest_checkpoint()`, regardless of
- sharded/non-sharded or V1/V2.
-
-##### Returns:
-
- A list of mtimes (in microseconds) of the found checkpoints.
-
-